May 15 00:28:36.887767 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 15 00:28:36.887788 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed May 14 22:53:13 -00 2025 May 15 00:28:36.887798 kernel: KASLR enabled May 15 00:28:36.887804 kernel: efi: EFI v2.7 by EDK II May 15 00:28:36.887809 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 May 15 00:28:36.887815 kernel: random: crng init done May 15 00:28:36.887822 kernel: ACPI: Early table checksum verification disabled May 15 00:28:36.887828 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) May 15 00:28:36.887834 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) May 15 00:28:36.887842 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:28:36.887858 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:28:36.887865 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:28:36.887871 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:28:36.887877 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:28:36.887884 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:28:36.887893 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:28:36.887899 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:28:36.887906 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:28:36.887912 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 15 00:28:36.887919 kernel: NUMA: Failed to 
initialise from firmware May 15 00:28:36.887925 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 15 00:28:36.887931 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] May 15 00:28:36.887938 kernel: Zone ranges: May 15 00:28:36.887944 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 15 00:28:36.887950 kernel: DMA32 empty May 15 00:28:36.887958 kernel: Normal empty May 15 00:28:36.887964 kernel: Movable zone start for each node May 15 00:28:36.887970 kernel: Early memory node ranges May 15 00:28:36.887977 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] May 15 00:28:36.887984 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] May 15 00:28:36.887990 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] May 15 00:28:36.887997 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] May 15 00:28:36.888003 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] May 15 00:28:36.888009 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] May 15 00:28:36.888015 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 15 00:28:36.888022 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 15 00:28:36.888028 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 15 00:28:36.888035 kernel: psci: probing for conduit method from ACPI. May 15 00:28:36.888042 kernel: psci: PSCIv1.1 detected in firmware. 
May 15 00:28:36.888049 kernel: psci: Using standard PSCI v0.2 function IDs May 15 00:28:36.888058 kernel: psci: Trusted OS migration not required May 15 00:28:36.888065 kernel: psci: SMC Calling Convention v1.1 May 15 00:28:36.888072 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 15 00:28:36.888080 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 15 00:28:36.888087 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 15 00:28:36.888093 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 15 00:28:36.888100 kernel: Detected PIPT I-cache on CPU0 May 15 00:28:36.888107 kernel: CPU features: detected: GIC system register CPU interface May 15 00:28:36.888113 kernel: CPU features: detected: Hardware dirty bit management May 15 00:28:36.888120 kernel: CPU features: detected: Spectre-v4 May 15 00:28:36.888127 kernel: CPU features: detected: Spectre-BHB May 15 00:28:36.888133 kernel: CPU features: kernel page table isolation forced ON by KASLR May 15 00:28:36.888140 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 15 00:28:36.888148 kernel: CPU features: detected: ARM erratum 1418040 May 15 00:28:36.888155 kernel: CPU features: detected: SSBS not fully self-synchronizing May 15 00:28:36.888162 kernel: alternatives: applying boot alternatives May 15 00:28:36.888169 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3ad4d2a855aaa69496d8c2bf8d7e3c4212e29ec2df18e8282fb10689c3032596 May 15 00:28:36.888177 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
May 15 00:28:36.888183 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 15 00:28:36.888190 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 00:28:36.888197 kernel: Fallback order for Node 0: 0 May 15 00:28:36.888204 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 15 00:28:36.888210 kernel: Policy zone: DMA May 15 00:28:36.888217 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 15 00:28:36.888267 kernel: software IO TLB: area num 4. May 15 00:28:36.888274 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) May 15 00:28:36.888282 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved) May 15 00:28:36.888289 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 15 00:28:36.888296 kernel: rcu: Preemptible hierarchical RCU implementation. May 15 00:28:36.888303 kernel: rcu: RCU event tracing is enabled. May 15 00:28:36.888310 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 15 00:28:36.888318 kernel: Trampoline variant of Tasks RCU enabled. May 15 00:28:36.888325 kernel: Tracing variant of Tasks RCU enabled. May 15 00:28:36.888335 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 15 00:28:36.888349 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 15 00:28:36.888365 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 15 00:28:36.888376 kernel: GICv3: 256 SPIs implemented May 15 00:28:36.888383 kernel: GICv3: 0 Extended SPIs implemented May 15 00:28:36.888390 kernel: Root IRQ handler: gic_handle_irq May 15 00:28:36.888397 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 15 00:28:36.888404 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 15 00:28:36.888411 kernel: ITS [mem 0x08080000-0x0809ffff] May 15 00:28:36.888418 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) May 15 00:28:36.888425 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) May 15 00:28:36.888433 kernel: GICv3: using LPI property table @0x00000000400f0000 May 15 00:28:36.888440 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 May 15 00:28:36.888447 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 15 00:28:36.888456 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:28:36.888464 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 15 00:28:36.888474 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 15 00:28:36.888483 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 15 00:28:36.888493 kernel: arm-pv: using stolen time PV May 15 00:28:36.888500 kernel: Console: colour dummy device 80x25 May 15 00:28:36.888507 kernel: ACPI: Core revision 20230628 May 15 00:28:36.888514 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) May 15 00:28:36.888522 kernel: pid_max: default: 32768 minimum: 301 May 15 00:28:36.888528 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 15 00:28:36.888537 kernel: landlock: Up and running. May 15 00:28:36.888543 kernel: SELinux: Initializing. May 15 00:28:36.888550 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 00:28:36.888557 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 00:28:36.888564 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 15 00:28:36.888572 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 15 00:28:36.888579 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 15 00:28:36.888586 kernel: rcu: Hierarchical SRCU implementation. May 15 00:28:36.888593 kernel: rcu: Max phase no-delay instances is 400. May 15 00:28:36.888601 kernel: Platform MSI: ITS@0x8080000 domain created May 15 00:28:36.888608 kernel: PCI/MSI: ITS@0x8080000 domain created May 15 00:28:36.888615 kernel: Remapping and enabling EFI services. May 15 00:28:36.888622 kernel: smp: Bringing up secondary CPUs ... 
May 15 00:28:36.888628 kernel: Detected PIPT I-cache on CPU1 May 15 00:28:36.888635 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 15 00:28:36.888642 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 May 15 00:28:36.888649 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:28:36.888656 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 15 00:28:36.888664 kernel: Detected PIPT I-cache on CPU2 May 15 00:28:36.888671 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 15 00:28:36.888678 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 May 15 00:28:36.888690 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:28:36.888698 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 15 00:28:36.888705 kernel: Detected PIPT I-cache on CPU3 May 15 00:28:36.888712 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 15 00:28:36.888720 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 May 15 00:28:36.888727 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:28:36.888734 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 15 00:28:36.888741 kernel: smp: Brought up 1 node, 4 CPUs May 15 00:28:36.888750 kernel: SMP: Total of 4 processors activated. 
May 15 00:28:36.888757 kernel: CPU features: detected: 32-bit EL0 Support May 15 00:28:36.888764 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 15 00:28:36.888771 kernel: CPU features: detected: Common not Private translations May 15 00:28:36.888778 kernel: CPU features: detected: CRC32 instructions May 15 00:28:36.888785 kernel: CPU features: detected: Enhanced Virtualization Traps May 15 00:28:36.888794 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 15 00:28:36.888801 kernel: CPU features: detected: LSE atomic instructions May 15 00:28:36.888809 kernel: CPU features: detected: Privileged Access Never May 15 00:28:36.888816 kernel: CPU features: detected: RAS Extension Support May 15 00:28:36.888823 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 15 00:28:36.888830 kernel: CPU: All CPU(s) started at EL1 May 15 00:28:36.888837 kernel: alternatives: applying system-wide alternatives May 15 00:28:36.888844 kernel: devtmpfs: initialized May 15 00:28:36.888857 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 15 00:28:36.888864 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 15 00:28:36.888874 kernel: pinctrl core: initialized pinctrl subsystem May 15 00:28:36.888881 kernel: SMBIOS 3.0.0 present. 
May 15 00:28:36.888889 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 May 15 00:28:36.888896 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 15 00:28:36.888903 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 15 00:28:36.888911 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 15 00:28:36.888918 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 15 00:28:36.888925 kernel: audit: initializing netlink subsys (disabled) May 15 00:28:36.888933 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1 May 15 00:28:36.888941 kernel: thermal_sys: Registered thermal governor 'step_wise' May 15 00:28:36.888948 kernel: cpuidle: using governor menu May 15 00:28:36.888955 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 15 00:28:36.888963 kernel: ASID allocator initialised with 32768 entries May 15 00:28:36.888970 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 15 00:28:36.888977 kernel: Serial: AMBA PL011 UART driver May 15 00:28:36.888985 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 15 00:28:36.888992 kernel: Modules: 0 pages in range for non-PLT usage May 15 00:28:36.888999 kernel: Modules: 509008 pages in range for PLT usage May 15 00:28:36.889008 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 15 00:28:36.889015 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 15 00:28:36.889023 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 15 00:28:36.889030 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 15 00:28:36.889037 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 15 00:28:36.889045 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 15 00:28:36.889052 kernel: HugeTLB: registered 64.0 KiB page size, 
pre-allocated 0 pages May 15 00:28:36.889059 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 15 00:28:36.889066 kernel: ACPI: Added _OSI(Module Device) May 15 00:28:36.889074 kernel: ACPI: Added _OSI(Processor Device) May 15 00:28:36.889082 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 15 00:28:36.889089 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 15 00:28:36.889096 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 15 00:28:36.889103 kernel: ACPI: Interpreter enabled May 15 00:28:36.889110 kernel: ACPI: Using GIC for interrupt routing May 15 00:28:36.889117 kernel: ACPI: MCFG table detected, 1 entries May 15 00:28:36.889125 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 15 00:28:36.889132 kernel: printk: console [ttyAMA0] enabled May 15 00:28:36.889140 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 15 00:28:36.889290 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 00:28:36.889371 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 15 00:28:36.889439 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 15 00:28:36.889503 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 15 00:28:36.889566 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 15 00:28:36.889576 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 15 00:28:36.889586 kernel: PCI host bridge to bus 0000:00 May 15 00:28:36.889657 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 15 00:28:36.889716 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 15 00:28:36.889775 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 15 00:28:36.889833 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 15 
00:28:36.889926 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 15 00:28:36.890008 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 15 00:28:36.890080 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 15 00:28:36.890152 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 15 00:28:36.890233 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 15 00:28:36.890306 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 15 00:28:36.890372 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 15 00:28:36.890437 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 15 00:28:36.890516 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 15 00:28:36.890584 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 15 00:28:36.890642 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 15 00:28:36.890652 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 15 00:28:36.890659 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 15 00:28:36.890667 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 15 00:28:36.890674 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 15 00:28:36.890681 kernel: iommu: Default domain type: Translated May 15 00:28:36.890688 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 15 00:28:36.890698 kernel: efivars: Registered efivars operations May 15 00:28:36.890705 kernel: vgaarb: loaded May 15 00:28:36.890712 kernel: clocksource: Switched to clocksource arch_sys_counter May 15 00:28:36.890719 kernel: VFS: Disk quotas dquot_6.6.0 May 15 00:28:36.890727 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 15 00:28:36.890734 kernel: pnp: PnP ACPI init May 15 00:28:36.890805 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 15 
00:28:36.890816 kernel: pnp: PnP ACPI: found 1 devices May 15 00:28:36.890825 kernel: NET: Registered PF_INET protocol family May 15 00:28:36.890832 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 15 00:28:36.890839 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 15 00:28:36.890854 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 15 00:28:36.890862 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 15 00:28:36.890869 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 15 00:28:36.890877 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 15 00:28:36.890884 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 00:28:36.890891 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 00:28:36.890901 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 15 00:28:36.890908 kernel: PCI: CLS 0 bytes, default 64 May 15 00:28:36.890915 kernel: kvm [1]: HYP mode not available May 15 00:28:36.890922 kernel: Initialise system trusted keyrings May 15 00:28:36.890930 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 15 00:28:36.890937 kernel: Key type asymmetric registered May 15 00:28:36.890944 kernel: Asymmetric key parser 'x509' registered May 15 00:28:36.890951 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 15 00:28:36.890959 kernel: io scheduler mq-deadline registered May 15 00:28:36.890967 kernel: io scheduler kyber registered May 15 00:28:36.890974 kernel: io scheduler bfq registered May 15 00:28:36.890982 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 15 00:28:36.890989 kernel: ACPI: button: Power Button [PWRB] May 15 00:28:36.890997 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 15 00:28:36.891068 kernel: virtio-pci 
0000:00:01.0: enabling device (0005 -> 0007) May 15 00:28:36.891078 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 15 00:28:36.891085 kernel: thunder_xcv, ver 1.0 May 15 00:28:36.891093 kernel: thunder_bgx, ver 1.0 May 15 00:28:36.891102 kernel: nicpf, ver 1.0 May 15 00:28:36.891109 kernel: nicvf, ver 1.0 May 15 00:28:36.891185 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 15 00:28:36.891275 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-15T00:28:36 UTC (1747268916) May 15 00:28:36.891286 kernel: hid: raw HID events driver (C) Jiri Kosina May 15 00:28:36.891293 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 15 00:28:36.891301 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 15 00:28:36.891308 kernel: watchdog: Hard watchdog permanently disabled May 15 00:28:36.891318 kernel: NET: Registered PF_INET6 protocol family May 15 00:28:36.891325 kernel: Segment Routing with IPv6 May 15 00:28:36.891333 kernel: In-situ OAM (IOAM) with IPv6 May 15 00:28:36.891340 kernel: NET: Registered PF_PACKET protocol family May 15 00:28:36.891347 kernel: Key type dns_resolver registered May 15 00:28:36.891355 kernel: registered taskstats version 1 May 15 00:28:36.891362 kernel: Loading compiled-in X.509 certificates May 15 00:28:36.891369 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 6afb3c096bffb4980a4bcc170ebe3729821d8e0d' May 15 00:28:36.891376 kernel: Key type .fscrypt registered May 15 00:28:36.891385 kernel: Key type fscrypt-provisioning registered May 15 00:28:36.891393 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 15 00:28:36.891400 kernel: ima: Allocated hash algorithm: sha1 May 15 00:28:36.891407 kernel: ima: No architecture policies found May 15 00:28:36.891414 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 15 00:28:36.891422 kernel: clk: Disabling unused clocks May 15 00:28:36.891429 kernel: Freeing unused kernel memory: 39424K May 15 00:28:36.891436 kernel: Run /init as init process May 15 00:28:36.891443 kernel: with arguments: May 15 00:28:36.891452 kernel: /init May 15 00:28:36.891459 kernel: with environment: May 15 00:28:36.891466 kernel: HOME=/ May 15 00:28:36.891473 kernel: TERM=linux May 15 00:28:36.891480 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 15 00:28:36.891489 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 15 00:28:36.891499 systemd[1]: Detected virtualization kvm. May 15 00:28:36.891507 systemd[1]: Detected architecture arm64. May 15 00:28:36.891516 systemd[1]: Running in initrd. May 15 00:28:36.891523 systemd[1]: No hostname configured, using default hostname. May 15 00:28:36.891531 systemd[1]: Hostname set to . May 15 00:28:36.891539 systemd[1]: Initializing machine ID from VM UUID. May 15 00:28:36.891547 systemd[1]: Queued start job for default target initrd.target. May 15 00:28:36.891554 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 00:28:36.891562 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 00:28:36.891571 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
May 15 00:28:36.891580 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 00:28:36.891588 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 15 00:28:36.891596 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 15 00:28:36.891605 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 15 00:28:36.891614 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 15 00:28:36.891622 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 00:28:36.891631 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 00:28:36.891639 systemd[1]: Reached target paths.target - Path Units. May 15 00:28:36.891647 systemd[1]: Reached target slices.target - Slice Units. May 15 00:28:36.891655 systemd[1]: Reached target swap.target - Swaps. May 15 00:28:36.891663 systemd[1]: Reached target timers.target - Timer Units. May 15 00:28:36.891671 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 15 00:28:36.891678 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 00:28:36.891686 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 15 00:28:36.891694 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 15 00:28:36.891704 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 00:28:36.891711 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 00:28:36.891719 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 00:28:36.891727 systemd[1]: Reached target sockets.target - Socket Units. 
May 15 00:28:36.891735 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 15 00:28:36.891743 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 00:28:36.891751 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 15 00:28:36.891759 systemd[1]: Starting systemd-fsck-usr.service... May 15 00:28:36.891766 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 00:28:36.891776 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 00:28:36.891784 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:28:36.891792 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 15 00:28:36.891800 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 00:28:36.891808 systemd[1]: Finished systemd-fsck-usr.service. May 15 00:28:36.891835 systemd-journald[238]: Collecting audit messages is disabled. May 15 00:28:36.891862 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 00:28:36.891871 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:28:36.891881 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 15 00:28:36.891889 systemd-journald[238]: Journal started May 15 00:28:36.891908 systemd-journald[238]: Runtime Journal (/run/log/journal/30ebc9fea07946b28b4d904b6c6e2ff5) is 5.9M, max 47.3M, 41.4M free. May 15 00:28:36.876944 systemd-modules-load[239]: Inserted module 'overlay' May 15 00:28:36.893741 kernel: Bridge firewalling registered May 15 00:28:36.893759 systemd[1]: Started systemd-journald.service - Journal Service. 
May 15 00:28:36.892510 systemd-modules-load[239]: Inserted module 'br_netfilter' May 15 00:28:36.894877 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 00:28:36.897284 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 00:28:36.911401 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 00:28:36.913106 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 00:28:36.915393 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 00:28:36.917990 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 00:28:36.926200 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 00:28:36.927385 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 00:28:36.930099 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 00:28:36.932651 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:28:36.943388 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 15 00:28:36.945642 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 00:28:36.953190 dracut-cmdline[278]: dracut-dracut-053 May 15 00:28:36.955713 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3ad4d2a855aaa69496d8c2bf8d7e3c4212e29ec2df18e8282fb10689c3032596 May 15 00:28:36.975634 systemd-resolved[280]: Positive Trust Anchors: May 15 00:28:36.975654 systemd-resolved[280]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 00:28:36.975685 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 00:28:36.980712 systemd-resolved[280]: Defaulting to hostname 'linux'. May 15 00:28:36.981734 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 00:28:36.984499 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 00:28:37.021249 kernel: SCSI subsystem initialized May 15 00:28:37.025234 kernel: Loading iSCSI transport class v2.0-870. May 15 00:28:37.032253 kernel: iscsi: registered transport (tcp) May 15 00:28:37.045273 kernel: iscsi: registered transport (qla4xxx) May 15 00:28:37.045316 kernel: QLogic iSCSI HBA Driver May 15 00:28:37.086886 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 15 00:28:37.094375 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 15 00:28:37.111494 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 15 00:28:37.111561 kernel: device-mapper: uevent: version 1.0.3 May 15 00:28:37.111582 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 15 00:28:37.157252 kernel: raid6: neonx8 gen() 15593 MB/s May 15 00:28:37.174242 kernel: raid6: neonx4 gen() 15616 MB/s May 15 00:28:37.191264 kernel: raid6: neonx2 gen() 13142 MB/s May 15 00:28:37.208249 kernel: raid6: neonx1 gen() 10434 MB/s May 15 00:28:37.225236 kernel: raid6: int64x8 gen() 6925 MB/s May 15 00:28:37.242236 kernel: raid6: int64x4 gen() 7327 MB/s May 15 00:28:37.259236 kernel: raid6: int64x2 gen() 6114 MB/s May 15 00:28:37.276237 kernel: raid6: int64x1 gen() 5024 MB/s May 15 00:28:37.276251 kernel: raid6: using algorithm neonx4 gen() 15616 MB/s May 15 00:28:37.293242 kernel: raid6: .... xor() 12290 MB/s, rmw enabled May 15 00:28:37.293256 kernel: raid6: using neon recovery algorithm May 15 00:28:37.298277 kernel: xor: measuring software checksum speed May 15 00:28:37.298293 kernel: 8regs : 19110 MB/sec May 15 00:28:37.299295 kernel: 32regs : 19683 MB/sec May 15 00:28:37.299309 kernel: arm64_neon : 26213 MB/sec May 15 00:28:37.299332 kernel: xor: using function: arm64_neon (26213 MB/sec) May 15 00:28:37.355253 kernel: Btrfs loaded, zoned=no, fsverity=no May 15 00:28:37.369280 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 15 00:28:37.377363 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 00:28:37.389077 systemd-udevd[462]: Using default interface naming scheme 'v255'. May 15 00:28:37.392252 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 00:28:37.403372 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 15 00:28:37.415462 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation May 15 00:28:37.439736 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
May 15 00:28:37.453397 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 00:28:37.492723 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 00:28:37.503596 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 15 00:28:37.515718 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 15 00:28:37.516890 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 15 00:28:37.518091 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 00:28:37.520839 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 00:28:37.529403 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 15 00:28:37.532758 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 15 00:28:37.535288 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 15 00:28:37.535424 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 00:28:37.535436 kernel: GPT:9289727 != 19775487 May 15 00:28:37.535445 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 00:28:37.536517 kernel: GPT:9289727 != 19775487 May 15 00:28:37.536546 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 00:28:37.537392 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 00:28:37.539850 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 15 00:28:37.551648 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 00:28:37.551772 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:28:37.557096 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
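The pair of numbers in "GPT:9289727 != 19775487" above is the backup-header LBA recorded in the primary GPT header versus the disk's actual last LBA: the disk has 19775488 512-byte blocks, so the backup header belongs at LBA 19775487, but it was found recorded at 9289727, the signature of a disk image written onto a larger virtual disk. A sketch of that consistency check (function name illustrative, not kernel code):

```python
# Sketch of the check behind "GPT: Alternate GPT header not at the end
# of the disk": the primary header records where the backup header
# should live, and that must be the disk's last LBA.
def gpt_alt_header_misplaced(total_blocks: int, recorded_alt_lba: int):
    last_lba = total_blocks - 1  # backup GPT header belongs on the last LBA
    return recorded_alt_lba != last_lba, last_lba

# Values from the virtio_blk/GPT log lines above.
misplaced, last_lba = gpt_alt_header_misplaced(19775488, 9289727)
```

This is recoverable, which is why the kernel only warns ("Use GNU Parted to correct GPT errors") and disk-uuid.service later reports the secondary header updated.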
May 15 00:28:37.562661 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (509) May 15 00:28:37.562685 kernel: BTRFS: device fsid c82d3215-8134-4516-8c53-9d29a8823a8c devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (516) May 15 00:28:37.559749 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 00:28:37.559921 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:28:37.561871 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:28:37.573471 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:28:37.584577 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:28:37.589264 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 15 00:28:37.593556 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 15 00:28:37.597898 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 00:28:37.601495 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 15 00:28:37.602430 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 15 00:28:37.615431 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 15 00:28:37.617110 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 00:28:37.623494 disk-uuid[550]: Primary Header is updated. May 15 00:28:37.623494 disk-uuid[550]: Secondary Entries is updated. May 15 00:28:37.623494 disk-uuid[550]: Secondary Header is updated. 
May 15 00:28:37.628240 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 00:28:37.635632 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:28:38.641249 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 00:28:38.641551 disk-uuid[552]: The operation has completed successfully. May 15 00:28:38.661782 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 00:28:38.661886 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 15 00:28:38.681360 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 15 00:28:38.684050 sh[572]: Success May 15 00:28:38.697244 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 15 00:28:38.724193 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 15 00:28:38.736373 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 15 00:28:38.740261 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 15 00:28:38.749916 kernel: BTRFS info (device dm-0): first mount of filesystem c82d3215-8134-4516-8c53-9d29a8823a8c May 15 00:28:38.749947 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 15 00:28:38.749958 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 15 00:28:38.749968 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 15 00:28:38.751241 kernel: BTRFS info (device dm-0): using free space tree May 15 00:28:38.754345 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 15 00:28:38.755462 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 15 00:28:38.762368 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
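verity-setup.service above assembles /dev/mapper/usr as a dm-verity target: every data block read from the /usr partition is hashed and checked against a precomputed hash tree, using the accelerated "sha256-ce" implementation reported by the kernel. The per-block idea reduces to the following toy sketch (not the dm-verity on-disk format):

```python
import hashlib

def verity_block_ok(block: bytes, expected_digest: str) -> bool:
    # dm-verity fails a read whose block hash does not match the tree.
    return hashlib.sha256(block).hexdigest() == expected_digest

# Toy 4 KiB data block and its known-good digest.
block = b"\x00" * 4096
good = hashlib.sha256(block).hexdigest()
```

In the real target the expected digests themselves are hashed upward into a tree whose root is passed on the kernel command line, so a single trusted value authenticates the whole read-only /usr image.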
May 15 00:28:38.764420 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 15 00:28:38.770880 kernel: BTRFS info (device vda6): first mount of filesystem 472de571-4852-412e-83c6-4e5fddef810b May 15 00:28:38.770922 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 00:28:38.770932 kernel: BTRFS info (device vda6): using free space tree May 15 00:28:38.773266 kernel: BTRFS info (device vda6): auto enabling async discard May 15 00:28:38.780067 systemd[1]: mnt-oem.mount: Deactivated successfully. May 15 00:28:38.781780 kernel: BTRFS info (device vda6): last unmount of filesystem 472de571-4852-412e-83c6-4e5fddef810b May 15 00:28:38.786331 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 15 00:28:38.792383 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 15 00:28:38.856719 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 00:28:38.868378 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 15 00:28:38.887690 ignition[660]: Ignition 2.19.0 May 15 00:28:38.887699 ignition[660]: Stage: fetch-offline May 15 00:28:38.887747 ignition[660]: no configs at "/usr/lib/ignition/base.d" May 15 00:28:38.887755 ignition[660]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:28:38.887972 ignition[660]: parsed url from cmdline: "" May 15 00:28:38.887976 ignition[660]: no config URL provided May 15 00:28:38.887980 ignition[660]: reading system config file "/usr/lib/ignition/user.ign" May 15 00:28:38.887988 ignition[660]: no config at "/usr/lib/ignition/user.ign" May 15 00:28:38.888009 ignition[660]: op(1): [started] loading QEMU firmware config module May 15 00:28:38.888014 ignition[660]: op(1): executing: "modprobe" "qemu_fw_cfg" May 15 00:28:38.895428 ignition[660]: op(1): [finished] loading QEMU firmware config module May 15 00:28:38.895450 ignition[660]: QEMU firmware config was not found. Ignoring... May 15 00:28:38.897600 systemd-networkd[762]: lo: Link UP May 15 00:28:38.897613 systemd-networkd[762]: lo: Gained carrier May 15 00:28:38.898308 systemd-networkd[762]: Enumeration completed May 15 00:28:38.898389 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 00:28:38.898696 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 00:28:38.898699 systemd-networkd[762]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 00:28:38.900473 systemd[1]: Reached target network.target - Network. May 15 00:28:38.900641 systemd-networkd[762]: eth0: Link UP May 15 00:28:38.900645 systemd-networkd[762]: eth0: Gained carrier May 15 00:28:38.900652 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
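The "zz-default.network" unit that eth0 matches above is Flatcar's catch-all fallback: it sorts last (hence the "zz" prefix), matches any interface no earlier unit claimed, and enables DHCP, which is why the log flags the match as based on a "potentially unpredictable interface name". Its shape is roughly the following (contents assumed for illustration, not copied from this system):

```ini
# Assumed sketch of a catch-all fallback unit like zz-default.network
[Match]
Name=*

[Network]
DHCP=yes
```

A more specific unit matching by MAC address or a stable name would take precedence and silence the warning.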
May 15 00:28:38.916274 systemd-networkd[762]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 00:28:38.942504 ignition[660]: parsing config with SHA512: 9ad678584cd2c3297eaa1a411c80eb0fae12629a5aecee070b5386edcf2dc3d4bf8851e8b8c9d8c4505959a295e72b874454f594d5664f33a72c78c37545bf13 May 15 00:28:38.948112 unknown[660]: fetched base config from "system" May 15 00:28:38.948123 unknown[660]: fetched user config from "qemu" May 15 00:28:38.948559 ignition[660]: fetch-offline: fetch-offline passed May 15 00:28:38.950292 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 15 00:28:38.948617 ignition[660]: Ignition finished successfully May 15 00:28:38.951406 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 15 00:28:38.965353 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 15 00:28:38.975480 ignition[768]: Ignition 2.19.0 May 15 00:28:38.975490 ignition[768]: Stage: kargs May 15 00:28:38.975642 ignition[768]: no configs at "/usr/lib/ignition/base.d" May 15 00:28:38.975650 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:28:38.976562 ignition[768]: kargs: kargs passed May 15 00:28:38.976605 ignition[768]: Ignition finished successfully May 15 00:28:38.979743 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 15 00:28:38.987378 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 15 00:28:38.996700 ignition[776]: Ignition 2.19.0 May 15 00:28:38.996711 ignition[776]: Stage: disks May 15 00:28:38.996893 ignition[776]: no configs at "/usr/lib/ignition/base.d" May 15 00:28:38.999644 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
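The "parsing config with SHA512: 9ad6…" line above is Ignition fingerprinting the merged config before acting on it; any byte of difference in the config yields a different digest. A toy illustration with hashlib (toy payload, so this digest necessarily differs from the one in the log):

```python
import hashlib

# Toy payload standing in for an Ignition config; the digest in the log
# was computed over the actual merged config bytes, not this string.
config_bytes = b'{"ignition": {"version": "3.4.0"}}'
digest = hashlib.sha512(config_bytes).hexdigest()
```

The digest is deterministic for a given byte sequence, which makes it useful for correlating which exact config a boot applied.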
May 15 00:28:38.996903 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:28:39.000595 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 15 00:28:38.997788 ignition[776]: disks: disks passed May 15 00:28:39.001990 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 15 00:28:38.997830 ignition[776]: Ignition finished successfully May 15 00:28:39.003698 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 00:28:39.005227 systemd[1]: Reached target sysinit.target - System Initialization. May 15 00:28:39.006318 systemd[1]: Reached target basic.target - Basic System. May 15 00:28:39.008521 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 15 00:28:39.027129 systemd-fsck[787]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 15 00:28:39.087217 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 15 00:28:39.094369 systemd[1]: Mounting sysroot.mount - /sysroot... May 15 00:28:39.139232 kernel: EXT4-fs (vda9): mounted filesystem 5a01cbd3-e7cb-4475-87b3-07e348161203 r/w with ordered data mode. Quota mode: none. May 15 00:28:39.139646 systemd[1]: Mounted sysroot.mount - /sysroot. May 15 00:28:39.140898 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 15 00:28:39.158470 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 00:28:39.160643 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 15 00:28:39.163345 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
May 15 00:28:39.170190 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (795) May 15 00:28:39.170233 kernel: BTRFS info (device vda6): first mount of filesystem 472de571-4852-412e-83c6-4e5fddef810b May 15 00:28:39.170245 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 00:28:39.170254 kernel: BTRFS info (device vda6): using free space tree May 15 00:28:39.170264 kernel: BTRFS info (device vda6): auto enabling async discard May 15 00:28:39.163387 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 00:28:39.163409 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 15 00:28:39.168052 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 15 00:28:39.175468 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 00:28:39.193416 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 15 00:28:39.240914 initrd-setup-root[819]: cut: /sysroot/etc/passwd: No such file or directory May 15 00:28:39.247939 initrd-setup-root[826]: cut: /sysroot/etc/group: No such file or directory May 15 00:28:39.252098 initrd-setup-root[833]: cut: /sysroot/etc/shadow: No such file or directory May 15 00:28:39.255558 initrd-setup-root[840]: cut: /sysroot/etc/gshadow: No such file or directory May 15 00:28:39.352274 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 15 00:28:39.363367 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 15 00:28:39.365900 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
May 15 00:28:39.370639 kernel: BTRFS info (device vda6): last unmount of filesystem 472de571-4852-412e-83c6-4e5fddef810b May 15 00:28:39.386389 ignition[908]: INFO : Ignition 2.19.0 May 15 00:28:39.386389 ignition[908]: INFO : Stage: mount May 15 00:28:39.387854 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:28:39.387854 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:28:39.387854 ignition[908]: INFO : mount: mount passed May 15 00:28:39.387854 ignition[908]: INFO : Ignition finished successfully May 15 00:28:39.388538 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 15 00:28:39.389441 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 00:28:39.400353 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 00:28:39.748540 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 15 00:28:39.758354 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 00:28:39.764363 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (922) May 15 00:28:39.764405 kernel: BTRFS info (device vda6): first mount of filesystem 472de571-4852-412e-83c6-4e5fddef810b May 15 00:28:39.764426 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 00:28:39.765527 kernel: BTRFS info (device vda6): using free space tree May 15 00:28:39.767238 kernel: BTRFS info (device vda6): auto enabling async discard May 15 00:28:39.768472 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 15 00:28:39.788602 ignition[939]: INFO : Ignition 2.19.0 May 15 00:28:39.788602 ignition[939]: INFO : Stage: files May 15 00:28:39.790190 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:28:39.790190 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:28:39.790190 ignition[939]: DEBUG : files: compiled without relabeling support, skipping May 15 00:28:39.793595 ignition[939]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 00:28:39.793595 ignition[939]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 00:28:39.796503 ignition[939]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 00:28:39.797828 ignition[939]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 00:28:39.797828 ignition[939]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 00:28:39.797051 unknown[939]: wrote ssh authorized keys file for user: core May 15 00:28:39.801492 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 15 00:28:39.801492 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 15 00:28:39.889632 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 00:28:40.193830 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 15 00:28:40.193830 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 15 00:28:40.197551 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 15 00:28:40.506448 systemd-networkd[762]: eth0: Gained IPv6LL May 15 00:28:40.548189 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 15 00:28:40.648044 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 15 00:28:40.650040 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 15 00:28:40.650040 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 15 00:28:40.650040 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 00:28:40.650040 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 00:28:40.650040 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 00:28:40.650040 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 00:28:40.650040 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 00:28:40.650040 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 00:28:40.650040 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 00:28:40.650040 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 00:28:40.650040 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 15 00:28:40.650040 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 15 00:28:40.650040 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 15 00:28:40.650040 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 May 15 00:28:40.847355 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 15 00:28:41.202373 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 15 00:28:41.202373 ignition[939]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 15 00:28:41.206043 ignition[939]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 00:28:41.206043 ignition[939]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 00:28:41.206043 ignition[939]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 15 00:28:41.206043 ignition[939]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 15 00:28:41.206043 ignition[939]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 00:28:41.206043 ignition[939]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 15 00:28:41.206043 ignition[939]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 15 00:28:41.206043 ignition[939]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 15 00:28:41.229183 ignition[939]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 15 00:28:41.233085 ignition[939]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 15 00:28:41.234347 ignition[939]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 15 00:28:41.234347 ignition[939]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 15 00:28:41.234347 ignition[939]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 15 00:28:41.234347 ignition[939]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 00:28:41.234347 ignition[939]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 00:28:41.234347 ignition[939]: INFO : files: files passed May 15 00:28:41.234347 ignition[939]: INFO : Ignition finished successfully May 15 00:28:41.237889 systemd[1]: Finished ignition-files.service - Ignition (files). May 15 00:28:41.260428 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 00:28:41.263331 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 15 00:28:41.265008 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 00:28:41.265090 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 15 00:28:41.274905 initrd-setup-root-after-ignition[967]: grep: /sysroot/oem/oem-release: No such file or directory May 15 00:28:41.278273 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 00:28:41.278273 initrd-setup-root-after-ignition[969]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 15 00:28:41.281049 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 00:28:41.282173 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 00:28:41.283358 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 15 00:28:41.290449 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 15 00:28:41.314398 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 00:28:41.315412 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 15 00:28:41.316749 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 15 00:28:41.318643 systemd[1]: Reached target initrd.target - Initrd Default Target. May 15 00:28:41.320430 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 15 00:28:41.321145 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 15 00:28:41.337640 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 00:28:41.351394 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 15 00:28:41.360007 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 15 00:28:41.361298 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 00:28:41.363420 systemd[1]: Stopped target timers.target - Timer Units. 
May 15 00:28:41.365198 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 00:28:41.365350 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 00:28:41.367872 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 15 00:28:41.369924 systemd[1]: Stopped target basic.target - Basic System. May 15 00:28:41.371557 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 15 00:28:41.373292 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 15 00:28:41.375253 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 15 00:28:41.377305 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 15 00:28:41.379175 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 15 00:28:41.381142 systemd[1]: Stopped target sysinit.target - System Initialization. May 15 00:28:41.383103 systemd[1]: Stopped target local-fs.target - Local File Systems. May 15 00:28:41.384800 systemd[1]: Stopped target swap.target - Swaps. May 15 00:28:41.386286 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 00:28:41.386415 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 15 00:28:41.388731 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 15 00:28:41.390613 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 00:28:41.392461 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 15 00:28:41.394346 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 00:28:41.395630 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 00:28:41.395754 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 15 00:28:41.398494 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
May 15 00:28:41.398624 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 15 00:28:41.400507 systemd[1]: Stopped target paths.target - Path Units. May 15 00:28:41.402048 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 00:28:41.404835 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 00:28:41.406086 systemd[1]: Stopped target slices.target - Slice Units. May 15 00:28:41.408158 systemd[1]: Stopped target sockets.target - Socket Units. May 15 00:28:41.409741 systemd[1]: iscsid.socket: Deactivated successfully. May 15 00:28:41.409839 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 15 00:28:41.411404 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 00:28:41.411489 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 00:28:41.413128 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 00:28:41.413249 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 00:28:41.415109 systemd[1]: ignition-files.service: Deactivated successfully. May 15 00:28:41.415237 systemd[1]: Stopped ignition-files.service - Ignition (files). May 15 00:28:41.432419 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 15 00:28:41.433309 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 00:28:41.433464 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 15 00:28:41.440133 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 15 00:28:41.440889 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
May 15 00:28:41.443395 ignition[993]: INFO : Ignition 2.19.0 May 15 00:28:41.443395 ignition[993]: INFO : Stage: umount May 15 00:28:41.443395 ignition[993]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:28:41.443395 ignition[993]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:28:41.441018 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 15 00:28:41.449359 ignition[993]: INFO : umount: umount passed May 15 00:28:41.449359 ignition[993]: INFO : Ignition finished successfully May 15 00:28:41.442620 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 00:28:41.442719 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 15 00:28:41.447175 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 00:28:41.447283 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 15 00:28:41.450014 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 00:28:41.450120 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 15 00:28:41.451683 systemd[1]: Stopped target network.target - Network. May 15 00:28:41.452871 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 00:28:41.452943 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 15 00:28:41.454714 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 00:28:41.454752 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 15 00:28:41.456393 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 00:28:41.456434 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 15 00:28:41.458148 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 15 00:28:41.458194 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 15 00:28:41.459951 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
May 15 00:28:41.461554 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 15 00:28:41.465036 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 00:28:41.475141 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 00:28:41.475253 systemd-networkd[762]: eth0: DHCPv6 lease lost May 15 00:28:41.475268 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 15 00:28:41.477261 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 00:28:41.477372 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 15 00:28:41.479896 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 00:28:41.479941 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 15 00:28:41.491411 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 15 00:28:41.492342 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 00:28:41.492419 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 00:28:41.494430 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 00:28:41.494478 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 00:28:41.496323 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 00:28:41.496369 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 15 00:28:41.498509 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 15 00:28:41.498554 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 00:28:41.500509 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 00:28:41.510769 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 00:28:41.510879 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
May 15 00:28:41.523747 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 15 00:28:41.523867 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 15 00:28:41.526033 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 15 00:28:41.526178 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 00:28:41.528332 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 15 00:28:41.528389 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 15 00:28:41.529308 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 15 00:28:41.529342 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 00:28:41.531067 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 15 00:28:41.531106 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 15 00:28:41.533509 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 15 00:28:41.533547 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 15 00:28:41.535121 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 00:28:41.535160 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 00:28:41.537888 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 15 00:28:41.537932 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 15 00:28:41.549379 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 15 00:28:41.550405 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 15 00:28:41.550464 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 00:28:41.552551 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 00:28:41.552608 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 00:28:41.557593 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 15 00:28:41.557685 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 15 00:28:41.559076 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 15 00:28:41.560900 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 15 00:28:41.571353 systemd[1]: Switching root.
May 15 00:28:41.603051 systemd-journald[238]: Journal stopped
May 15 00:28:42.316356 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
May 15 00:28:42.316412 kernel: SELinux: policy capability network_peer_controls=1
May 15 00:28:42.316424 kernel: SELinux: policy capability open_perms=1
May 15 00:28:42.316433 kernel: SELinux: policy capability extended_socket_class=1
May 15 00:28:42.316443 kernel: SELinux: policy capability always_check_network=0
May 15 00:28:42.316456 kernel: SELinux: policy capability cgroup_seclabel=1
May 15 00:28:42.316465 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 15 00:28:42.316475 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 15 00:28:42.316484 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 15 00:28:42.316493 kernel: audit: type=1403 audit(1747268921.763:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 15 00:28:42.316504 systemd[1]: Successfully loaded SELinux policy in 38.986ms.
May 15 00:28:42.316520 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.773ms.
May 15 00:28:42.316532 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 15 00:28:42.316544 systemd[1]: Detected virtualization kvm.
May 15 00:28:42.316555 systemd[1]: Detected architecture arm64.
May 15 00:28:42.316566 systemd[1]: Detected first boot.
May 15 00:28:42.316579 systemd[1]: Initializing machine ID from VM UUID.
May 15 00:28:42.316590 zram_generator::config[1040]: No configuration found.
May 15 00:28:42.316600 systemd[1]: Populated /etc with preset unit settings.
May 15 00:28:42.316614 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 15 00:28:42.316625 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 15 00:28:42.316635 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 15 00:28:42.316647 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 15 00:28:42.316658 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 15 00:28:42.316669 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 15 00:28:42.316679 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 15 00:28:42.316690 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 15 00:28:42.316700 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 15 00:28:42.316711 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 15 00:28:42.316722 systemd[1]: Created slice user.slice - User and Session Slice.
May 15 00:28:42.316734 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 00:28:42.316745 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 00:28:42.316755 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 15 00:28:42.316766 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 15 00:28:42.316776 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 15 00:28:42.316786 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 00:28:42.316797 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 15 00:28:42.316809 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 00:28:42.316819 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 15 00:28:42.316838 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 15 00:28:42.316850 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 15 00:28:42.316861 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 15 00:28:42.316871 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 00:28:42.316881 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 00:28:42.316892 systemd[1]: Reached target slices.target - Slice Units.
May 15 00:28:42.316902 systemd[1]: Reached target swap.target - Swaps.
May 15 00:28:42.316913 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 15 00:28:42.316925 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 15 00:28:42.316936 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 00:28:42.316947 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 00:28:42.316957 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 00:28:42.316968 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 15 00:28:42.316978 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 15 00:28:42.316989 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 15 00:28:42.316999 systemd[1]: Mounting media.mount - External Media Directory...
May 15 00:28:42.317010 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 15 00:28:42.317022 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 15 00:28:42.317032 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 15 00:28:42.317045 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 15 00:28:42.317056 systemd[1]: Reached target machines.target - Containers.
May 15 00:28:42.317066 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 15 00:28:42.317077 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 00:28:42.317087 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 00:28:42.317098 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 15 00:28:42.317108 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 00:28:42.317120 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 00:28:42.317131 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 00:28:42.317141 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 15 00:28:42.317151 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 00:28:42.317162 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 15 00:28:42.317172 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 15 00:28:42.317183 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 15 00:28:42.317193 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 15 00:28:42.317206 systemd[1]: Stopped systemd-fsck-usr.service.
May 15 00:28:42.317216 kernel: fuse: init (API version 7.39)
May 15 00:28:42.317328 kernel: loop: module loaded
May 15 00:28:42.317340 kernel: ACPI: bus type drm_connector registered
May 15 00:28:42.317350 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 00:28:42.317361 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 00:28:42.317373 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 15 00:28:42.317384 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 15 00:28:42.317394 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 00:28:42.317408 systemd[1]: verity-setup.service: Deactivated successfully.
May 15 00:28:42.317419 systemd[1]: Stopped verity-setup.service.
May 15 00:28:42.317429 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 15 00:28:42.317458 systemd-journald[1107]: Collecting audit messages is disabled.
May 15 00:28:42.317486 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 15 00:28:42.317497 systemd[1]: Mounted media.mount - External Media Directory.
May 15 00:28:42.317508 systemd-journald[1107]: Journal started
May 15 00:28:42.317532 systemd-journald[1107]: Runtime Journal (/run/log/journal/30ebc9fea07946b28b4d904b6c6e2ff5) is 5.9M, max 47.3M, 41.4M free.
May 15 00:28:42.127536 systemd[1]: Queued start job for default target multi-user.target.
May 15 00:28:42.146128 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 15 00:28:42.146493 systemd[1]: systemd-journald.service: Deactivated successfully.
May 15 00:28:42.320249 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 00:28:42.320457 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 15 00:28:42.321527 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 15 00:28:42.322449 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 15 00:28:42.324315 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 15 00:28:42.325558 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 00:28:42.326712 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 15 00:28:42.326859 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 15 00:28:42.327963 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 00:28:42.328090 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 00:28:42.329199 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 00:28:42.329356 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 00:28:42.330562 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 00:28:42.330701 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 00:28:42.331811 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 15 00:28:42.331957 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 15 00:28:42.333015 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 00:28:42.333144 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 00:28:42.334282 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 00:28:42.335438 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 15 00:28:42.336576 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 15 00:28:42.347820 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 15 00:28:42.360344 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 15 00:28:42.362338 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 15 00:28:42.363437 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 15 00:28:42.363472 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 00:28:42.365446 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 15 00:28:42.367384 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 15 00:28:42.369144 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 15 00:28:42.370295 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 00:28:42.372411 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 15 00:28:42.374289 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 15 00:28:42.375491 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 00:28:42.377442 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 15 00:28:42.378379 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 00:28:42.380057 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 00:28:42.383673 systemd-journald[1107]: Time spent on flushing to /var/log/journal/30ebc9fea07946b28b4d904b6c6e2ff5 is 13.237ms for 858 entries.
May 15 00:28:42.383673 systemd-journald[1107]: System Journal (/var/log/journal/30ebc9fea07946b28b4d904b6c6e2ff5) is 8.0M, max 195.6M, 187.6M free.
May 15 00:28:42.404003 systemd-journald[1107]: Received client request to flush runtime journal.
May 15 00:28:42.404039 kernel: loop0: detected capacity change from 0 to 189592
May 15 00:28:42.385427 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 15 00:28:42.388430 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 15 00:28:42.390663 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 00:28:42.391940 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 15 00:28:42.392920 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 15 00:28:42.394033 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 15 00:28:42.396388 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 15 00:28:42.400009 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 15 00:28:42.406153 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 15 00:28:42.411420 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 15 00:28:42.412622 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 15 00:28:42.425840 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 00:28:42.429403 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 15 00:28:42.436122 udevadm[1161]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 15 00:28:42.439055 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 15 00:28:42.440627 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 15 00:28:42.457579 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 15 00:28:42.464433 kernel: loop1: detected capacity change from 0 to 114432
May 15 00:28:42.471210 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 00:28:42.488718 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
May 15 00:28:42.488735 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
May 15 00:28:42.493129 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 00:28:42.508356 kernel: loop2: detected capacity change from 0 to 114328
May 15 00:28:42.532737 kernel: loop3: detected capacity change from 0 to 189592
May 15 00:28:42.537298 kernel: loop4: detected capacity change from 0 to 114432
May 15 00:28:42.542256 kernel: loop5: detected capacity change from 0 to 114328
May 15 00:28:42.545647 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 15 00:28:42.546064 (sd-merge)[1178]: Merged extensions into '/usr'.
May 15 00:28:42.549480 systemd[1]: Reloading requested from client PID 1151 ('systemd-sysext') (unit systemd-sysext.service)...
May 15 00:28:42.549500 systemd[1]: Reloading...
May 15 00:28:42.604252 zram_generator::config[1207]: No configuration found.
May 15 00:28:42.698375 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 00:28:42.702926 ldconfig[1146]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 15 00:28:42.734916 systemd[1]: Reloading finished in 185 ms.
May 15 00:28:42.767511 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 15 00:28:42.768639 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 15 00:28:42.778423 systemd[1]: Starting ensure-sysext.service...
May 15 00:28:42.780282 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 00:28:42.790956 systemd[1]: Reloading requested from client PID 1238 ('systemctl') (unit ensure-sysext.service)...
May 15 00:28:42.790976 systemd[1]: Reloading...
May 15 00:28:42.799042 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 15 00:28:42.799652 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 15 00:28:42.800454 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 15 00:28:42.800778 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
May 15 00:28:42.800928 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
May 15 00:28:42.803157 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot.
May 15 00:28:42.803287 systemd-tmpfiles[1239]: Skipping /boot
May 15 00:28:42.810151 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot.
May 15 00:28:42.810252 systemd-tmpfiles[1239]: Skipping /boot
May 15 00:28:42.835249 zram_generator::config[1264]: No configuration found.
May 15 00:28:42.924828 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 00:28:42.961122 systemd[1]: Reloading finished in 169 ms.
May 15 00:28:42.975385 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 15 00:28:42.989750 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 00:28:42.997525 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 15 00:28:43.000262 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 15 00:28:43.002652 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 15 00:28:43.006718 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 00:28:43.011093 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 00:28:43.017508 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 15 00:28:43.021324 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 00:28:43.022489 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 00:28:43.027486 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 00:28:43.033548 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 00:28:43.034456 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 00:28:43.036038 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 15 00:28:43.037756 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 15 00:28:43.040771 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 00:28:43.040921 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 00:28:43.042274 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 00:28:43.042401 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 00:28:43.044185 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 00:28:43.044722 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 00:28:43.045717 systemd-udevd[1308]: Using default interface naming scheme 'v255'.
May 15 00:28:43.051039 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 00:28:43.065068 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 00:28:43.070607 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 00:28:43.073518 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 00:28:43.074652 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 00:28:43.076004 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 15 00:28:43.078071 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 00:28:43.079672 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 15 00:28:43.082236 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 15 00:28:43.087391 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 00:28:43.087534 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 00:28:43.089033 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 00:28:43.089166 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 00:28:43.092416 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 15 00:28:43.104318 systemd[1]: Finished ensure-sysext.service.
May 15 00:28:43.111451 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 00:28:43.111598 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 00:28:43.113287 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 15 00:28:43.117566 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 15 00:28:43.117665 augenrules[1365]: No rules
May 15 00:28:43.118276 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 00:28:43.133301 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1340)
May 15 00:28:43.129487 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 00:28:43.133874 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 00:28:43.138448 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 00:28:43.140197 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 00:28:43.144425 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 00:28:43.150054 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 15 00:28:43.150999 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 00:28:43.151487 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 15 00:28:43.154941 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 00:28:43.155073 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 00:28:43.156630 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 00:28:43.156778 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 00:28:43.158136 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 00:28:43.158384 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 00:28:43.172632 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 00:28:43.172700 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 00:28:43.183497 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 15 00:28:43.185181 systemd-resolved[1306]: Positive Trust Anchors:
May 15 00:28:43.185450 systemd-resolved[1306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 00:28:43.185529 systemd-resolved[1306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 00:28:43.191404 systemd-resolved[1306]: Defaulting to hostname 'linux'.
May 15 00:28:43.193415 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 15 00:28:43.194614 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 00:28:43.196493 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 00:28:43.215836 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 15 00:28:43.230127 systemd-networkd[1377]: lo: Link UP
May 15 00:28:43.230136 systemd-networkd[1377]: lo: Gained carrier
May 15 00:28:43.234606 systemd-networkd[1377]: Enumeration completed
May 15 00:28:43.234714 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 00:28:43.235318 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 00:28:43.235321 systemd-networkd[1377]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 00:28:43.236029 systemd-networkd[1377]: eth0: Link UP
May 15 00:28:43.236038 systemd-networkd[1377]: eth0: Gained carrier
May 15 00:28:43.236052 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 00:28:43.236295 systemd[1]: Reached target network.target - Network.
May 15 00:28:43.244405 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 15 00:28:43.246390 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 15 00:28:43.247402 systemd[1]: Reached target time-set.target - System Time Set.
May 15 00:28:43.248297 systemd-networkd[1377]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 15 00:28:43.255307 systemd-timesyncd[1378]: Network configuration changed, trying to establish connection.
May 15 00:28:43.256075 systemd-timesyncd[1378]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 15 00:28:43.256127 systemd-timesyncd[1378]: Initial clock synchronization to Thu 2025-05-15 00:28:42.887698 UTC.
May 15 00:28:43.274462 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 00:28:43.282655 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 15 00:28:43.285639 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 15 00:28:43.309953 lvm[1398]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 15 00:28:43.325308 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 00:28:43.341273 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 15 00:28:43.342885 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 00:28:43.344393 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 00:28:43.345526 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 15 00:28:43.346756 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 15 00:28:43.348188 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 15 00:28:43.349418 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 15 00:28:43.350770 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 15 00:28:43.351977 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 15 00:28:43.352016 systemd[1]: Reached target paths.target - Path Units.
May 15 00:28:43.352683 systemd[1]: Reached target timers.target - Timer Units.
May 15 00:28:43.355319 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 15 00:28:43.357921 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 15 00:28:43.370325 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 15 00:28:43.372767 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 15 00:28:43.374454 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 15 00:28:43.375660 systemd[1]: Reached target sockets.target - Socket Units. May 15 00:28:43.376634 systemd[1]: Reached target basic.target - Basic System. May 15 00:28:43.377621 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 00:28:43.377654 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 00:28:43.378645 systemd[1]: Starting containerd.service - containerd container runtime... May 15 00:28:43.380745 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 00:28:43.381377 lvm[1405]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 00:28:43.384146 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 00:28:43.389491 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 00:28:43.390267 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 00:28:43.392945 jq[1408]: false May 15 00:28:43.393422 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 00:28:43.399371 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 00:28:43.401570 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 15 00:28:43.404577 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 15 00:28:43.408001 systemd[1]: Starting systemd-logind.service - User Login Management... 
May 15 00:28:43.412541 extend-filesystems[1409]: Found loop3 May 15 00:28:43.413401 extend-filesystems[1409]: Found loop4 May 15 00:28:43.413401 extend-filesystems[1409]: Found loop5 May 15 00:28:43.413401 extend-filesystems[1409]: Found vda May 15 00:28:43.413401 extend-filesystems[1409]: Found vda1 May 15 00:28:43.413401 extend-filesystems[1409]: Found vda2 May 15 00:28:43.413401 extend-filesystems[1409]: Found vda3 May 15 00:28:43.413401 extend-filesystems[1409]: Found usr May 15 00:28:43.413401 extend-filesystems[1409]: Found vda4 May 15 00:28:43.413401 extend-filesystems[1409]: Found vda6 May 15 00:28:43.413401 extend-filesystems[1409]: Found vda7 May 15 00:28:43.413401 extend-filesystems[1409]: Found vda9 May 15 00:28:43.413401 extend-filesystems[1409]: Checking size of /dev/vda9 May 15 00:28:43.424169 dbus-daemon[1407]: [system] SELinux support is enabled May 15 00:28:43.420807 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 00:28:43.421323 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 00:28:43.422194 systemd[1]: Starting update-engine.service - Update Engine... May 15 00:28:43.427369 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 00:28:43.431367 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 00:28:43.434429 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 15 00:28:43.436601 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 00:28:43.436752 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 00:28:43.438621 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
May 15 00:28:43.438797 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 00:28:43.447245 jq[1427]: true May 15 00:28:43.447537 extend-filesystems[1409]: Resized partition /dev/vda9 May 15 00:28:43.448646 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 00:28:43.448679 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 00:28:43.452637 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 00:28:43.452673 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 00:28:43.457961 systemd[1]: motdgen.service: Deactivated successfully. May 15 00:28:43.459269 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
May 15 00:28:43.469885 extend-filesystems[1433]: resize2fs 1.47.1 (20-May-2024) May 15 00:28:43.472985 jq[1438]: true May 15 00:28:43.488237 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1348) May 15 00:28:43.488301 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 15 00:28:43.486560 (ntainerd)[1443]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 00:28:43.491556 update_engine[1422]: I20250515 00:28:43.488922 1422 main.cc:92] Flatcar Update Engine starting May 15 00:28:43.491882 tar[1431]: linux-arm64/helm May 15 00:28:43.503271 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 15 00:28:43.520369 update_engine[1422]: I20250515 00:28:43.503692 1422 update_check_scheduler.cc:74] Next update check in 7m38s May 15 00:28:43.506012 systemd[1]: Started update-engine.service - Update Engine. May 15 00:28:43.517436 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 15 00:28:43.520875 systemd-logind[1417]: Watching system buttons on /dev/input/event0 (Power Button) May 15 00:28:43.521491 systemd-logind[1417]: New seat seat0. May 15 00:28:43.522056 extend-filesystems[1433]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 15 00:28:43.522056 extend-filesystems[1433]: old_desc_blocks = 1, new_desc_blocks = 1 May 15 00:28:43.522056 extend-filesystems[1433]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 15 00:28:43.535195 extend-filesystems[1409]: Resized filesystem in /dev/vda9 May 15 00:28:43.527826 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 00:28:43.528029 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 00:28:43.533651 systemd[1]: Started systemd-logind.service - User Login Management. 
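The resize2fs output above reports /dev/vda9 growing from 553472 to 1864699 blocks at a 4k block size. As a quick sanity check of what those figures mean in bytes, here is a small sketch (block counts and block size are taken from the log; the GiB conversion is standard):

```python
# Convert the ext4 block counts reported by resize2fs into byte sizes.
BLOCK_SIZE = 4096  # "(4k) blocks" per the resize2fs message above

old_blocks = 553_472
new_blocks = 1_864_699

old_bytes = old_blocks * BLOCK_SIZE   # 2_267_021_312 bytes (~2.11 GiB)
new_bytes = new_blocks * BLOCK_SIZE   # 7_637_807_104 bytes (~7.11 GiB)

print(f"before: {old_bytes / 2**30:.2f} GiB")
print(f"after:  {new_bytes / 2**30:.2f} GiB")
```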
May 15 00:28:43.569959 bash[1463]: Updated "/home/core/.ssh/authorized_keys" May 15 00:28:43.573333 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 00:28:43.575499 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 15 00:28:43.580111 locksmithd[1449]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 00:28:43.713945 containerd[1443]: time="2025-05-15T00:28:43.713706880Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 15 00:28:43.743677 containerd[1443]: time="2025-05-15T00:28:43.743531200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 15 00:28:43.747875 containerd[1443]: time="2025-05-15T00:28:43.746507440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 15 00:28:43.747875 containerd[1443]: time="2025-05-15T00:28:43.746571240Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 15 00:28:43.747875 containerd[1443]: time="2025-05-15T00:28:43.746600600Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 15 00:28:43.747875 containerd[1443]: time="2025-05-15T00:28:43.746784200Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 15 00:28:43.747875 containerd[1443]: time="2025-05-15T00:28:43.746807800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 May 15 00:28:43.747875 containerd[1443]: time="2025-05-15T00:28:43.746881560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:28:43.747875 containerd[1443]: time="2025-05-15T00:28:43.746896240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 15 00:28:43.747875 containerd[1443]: time="2025-05-15T00:28:43.747076000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:28:43.747875 containerd[1443]: time="2025-05-15T00:28:43.747092360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 15 00:28:43.747875 containerd[1443]: time="2025-05-15T00:28:43.747111680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:28:43.747875 containerd[1443]: time="2025-05-15T00:28:43.747127560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 15 00:28:43.748172 containerd[1443]: time="2025-05-15T00:28:43.747214720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 15 00:28:43.748172 containerd[1443]: time="2025-05-15T00:28:43.747467360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 15 00:28:43.748172 containerd[1443]: time="2025-05-15T00:28:43.747582040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:28:43.748172 containerd[1443]: time="2025-05-15T00:28:43.747600640Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 15 00:28:43.748172 containerd[1443]: time="2025-05-15T00:28:43.747689520Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 15 00:28:43.748172 containerd[1443]: time="2025-05-15T00:28:43.747732280Z" level=info msg="metadata content store policy set" policy=shared May 15 00:28:43.751481 containerd[1443]: time="2025-05-15T00:28:43.751447320Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 15 00:28:43.751600 containerd[1443]: time="2025-05-15T00:28:43.751502720Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 15 00:28:43.751600 containerd[1443]: time="2025-05-15T00:28:43.751520200Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 15 00:28:43.751600 containerd[1443]: time="2025-05-15T00:28:43.751535920Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 15 00:28:43.751600 containerd[1443]: time="2025-05-15T00:28:43.751550480Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 15 00:28:43.751753 containerd[1443]: time="2025-05-15T00:28:43.751703040Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 15 00:28:43.751968 containerd[1443]: time="2025-05-15T00:28:43.751946480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 May 15 00:28:43.752094 containerd[1443]: time="2025-05-15T00:28:43.752053560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 15 00:28:43.752094 containerd[1443]: time="2025-05-15T00:28:43.752079680Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 15 00:28:43.752146 containerd[1443]: time="2025-05-15T00:28:43.752097400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 15 00:28:43.752146 containerd[1443]: time="2025-05-15T00:28:43.752113080Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 15 00:28:43.752146 containerd[1443]: time="2025-05-15T00:28:43.752126360Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 15 00:28:43.752146 containerd[1443]: time="2025-05-15T00:28:43.752139000Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 15 00:28:43.752211 containerd[1443]: time="2025-05-15T00:28:43.752151800Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 15 00:28:43.752211 containerd[1443]: time="2025-05-15T00:28:43.752169200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 15 00:28:43.752211 containerd[1443]: time="2025-05-15T00:28:43.752183040Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 15 00:28:43.752211 containerd[1443]: time="2025-05-15T00:28:43.752195440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 May 15 00:28:43.752211 containerd[1443]: time="2025-05-15T00:28:43.752206760Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 15 00:28:43.752322 containerd[1443]: time="2025-05-15T00:28:43.752243560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 15 00:28:43.752322 containerd[1443]: time="2025-05-15T00:28:43.752258560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 15 00:28:43.752322 containerd[1443]: time="2025-05-15T00:28:43.752271160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 15 00:28:43.752322 containerd[1443]: time="2025-05-15T00:28:43.752283160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 15 00:28:43.752322 containerd[1443]: time="2025-05-15T00:28:43.752295160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 15 00:28:43.752322 containerd[1443]: time="2025-05-15T00:28:43.752309080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 15 00:28:43.752322 containerd[1443]: time="2025-05-15T00:28:43.752321840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 15 00:28:43.752452 containerd[1443]: time="2025-05-15T00:28:43.752336360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 15 00:28:43.752452 containerd[1443]: time="2025-05-15T00:28:43.752349800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 15 00:28:43.752452 containerd[1443]: time="2025-05-15T00:28:43.752364840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 May 15 00:28:43.752452 containerd[1443]: time="2025-05-15T00:28:43.752382320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 15 00:28:43.752452 containerd[1443]: time="2025-05-15T00:28:43.752393960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 15 00:28:43.752452 containerd[1443]: time="2025-05-15T00:28:43.752405280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 15 00:28:43.752452 containerd[1443]: time="2025-05-15T00:28:43.752422320Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 15 00:28:43.752452 containerd[1443]: time="2025-05-15T00:28:43.752443760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 15 00:28:43.752588 containerd[1443]: time="2025-05-15T00:28:43.752455200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 15 00:28:43.752588 containerd[1443]: time="2025-05-15T00:28:43.752466400Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 15 00:28:43.752588 containerd[1443]: time="2025-05-15T00:28:43.752573240Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 15 00:28:43.752663 containerd[1443]: time="2025-05-15T00:28:43.752589880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 15 00:28:43.752663 containerd[1443]: time="2025-05-15T00:28:43.752601400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 May 15 00:28:43.752663 containerd[1443]: time="2025-05-15T00:28:43.752613360Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 15 00:28:43.752663 containerd[1443]: time="2025-05-15T00:28:43.752623440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 15 00:28:43.752663 containerd[1443]: time="2025-05-15T00:28:43.752635920Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 15 00:28:43.752663 containerd[1443]: time="2025-05-15T00:28:43.752648680Z" level=info msg="NRI interface is disabled by configuration." May 15 00:28:43.752773 containerd[1443]: time="2025-05-15T00:28:43.752667480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 15 00:28:43.753067 containerd[1443]: time="2025-05-15T00:28:43.753005240Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 15 00:28:43.753067 containerd[1443]: time="2025-05-15T00:28:43.753068000Z" level=info msg="Connect containerd service" May 15 00:28:43.753231 containerd[1443]: time="2025-05-15T00:28:43.753163280Z" level=info msg="using legacy CRI server" May 15 00:28:43.753231 containerd[1443]: time="2025-05-15T00:28:43.753170280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 00:28:43.753350 containerd[1443]: 
time="2025-05-15T00:28:43.753284440Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 15 00:28:43.753983 containerd[1443]: time="2025-05-15T00:28:43.753951520Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 00:28:43.754417 containerd[1443]: time="2025-05-15T00:28:43.754383080Z" level=info msg="Start subscribing containerd event" May 15 00:28:43.754417 containerd[1443]: time="2025-05-15T00:28:43.754428960Z" level=info msg="Start recovering state" May 15 00:28:43.754417 containerd[1443]: time="2025-05-15T00:28:43.754485720Z" level=info msg="Start event monitor" May 15 00:28:43.754417 containerd[1443]: time="2025-05-15T00:28:43.754496000Z" level=info msg="Start snapshots syncer" May 15 00:28:43.754417 containerd[1443]: time="2025-05-15T00:28:43.754504520Z" level=info msg="Start cni network conf syncer for default" May 15 00:28:43.754417 containerd[1443]: time="2025-05-15T00:28:43.754516840Z" level=info msg="Start streaming server" May 15 00:28:43.754892 containerd[1443]: time="2025-05-15T00:28:43.754860520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 00:28:43.754929 containerd[1443]: time="2025-05-15T00:28:43.754908680Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 00:28:43.757254 containerd[1443]: time="2025-05-15T00:28:43.754960680Z" level=info msg="containerd successfully booted in 0.043193s" May 15 00:28:43.757356 systemd[1]: Started containerd.service - containerd container runtime. May 15 00:28:43.868567 tar[1431]: linux-arm64/LICENSE May 15 00:28:43.868567 tar[1431]: linux-arm64/README.md May 15 00:28:43.881646 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
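The containerd lines above are logfmt-style records (`time="..." level=info msg="..."`). A hedged sketch of pulling those fields out with a regex, using the final "successfully booted" line from the log as input — the parser itself is illustrative, not part of containerd:

```python
import re

# One containerd record, copied from the log above.
LINE = ('time="2025-05-15T00:28:43.754960680Z" level=info '
        'msg="containerd successfully booted in 0.043193s"')

def parse_logfmt(line):
    """Extract key=value pairs; values may be bare or double-quoted."""
    fields = {}
    for key, quoted, bare in re.findall(r'(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S+))', line):
        fields[key] = quoted if quoted else bare
    return fields

rec = parse_logfmt(LINE)
print(rec["level"], "-", rec["msg"])
```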
May 15 00:28:44.266869 sshd_keygen[1429]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 00:28:44.284773 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 00:28:44.298454 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 00:28:44.303521 systemd[1]: issuegen.service: Deactivated successfully. May 15 00:28:44.305275 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 00:28:44.307713 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 00:28:44.318733 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 00:28:44.322522 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 00:28:44.324612 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 15 00:28:44.325992 systemd[1]: Reached target getty.target - Login Prompts. May 15 00:28:45.178394 systemd-networkd[1377]: eth0: Gained IPv6LL May 15 00:28:45.181248 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 00:28:45.183008 systemd[1]: Reached target network-online.target - Network is Online. May 15 00:28:45.196594 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 15 00:28:45.198971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:28:45.201318 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 00:28:45.219938 systemd[1]: coreos-metadata.service: Deactivated successfully. May 15 00:28:45.220121 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 15 00:28:45.222032 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 15 00:28:45.228905 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 00:28:45.695419 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 15 00:28:45.696901 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 00:28:45.699545 (kubelet)[1522]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:28:45.702747 systemd[1]: Startup finished in 544ms (kernel) + 5.058s (initrd) + 3.976s (userspace) = 9.579s. May 15 00:28:46.185295 kubelet[1522]: E0515 00:28:46.185159 1522 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:28:46.187374 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:28:46.187543 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 00:28:48.976850 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 00:28:48.977967 systemd[1]: Started sshd@0-10.0.0.115:22-10.0.0.1:33402.service - OpenSSH per-connection server daemon (10.0.0.1:33402). May 15 00:28:49.050269 sshd[1535]: Accepted publickey for core from 10.0.0.1 port 33402 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:28:49.052475 sshd[1535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:28:49.063426 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 00:28:49.073617 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 00:28:49.075808 systemd-logind[1417]: New session 1 of user core. May 15 00:28:49.084403 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 00:28:49.086596 systemd[1]: Starting user@500.service - User Manager for UID 500... 
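The "Startup finished" line above breaks boot time into kernel, initrd, and userspace phases. Those three figures should sum to the reported total, modulo the millisecond rounding applied to each printed component — a quick check (all numbers taken from the log line):

```python
# Phases from the "Startup finished" line above, in seconds.
kernel, initrd, userspace = 0.544, 5.058, 3.976
total_reported = 9.579

total = kernel + initrd + userspace
# Each printed phase is rounded to the millisecond, so the sum can differ
# from the reported total by a few ms (here it is 9.578 vs 9.579).
print(f"sum of phases: {total:.3f}s (reported {total_reported}s)")
```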
May 15 00:28:49.093467 (systemd)[1539]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 00:28:49.169575 systemd[1539]: Queued start job for default target default.target. May 15 00:28:49.180156 systemd[1539]: Created slice app.slice - User Application Slice. May 15 00:28:49.180185 systemd[1539]: Reached target paths.target - Paths. May 15 00:28:49.180205 systemd[1539]: Reached target timers.target - Timers. May 15 00:28:49.181463 systemd[1539]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 00:28:49.191086 systemd[1539]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 00:28:49.191152 systemd[1539]: Reached target sockets.target - Sockets. May 15 00:28:49.191165 systemd[1539]: Reached target basic.target - Basic System. May 15 00:28:49.191208 systemd[1539]: Reached target default.target - Main User Target. May 15 00:28:49.191269 systemd[1539]: Startup finished in 88ms. May 15 00:28:49.191663 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 00:28:49.192908 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 00:28:49.255634 systemd[1]: Started sshd@1-10.0.0.115:22-10.0.0.1:33414.service - OpenSSH per-connection server daemon (10.0.0.1:33414). May 15 00:28:49.299394 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 33414 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:28:49.300697 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:28:49.304927 systemd-logind[1417]: New session 2 of user core. May 15 00:28:49.313403 systemd[1]: Started session-2.scope - Session 2 of User core. May 15 00:28:49.366373 sshd[1550]: pam_unix(sshd:session): session closed for user core May 15 00:28:49.374461 systemd[1]: sshd@1-10.0.0.115:22-10.0.0.1:33414.service: Deactivated successfully. May 15 00:28:49.375805 systemd[1]: session-2.scope: Deactivated successfully. 
May 15 00:28:49.377353 systemd-logind[1417]: Session 2 logged out. Waiting for processes to exit. May 15 00:28:49.378510 systemd[1]: Started sshd@2-10.0.0.115:22-10.0.0.1:33416.service - OpenSSH per-connection server daemon (10.0.0.1:33416). May 15 00:28:49.379629 systemd-logind[1417]: Removed session 2. May 15 00:28:49.416334 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 33416 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:28:49.417556 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:28:49.421078 systemd-logind[1417]: New session 3 of user core. May 15 00:28:49.433373 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 00:28:49.480359 sshd[1557]: pam_unix(sshd:session): session closed for user core May 15 00:28:49.498559 systemd[1]: sshd@2-10.0.0.115:22-10.0.0.1:33416.service: Deactivated successfully. May 15 00:28:49.500995 systemd[1]: session-3.scope: Deactivated successfully. May 15 00:28:49.502391 systemd-logind[1417]: Session 3 logged out. Waiting for processes to exit. May 15 00:28:49.504120 systemd[1]: Started sshd@3-10.0.0.115:22-10.0.0.1:33432.service - OpenSSH per-connection server daemon (10.0.0.1:33432). May 15 00:28:49.505288 systemd-logind[1417]: Removed session 3. May 15 00:28:49.542403 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 33432 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:28:49.543655 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:28:49.546942 systemd-logind[1417]: New session 4 of user core. May 15 00:28:49.557348 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 00:28:49.607050 sshd[1564]: pam_unix(sshd:session): session closed for user core May 15 00:28:49.620414 systemd[1]: sshd@3-10.0.0.115:22-10.0.0.1:33432.service: Deactivated successfully. May 15 00:28:49.621959 systemd[1]: session-4.scope: Deactivated successfully. 
May 15 00:28:49.623239 systemd-logind[1417]: Session 4 logged out. Waiting for processes to exit. May 15 00:28:49.624392 systemd[1]: Started sshd@4-10.0.0.115:22-10.0.0.1:33448.service - OpenSSH per-connection server daemon (10.0.0.1:33448). May 15 00:28:49.626591 systemd-logind[1417]: Removed session 4. May 15 00:28:49.661995 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 33448 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:28:49.663117 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:28:49.667774 systemd-logind[1417]: New session 5 of user core. May 15 00:28:49.673392 systemd[1]: Started session-5.scope - Session 5 of User core. May 15 00:28:49.736605 sudo[1574]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 00:28:49.738554 sudo[1574]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:28:49.750954 sudo[1574]: pam_unix(sudo:session): session closed for user root May 15 00:28:49.752498 sshd[1571]: pam_unix(sshd:session): session closed for user core May 15 00:28:49.769508 systemd[1]: sshd@4-10.0.0.115:22-10.0.0.1:33448.service: Deactivated successfully. May 15 00:28:49.772599 systemd[1]: session-5.scope: Deactivated successfully. May 15 00:28:49.773328 systemd-logind[1417]: Session 5 logged out. Waiting for processes to exit. May 15 00:28:49.780492 systemd[1]: Started sshd@5-10.0.0.115:22-10.0.0.1:33464.service - OpenSSH per-connection server daemon (10.0.0.1:33464). May 15 00:28:49.781493 systemd-logind[1417]: Removed session 5. May 15 00:28:49.816027 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 33464 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:28:49.817323 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:28:49.820675 systemd-logind[1417]: New session 6 of user core. 
May 15 00:28:49.831382 systemd[1]: Started session-6.scope - Session 6 of User core.
May 15 00:28:49.880538 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 15 00:28:49.880809 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 00:28:49.883886 sudo[1583]: pam_unix(sudo:session): session closed for user root
May 15 00:28:49.888511 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
May 15 00:28:49.888785 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 00:28:49.905475 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
May 15 00:28:49.906874 auditctl[1586]: No rules
May 15 00:28:49.907862 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 00:28:49.908057 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
May 15 00:28:49.910505 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 15 00:28:49.934626 augenrules[1604]: No rules
May 15 00:28:49.936087 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 15 00:28:49.937127 sudo[1582]: pam_unix(sudo:session): session closed for user root
May 15 00:28:49.939028 sshd[1579]: pam_unix(sshd:session): session closed for user core
May 15 00:28:49.948623 systemd[1]: sshd@5-10.0.0.115:22-10.0.0.1:33464.service: Deactivated successfully.
May 15 00:28:49.949979 systemd[1]: session-6.scope: Deactivated successfully.
May 15 00:28:49.951208 systemd-logind[1417]: Session 6 logged out. Waiting for processes to exit.
May 15 00:28:49.952366 systemd[1]: Started sshd@6-10.0.0.115:22-10.0.0.1:33470.service - OpenSSH per-connection server daemon (10.0.0.1:33470).
May 15 00:28:49.953044 systemd-logind[1417]: Removed session 6.
May 15 00:28:49.991247 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 33470 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:28:49.991901 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:28:49.996025 systemd-logind[1417]: New session 7 of user core.
May 15 00:28:50.007390 systemd[1]: Started session-7.scope - Session 7 of User core.
May 15 00:28:50.057287 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 15 00:28:50.057556 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 00:28:50.360508 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 15 00:28:50.360581 (dockerd)[1633]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 15 00:28:50.618615 dockerd[1633]: time="2025-05-15T00:28:50.618488672Z" level=info msg="Starting up"
May 15 00:28:50.774676 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport895546482-merged.mount: Deactivated successfully.
May 15 00:28:50.790729 dockerd[1633]: time="2025-05-15T00:28:50.790541162Z" level=info msg="Loading containers: start."
May 15 00:28:50.875238 kernel: Initializing XFRM netlink socket
May 15 00:28:50.944694 systemd-networkd[1377]: docker0: Link UP
May 15 00:28:50.960482 dockerd[1633]: time="2025-05-15T00:28:50.960417053Z" level=info msg="Loading containers: done."
May 15 00:28:50.972061 dockerd[1633]: time="2025-05-15T00:28:50.971946969Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 15 00:28:50.972061 dockerd[1633]: time="2025-05-15T00:28:50.972042748Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
May 15 00:28:50.972276 dockerd[1633]: time="2025-05-15T00:28:50.972134649Z" level=info msg="Daemon has completed initialization"
May 15 00:28:50.997270 dockerd[1633]: time="2025-05-15T00:28:50.997005906Z" level=info msg="API listen on /run/docker.sock"
May 15 00:28:50.997981 systemd[1]: Started docker.service - Docker Application Container Engine.
May 15 00:28:51.550413 containerd[1443]: time="2025-05-15T00:28:51.550370064Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
May 15 00:28:51.772245 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck329787279-merged.mount: Deactivated successfully.
May 15 00:28:52.131753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount703972334.mount: Deactivated successfully.
May 15 00:28:52.979604 containerd[1443]: time="2025-05-15T00:28:52.979386883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:28:52.980548 containerd[1443]: time="2025-05-15T00:28:52.980267726Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554610"
May 15 00:28:52.981305 containerd[1443]: time="2025-05-15T00:28:52.981273402Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:28:52.984776 containerd[1443]: time="2025-05-15T00:28:52.984740870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:28:52.985850 containerd[1443]: time="2025-05-15T00:28:52.985611832Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 1.435198223s"
May 15 00:28:52.985850 containerd[1443]: time="2025-05-15T00:28:52.985651632Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\""
May 15 00:28:52.986383 containerd[1443]: time="2025-05-15T00:28:52.986355637Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
May 15 00:28:53.906416 containerd[1443]: time="2025-05-15T00:28:53.906369943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:28:53.906843 containerd[1443]: time="2025-05-15T00:28:53.906807048Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458980"
May 15 00:28:53.907685 containerd[1443]: time="2025-05-15T00:28:53.907652383Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:28:53.910600 containerd[1443]: time="2025-05-15T00:28:53.910569677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:28:53.911765 containerd[1443]: time="2025-05-15T00:28:53.911737605Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 925.275773ms"
May 15 00:28:53.911835 containerd[1443]: time="2025-05-15T00:28:53.911769044Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\""
May 15 00:28:53.912362 containerd[1443]: time="2025-05-15T00:28:53.912192619Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
May 15 00:28:54.861262 containerd[1443]: time="2025-05-15T00:28:54.861006989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:28:54.861925 containerd[1443]: time="2025-05-15T00:28:54.861669151Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125815"
May 15 00:28:54.862668 containerd[1443]: time="2025-05-15T00:28:54.862610017Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:28:54.866122 containerd[1443]: time="2025-05-15T00:28:54.865733050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:28:54.866824 containerd[1443]: time="2025-05-15T00:28:54.866794398Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 954.570206ms"
May 15 00:28:54.866881 containerd[1443]: time="2025-05-15T00:28:54.866824430Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\""
May 15 00:28:54.867589 containerd[1443]: time="2025-05-15T00:28:54.867306085Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
May 15 00:28:55.750692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2385116306.mount: Deactivated successfully.
May 15 00:28:56.103630 containerd[1443]: time="2025-05-15T00:28:56.102768491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:28:56.103630 containerd[1443]: time="2025-05-15T00:28:56.103445554Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871919"
May 15 00:28:56.104332 containerd[1443]: time="2025-05-15T00:28:56.104284185Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:28:56.106554 containerd[1443]: time="2025-05-15T00:28:56.106507615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:28:56.107393 containerd[1443]: time="2025-05-15T00:28:56.107342957Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.240003369s"
May 15 00:28:56.107393 containerd[1443]: time="2025-05-15T00:28:56.107389759Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\""
May 15 00:28:56.108118 containerd[1443]: time="2025-05-15T00:28:56.108074669Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 15 00:28:56.437782 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 15 00:28:56.449440 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:28:56.543846 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:28:56.547868 (kubelet)[1860]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 00:28:56.580099 kubelet[1860]: E0515 00:28:56.580036 1860 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 00:28:56.583033 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 00:28:56.583164 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 00:28:56.631693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount959241791.mount: Deactivated successfully.
May 15 00:28:57.321138 containerd[1443]: time="2025-05-15T00:28:57.319994939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:28:57.321138 containerd[1443]: time="2025-05-15T00:28:57.321098036Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
May 15 00:28:57.321622 containerd[1443]: time="2025-05-15T00:28:57.321591837Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:28:57.324612 containerd[1443]: time="2025-05-15T00:28:57.324572178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:28:57.326643 containerd[1443]: time="2025-05-15T00:28:57.326609553Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.218502929s"
May 15 00:28:57.326756 containerd[1443]: time="2025-05-15T00:28:57.326738379Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 15 00:28:57.327285 containerd[1443]: time="2025-05-15T00:28:57.327261897Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 15 00:28:57.808778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount634480051.mount: Deactivated successfully.
May 15 00:28:57.814169 containerd[1443]: time="2025-05-15T00:28:57.814118939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:28:57.814915 containerd[1443]: time="2025-05-15T00:28:57.814862775Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
May 15 00:28:57.815533 containerd[1443]: time="2025-05-15T00:28:57.815499844Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:28:57.818154 containerd[1443]: time="2025-05-15T00:28:57.817767217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:28:57.818819 containerd[1443]: time="2025-05-15T00:28:57.818787392Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 491.403097ms"
May 15 00:28:57.818819 containerd[1443]: time="2025-05-15T00:28:57.818816157Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 15 00:28:57.819333 containerd[1443]: time="2025-05-15T00:28:57.819299404Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 15 00:28:58.325785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3053989286.mount: Deactivated successfully.
May 15 00:29:00.293757 containerd[1443]: time="2025-05-15T00:29:00.293705173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:29:00.295009 containerd[1443]: time="2025-05-15T00:29:00.294956853Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467"
May 15 00:29:00.296109 containerd[1443]: time="2025-05-15T00:29:00.296057239Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:29:00.300058 containerd[1443]: time="2025-05-15T00:29:00.299987241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:29:00.301306 containerd[1443]: time="2025-05-15T00:29:00.301261995Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.481923916s"
May 15 00:29:00.301376 containerd[1443]: time="2025-05-15T00:29:00.301304840Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
May 15 00:29:03.615020 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:29:03.626468 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:29:03.647749 systemd[1]: Reloading requested from client PID 2000 ('systemctl') (unit session-7.scope)...
May 15 00:29:03.647767 systemd[1]: Reloading...
May 15 00:29:03.707257 zram_generator::config[2042]: No configuration found.
May 15 00:29:03.797713 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 00:29:03.854337 systemd[1]: Reloading finished in 206 ms.
May 15 00:29:03.898329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:29:03.900039 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:29:03.903426 systemd[1]: kubelet.service: Deactivated successfully.
May 15 00:29:03.903795 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:29:03.906995 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:29:04.008771 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:29:04.019683 (kubelet)[2086]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 15 00:29:04.065261 kubelet[2086]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 00:29:04.065261 kubelet[2086]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 15 00:29:04.065261 kubelet[2086]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 00:29:04.065261 kubelet[2086]: I0515 00:29:04.064949 2086 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 00:29:05.904867 kubelet[2086]: I0515 00:29:05.904814 2086 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 15 00:29:05.904867 kubelet[2086]: I0515 00:29:05.904855 2086 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 00:29:05.905235 kubelet[2086]: I0515 00:29:05.905097 2086 server.go:929] "Client rotation is on, will bootstrap in background"
May 15 00:29:05.947572 kubelet[2086]: E0515 00:29:05.947527 2086 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError"
May 15 00:29:05.951395 kubelet[2086]: I0515 00:29:05.950927 2086 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 00:29:05.959536 kubelet[2086]: E0515 00:29:05.959436 2086 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 15 00:29:05.959536 kubelet[2086]: I0515 00:29:05.959470 2086 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 15 00:29:05.965388 kubelet[2086]: I0515 00:29:05.965353 2086 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 00:29:05.968242 kubelet[2086]: I0515 00:29:05.968157 2086 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 15 00:29:05.968448 kubelet[2086]: I0515 00:29:05.968400 2086 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 00:29:05.968609 kubelet[2086]: I0515 00:29:05.968439 2086 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 15 00:29:05.970538 kubelet[2086]: I0515 00:29:05.970515 2086 topology_manager.go:138] "Creating topology manager with none policy"
May 15 00:29:05.970538 kubelet[2086]: I0515 00:29:05.970535 2086 container_manager_linux.go:300] "Creating device plugin manager"
May 15 00:29:05.970788 kubelet[2086]: I0515 00:29:05.970765 2086 state_mem.go:36] "Initialized new in-memory state store"
May 15 00:29:05.972520 kubelet[2086]: I0515 00:29:05.972494 2086 kubelet.go:408] "Attempting to sync node with API server"
May 15 00:29:05.972520 kubelet[2086]: I0515 00:29:05.972521 2086 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 00:29:05.972583 kubelet[2086]: I0515 00:29:05.972552 2086 kubelet.go:314] "Adding apiserver pod source"
May 15 00:29:05.972583 kubelet[2086]: I0515 00:29:05.972563 2086 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 00:29:05.980902 kubelet[2086]: W0515 00:29:05.980398 2086 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
May 15 00:29:05.980902 kubelet[2086]: E0515 00:29:05.980461 2086 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError"
May 15 00:29:05.980902 kubelet[2086]: I0515 00:29:05.980653 2086 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 15 00:29:05.982388 kubelet[2086]: W0515 00:29:05.981078 2086 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
May 15 00:29:05.982388 kubelet[2086]: E0515 00:29:05.981120 2086 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError"
May 15 00:29:05.982632 kubelet[2086]: I0515 00:29:05.982610 2086 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 15 00:29:05.983450 kubelet[2086]: W0515 00:29:05.983427 2086 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 15 00:29:05.984173 kubelet[2086]: I0515 00:29:05.984096 2086 server.go:1269] "Started kubelet"
May 15 00:29:05.985247 kubelet[2086]: I0515 00:29:05.984385 2086 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 15 00:29:05.985247 kubelet[2086]: I0515 00:29:05.984560 2086 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 00:29:05.985247 kubelet[2086]: I0515 00:29:05.984795 2086 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 00:29:05.986854 kubelet[2086]: I0515 00:29:05.986829 2086 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 00:29:05.987831 kubelet[2086]: I0515 00:29:05.987423 2086 server.go:460] "Adding debug handlers to kubelet server"
May 15 00:29:05.988814 kubelet[2086]: I0515 00:29:05.988782 2086 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 15 00:29:05.990452 kubelet[2086]: E0515 00:29:05.988955 2086 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.115:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.115:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f8bd95473b365 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 00:29:05.984074597 +0000 UTC m=+1.959956499,LastTimestamp:2025-05-15 00:29:05.984074597 +0000 UTC m=+1.959956499,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 15 00:29:05.990949 kubelet[2086]: I0515 00:29:05.990926 2086 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 15 00:29:05.991132 kubelet[2086]: I0515 00:29:05.991107 2086 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 15 00:29:05.991255 kubelet[2086]: I0515 00:29:05.991243 2086 reconciler.go:26] "Reconciler: start to sync state"
May 15 00:29:05.991722 kubelet[2086]: W0515 00:29:05.991587 2086 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
May 15 00:29:05.991722 kubelet[2086]: E0515 00:29:05.991643 2086 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError"
May 15 00:29:05.991820 kubelet[2086]: E0515 00:29:05.991804 2086 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:29:05.992160 kubelet[2086]: I0515 00:29:05.992138 2086 factory.go:221] Registration of the systemd container factory successfully
May 15 00:29:05.992362 kubelet[2086]: I0515 00:29:05.992341 2086 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 00:29:05.992539 kubelet[2086]: E0515 00:29:05.992155 2086 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="200ms"
May 15 00:29:05.993175 kubelet[2086]: E0515 00:29:05.993141 2086 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 00:29:05.994398 kubelet[2086]: I0515 00:29:05.994372 2086 factory.go:221] Registration of the containerd container factory successfully
May 15 00:29:06.005462 kubelet[2086]: I0515 00:29:06.005327 2086 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 00:29:06.006972 kubelet[2086]: I0515 00:29:06.006921 2086 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 00:29:06.006972 kubelet[2086]: I0515 00:29:06.006944 2086 status_manager.go:217] "Starting to sync pod status with apiserver"
May 15 00:29:06.006972 kubelet[2086]: I0515 00:29:06.006959 2086 kubelet.go:2321] "Starting kubelet main sync loop"
May 15 00:29:06.007063 kubelet[2086]: E0515 00:29:06.006997 2086 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 00:29:06.010584 kubelet[2086]: W0515 00:29:06.010535 2086 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
May 15 00:29:06.010666 kubelet[2086]: E0515 00:29:06.010593 2086 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError"
May 15 00:29:06.010746 kubelet[2086]: I0515 00:29:06.010668 2086 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 15 00:29:06.010746 kubelet[2086]: I0515 00:29:06.010677 2086 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 15 00:29:06.010746 kubelet[2086]: I0515 00:29:06.010694 2086 state_mem.go:36] "Initialized new in-memory state store"
May 15 00:29:06.029931 kubelet[2086]: I0515 00:29:06.029901 2086 policy_none.go:49] "None policy: Start"
May 15 00:29:06.034869 kubelet[2086]: I0515 00:29:06.034829 2086 memory_manager.go:170] "Starting memorymanager" policy="None"
May 15 00:29:06.034869 kubelet[2086]: I0515 00:29:06.034878 2086 state_mem.go:35] "Initializing new in-memory state store"
May 15 00:29:06.051314 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 15 00:29:06.065656 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 15 00:29:06.069153 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 15 00:29:06.080355 kubelet[2086]: I0515 00:29:06.080311 2086 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 00:29:06.080551 kubelet[2086]: I0515 00:29:06.080524 2086 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 00:29:06.080605 kubelet[2086]: I0515 00:29:06.080543 2086 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 00:29:06.081479 kubelet[2086]: I0515 00:29:06.081444 2086 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 00:29:06.083713 kubelet[2086]: E0515 00:29:06.083694 2086 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 15 00:29:06.118091 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 15 00:29:06.135977 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. May 15 00:29:06.150282 systemd[1]: Created slice kubepods-burstable-pod0a27fe2c470203edfe202ed26cfde8e8.slice - libcontainer container kubepods-burstable-pod0a27fe2c470203edfe202ed26cfde8e8.slice. 
May 15 00:29:06.182990 kubelet[2086]: I0515 00:29:06.182875 2086 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 00:29:06.183284 kubelet[2086]: E0515 00:29:06.183261 2086 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" May 15 00:29:06.192988 kubelet[2086]: E0515 00:29:06.192945 2086 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="400ms" May 15 00:29:06.293268 kubelet[2086]: I0515 00:29:06.293212 2086 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:29:06.293462 kubelet[2086]: I0515 00:29:06.293444 2086 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:29:06.293571 kubelet[2086]: I0515 00:29:06.293545 2086 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 15 00:29:06.293628 kubelet[2086]: I0515 00:29:06.293593 2086 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a27fe2c470203edfe202ed26cfde8e8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a27fe2c470203edfe202ed26cfde8e8\") " pod="kube-system/kube-apiserver-localhost" May 15 00:29:06.293628 kubelet[2086]: I0515 00:29:06.293616 2086 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a27fe2c470203edfe202ed26cfde8e8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a27fe2c470203edfe202ed26cfde8e8\") " pod="kube-system/kube-apiserver-localhost" May 15 00:29:06.293676 kubelet[2086]: I0515 00:29:06.293636 2086 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a27fe2c470203edfe202ed26cfde8e8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0a27fe2c470203edfe202ed26cfde8e8\") " pod="kube-system/kube-apiserver-localhost" May 15 00:29:06.293676 kubelet[2086]: I0515 00:29:06.293656 2086 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:29:06.293676 kubelet[2086]: I0515 00:29:06.293671 2086 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:29:06.293744 kubelet[2086]: I0515 00:29:06.293701 2086 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:29:06.385477 kubelet[2086]: I0515 00:29:06.385448 2086 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 00:29:06.385773 kubelet[2086]: E0515 00:29:06.385752 2086 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" May 15 00:29:06.433505 kubelet[2086]: E0515 00:29:06.433399 2086 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:06.434050 containerd[1443]: time="2025-05-15T00:29:06.434001846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 15 00:29:06.448423 kubelet[2086]: E0515 00:29:06.448392 2086 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:06.448858 containerd[1443]: time="2025-05-15T00:29:06.448808758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 15 00:29:06.452671 kubelet[2086]: E0515 00:29:06.452419 2086 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:06.452866 containerd[1443]: 
time="2025-05-15T00:29:06.452821545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0a27fe2c470203edfe202ed26cfde8e8,Namespace:kube-system,Attempt:0,}" May 15 00:29:06.594063 kubelet[2086]: E0515 00:29:06.594002 2086 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="800ms" May 15 00:29:06.787569 kubelet[2086]: I0515 00:29:06.787464 2086 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 00:29:06.787796 kubelet[2086]: E0515 00:29:06.787772 2086 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" May 15 00:29:06.922470 kubelet[2086]: W0515 00:29:06.922383 2086 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused May 15 00:29:06.922470 kubelet[2086]: E0515 00:29:06.922455 2086 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" May 15 00:29:06.980976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3376800839.mount: Deactivated successfully. 
May 15 00:29:06.989796 containerd[1443]: time="2025-05-15T00:29:06.989746823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:29:06.991376 containerd[1443]: time="2025-05-15T00:29:06.991328116Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 00:29:06.992069 containerd[1443]: time="2025-05-15T00:29:06.992027726Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:29:06.993095 containerd[1443]: time="2025-05-15T00:29:06.993044479Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:29:06.994091 containerd[1443]: time="2025-05-15T00:29:06.994036095Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:29:06.995074 containerd[1443]: time="2025-05-15T00:29:06.994980705Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 00:29:06.995735 containerd[1443]: time="2025-05-15T00:29:06.995715069Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 15 00:29:06.998361 containerd[1443]: time="2025-05-15T00:29:06.998314832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:29:06.999333 
containerd[1443]: time="2025-05-15T00:29:06.999293079Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 565.206321ms" May 15 00:29:07.001462 containerd[1443]: time="2025-05-15T00:29:07.001425633Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 548.529587ms" May 15 00:29:07.004260 containerd[1443]: time="2025-05-15T00:29:07.004208840Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 555.304153ms" May 15 00:29:07.080615 kubelet[2086]: W0515 00:29:07.080447 2086 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused May 15 00:29:07.080615 kubelet[2086]: E0515 00:29:07.080530 2086 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" May 15 00:29:07.158689 containerd[1443]: 
time="2025-05-15T00:29:07.158603428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:29:07.159198 containerd[1443]: time="2025-05-15T00:29:07.158845031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:29:07.159198 containerd[1443]: time="2025-05-15T00:29:07.158882032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:29:07.159198 containerd[1443]: time="2025-05-15T00:29:07.158897519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:29:07.159198 containerd[1443]: time="2025-05-15T00:29:07.158969644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:29:07.159198 containerd[1443]: time="2025-05-15T00:29:07.159046600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:29:07.159198 containerd[1443]: time="2025-05-15T00:29:07.159087991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:29:07.159198 containerd[1443]: time="2025-05-15T00:29:07.159098728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:29:07.159969 containerd[1443]: time="2025-05-15T00:29:07.159162951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:29:07.160045 containerd[1443]: time="2025-05-15T00:29:07.160011935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:29:07.160082 containerd[1443]: time="2025-05-15T00:29:07.160058036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:29:07.160191 containerd[1443]: time="2025-05-15T00:29:07.160158821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:29:07.181445 systemd[1]: Started cri-containerd-17a00302e2a1050beb94d2f6cabd74fae8863f8c54a8fee29325ed16c263ad5b.scope - libcontainer container 17a00302e2a1050beb94d2f6cabd74fae8863f8c54a8fee29325ed16c263ad5b. May 15 00:29:07.182748 systemd[1]: Started cri-containerd-e9fe068ec3be0719a1f1dcd3a7f5ffc287b0873f5c2a10659227bc2a2d3978f1.scope - libcontainer container e9fe068ec3be0719a1f1dcd3a7f5ffc287b0873f5c2a10659227bc2a2d3978f1. May 15 00:29:07.186630 systemd[1]: Started cri-containerd-fa2f0dadaea78b86e24f0e2fcbda066498ed35faa252389f12b76ebe4297ffc1.scope - libcontainer container fa2f0dadaea78b86e24f0e2fcbda066498ed35faa252389f12b76ebe4297ffc1. 
May 15 00:29:07.226993 containerd[1443]: time="2025-05-15T00:29:07.226578069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0a27fe2c470203edfe202ed26cfde8e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9fe068ec3be0719a1f1dcd3a7f5ffc287b0873f5c2a10659227bc2a2d3978f1\"" May 15 00:29:07.228581 kubelet[2086]: E0515 00:29:07.228070 2086 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:07.232755 containerd[1443]: time="2025-05-15T00:29:07.232501040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa2f0dadaea78b86e24f0e2fcbda066498ed35faa252389f12b76ebe4297ffc1\"" May 15 00:29:07.232923 containerd[1443]: time="2025-05-15T00:29:07.232659141Z" level=info msg="CreateContainer within sandbox \"e9fe068ec3be0719a1f1dcd3a7f5ffc287b0873f5c2a10659227bc2a2d3978f1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 00:29:07.233798 kubelet[2086]: E0515 00:29:07.233646 2086 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:07.237327 containerd[1443]: time="2025-05-15T00:29:07.237268282Z" level=info msg="CreateContainer within sandbox \"fa2f0dadaea78b86e24f0e2fcbda066498ed35faa252389f12b76ebe4297ffc1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 00:29:07.240551 containerd[1443]: time="2025-05-15T00:29:07.240512144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"17a00302e2a1050beb94d2f6cabd74fae8863f8c54a8fee29325ed16c263ad5b\"" May 15 
00:29:07.241281 kubelet[2086]: E0515 00:29:07.241259 2086 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:07.243260 containerd[1443]: time="2025-05-15T00:29:07.243202509Z" level=info msg="CreateContainer within sandbox \"17a00302e2a1050beb94d2f6cabd74fae8863f8c54a8fee29325ed16c263ad5b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 00:29:07.255337 containerd[1443]: time="2025-05-15T00:29:07.255202162Z" level=info msg="CreateContainer within sandbox \"fa2f0dadaea78b86e24f0e2fcbda066498ed35faa252389f12b76ebe4297ffc1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"57d092258f75056cd685033d8bb6101a8c49ed6a9f57678796d0dfe3d9dcdcfa\"" May 15 00:29:07.256104 containerd[1443]: time="2025-05-15T00:29:07.255871889Z" level=info msg="CreateContainer within sandbox \"e9fe068ec3be0719a1f1dcd3a7f5ffc287b0873f5c2a10659227bc2a2d3978f1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8bc0e40c4334ec532194e9d8bddca7d869eeb2f7365b3ac37a8c54d53ee87aa3\"" May 15 00:29:07.256509 containerd[1443]: time="2025-05-15T00:29:07.256479988Z" level=info msg="StartContainer for \"8bc0e40c4334ec532194e9d8bddca7d869eeb2f7365b3ac37a8c54d53ee87aa3\"" May 15 00:29:07.257322 containerd[1443]: time="2025-05-15T00:29:07.256499546Z" level=info msg="StartContainer for \"57d092258f75056cd685033d8bb6101a8c49ed6a9f57678796d0dfe3d9dcdcfa\"" May 15 00:29:07.260671 containerd[1443]: time="2025-05-15T00:29:07.260580457Z" level=info msg="CreateContainer within sandbox \"17a00302e2a1050beb94d2f6cabd74fae8863f8c54a8fee29325ed16c263ad5b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6ecab5fa453810f267dc0bcf49ab4f11d729591cd48369ff094538e2497a530c\"" May 15 00:29:07.261090 containerd[1443]: time="2025-05-15T00:29:07.261034326Z" level=info 
msg="StartContainer for \"6ecab5fa453810f267dc0bcf49ab4f11d729591cd48369ff094538e2497a530c\"" May 15 00:29:07.265120 kubelet[2086]: W0515 00:29:07.264999 2086 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused May 15 00:29:07.265120 kubelet[2086]: E0515 00:29:07.265078 2086 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" May 15 00:29:07.282405 systemd[1]: Started cri-containerd-8bc0e40c4334ec532194e9d8bddca7d869eeb2f7365b3ac37a8c54d53ee87aa3.scope - libcontainer container 8bc0e40c4334ec532194e9d8bddca7d869eeb2f7365b3ac37a8c54d53ee87aa3. May 15 00:29:07.286206 systemd[1]: Started cri-containerd-57d092258f75056cd685033d8bb6101a8c49ed6a9f57678796d0dfe3d9dcdcfa.scope - libcontainer container 57d092258f75056cd685033d8bb6101a8c49ed6a9f57678796d0dfe3d9dcdcfa. May 15 00:29:07.298424 systemd[1]: Started cri-containerd-6ecab5fa453810f267dc0bcf49ab4f11d729591cd48369ff094538e2497a530c.scope - libcontainer container 6ecab5fa453810f267dc0bcf49ab4f11d729591cd48369ff094538e2497a530c. 
May 15 00:29:07.340302 containerd[1443]: time="2025-05-15T00:29:07.340132574Z" level=info msg="StartContainer for \"57d092258f75056cd685033d8bb6101a8c49ed6a9f57678796d0dfe3d9dcdcfa\" returns successfully" May 15 00:29:07.340302 containerd[1443]: time="2025-05-15T00:29:07.340116968Z" level=info msg="StartContainer for \"8bc0e40c4334ec532194e9d8bddca7d869eeb2f7365b3ac37a8c54d53ee87aa3\" returns successfully" May 15 00:29:07.367120 containerd[1443]: time="2025-05-15T00:29:07.366995833Z" level=info msg="StartContainer for \"6ecab5fa453810f267dc0bcf49ab4f11d729591cd48369ff094538e2497a530c\" returns successfully" May 15 00:29:07.394978 kubelet[2086]: E0515 00:29:07.394512 2086 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="1.6s" May 15 00:29:07.589486 kubelet[2086]: I0515 00:29:07.589452 2086 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 00:29:08.024455 kubelet[2086]: E0515 00:29:08.024416 2086 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:08.026241 kubelet[2086]: E0515 00:29:08.025878 2086 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:08.027401 kubelet[2086]: E0515 00:29:08.027381 2086 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:09.030870 kubelet[2086]: E0515 00:29:09.030740 2086 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:09.030870 kubelet[2086]: E0515 00:29:09.030814 2086 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:09.270431 kubelet[2086]: E0515 00:29:09.270379 2086 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 15 00:29:09.375414 kubelet[2086]: I0515 00:29:09.375305 2086 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 15 00:29:09.375414 kubelet[2086]: E0515 00:29:09.375351 2086 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 15 00:29:09.386592 kubelet[2086]: E0515 00:29:09.386558 2086 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:29:09.974880 kubelet[2086]: I0515 00:29:09.974820 2086 apiserver.go:52] "Watching apiserver" May 15 00:29:09.991723 kubelet[2086]: I0515 00:29:09.991656 2086 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 15 00:29:10.709879 kubelet[2086]: E0515 00:29:10.709829 2086 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:11.032396 kubelet[2086]: E0515 00:29:11.032289 2086 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:12.058215 systemd[1]: Reloading requested from client PID 2362 ('systemctl') (unit session-7.scope)... May 15 00:29:12.058260 systemd[1]: Reloading... May 15 00:29:12.131259 zram_generator::config[2404]: No configuration found. 
May 15 00:29:12.213701 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:29:12.316934 systemd[1]: Reloading finished in 258 ms. May 15 00:29:12.357950 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:29:12.376519 systemd[1]: kubelet.service: Deactivated successfully. May 15 00:29:12.376715 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:29:12.376760 systemd[1]: kubelet.service: Consumed 2.045s CPU time, 116.2M memory peak, 0B memory swap peak. May 15 00:29:12.388531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:29:12.486975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:29:12.492205 (kubelet)[2443]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 00:29:12.556907 kubelet[2443]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:29:12.556907 kubelet[2443]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 00:29:12.556907 kubelet[2443]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 00:29:12.557294 kubelet[2443]: I0515 00:29:12.556958 2443 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 00:29:12.563346 kubelet[2443]: I0515 00:29:12.563307 2443 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 15 00:29:12.563434 kubelet[2443]: I0515 00:29:12.563356 2443 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 00:29:12.564343 kubelet[2443]: I0515 00:29:12.563720 2443 server.go:929] "Client rotation is on, will bootstrap in background" May 15 00:29:12.565850 kubelet[2443]: I0515 00:29:12.565821 2443 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 00:29:12.568887 kubelet[2443]: I0515 00:29:12.568641 2443 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 00:29:12.571757 kubelet[2443]: E0515 00:29:12.571700 2443 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 00:29:12.571828 kubelet[2443]: I0515 00:29:12.571760 2443 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 15 00:29:12.574048 kubelet[2443]: I0515 00:29:12.574029 2443 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 00:29:12.574165 kubelet[2443]: I0515 00:29:12.574152 2443 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 00:29:12.574303 kubelet[2443]: I0515 00:29:12.574275 2443 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 00:29:12.574470 kubelet[2443]: I0515 00:29:12.574303 2443 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} May 15 00:29:12.574550 kubelet[2443]: I0515 00:29:12.574477 2443 topology_manager.go:138] "Creating topology manager with none policy" May 15 00:29:12.574550 kubelet[2443]: I0515 00:29:12.574486 2443 container_manager_linux.go:300] "Creating device plugin manager" May 15 00:29:12.574550 kubelet[2443]: I0515 00:29:12.574516 2443 state_mem.go:36] "Initialized new in-memory state store" May 15 00:29:12.574631 kubelet[2443]: I0515 00:29:12.574618 2443 kubelet.go:408] "Attempting to sync node with API server" May 15 00:29:12.574631 kubelet[2443]: I0515 00:29:12.574632 2443 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 00:29:12.574685 kubelet[2443]: I0515 00:29:12.574653 2443 kubelet.go:314] "Adding apiserver pod source" May 15 00:29:12.574685 kubelet[2443]: I0515 00:29:12.574661 2443 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 00:29:12.575364 kubelet[2443]: I0515 00:29:12.575241 2443 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 15 00:29:12.575758 kubelet[2443]: I0515 00:29:12.575736 2443 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 00:29:12.577298 kubelet[2443]: I0515 00:29:12.576126 2443 server.go:1269] "Started kubelet" May 15 00:29:12.577298 kubelet[2443]: I0515 00:29:12.576422 2443 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 00:29:12.577298 kubelet[2443]: I0515 00:29:12.576462 2443 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 00:29:12.577298 kubelet[2443]: I0515 00:29:12.576692 2443 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 00:29:12.578185 kubelet[2443]: I0515 00:29:12.577977 2443 server.go:460] "Adding debug handlers to kubelet server" May 15 00:29:12.579979 
kubelet[2443]: I0515 00:29:12.579943 2443 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 00:29:12.585368 kubelet[2443]: I0515 00:29:12.585336 2443 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 00:29:12.586504 kubelet[2443]: I0515 00:29:12.586307 2443 volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 00:29:12.586840 kubelet[2443]: E0515 00:29:12.586812 2443 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:29:12.587229 kubelet[2443]: I0515 00:29:12.586989 2443 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 15 00:29:12.587362 kubelet[2443]: I0515 00:29:12.587179 2443 reconciler.go:26] "Reconciler: start to sync state" May 15 00:29:12.595047 kubelet[2443]: I0515 00:29:12.595009 2443 factory.go:221] Registration of the systemd container factory successfully May 15 00:29:12.599957 kubelet[2443]: I0515 00:29:12.599913 2443 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 00:29:12.601024 kubelet[2443]: I0515 00:29:12.600346 2443 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 00:29:12.602061 kubelet[2443]: I0515 00:29:12.602033 2443 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 00:29:12.602061 kubelet[2443]: I0515 00:29:12.602059 2443 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 00:29:12.602530 kubelet[2443]: I0515 00:29:12.602073 2443 kubelet.go:2321] "Starting kubelet main sync loop" May 15 00:29:12.602530 kubelet[2443]: E0515 00:29:12.602109 2443 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 00:29:12.604253 kubelet[2443]: I0515 00:29:12.604216 2443 factory.go:221] Registration of the containerd container factory successfully May 15 00:29:12.608617 kubelet[2443]: E0515 00:29:12.608118 2443 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 00:29:12.642726 kubelet[2443]: I0515 00:29:12.642696 2443 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 00:29:12.642726 kubelet[2443]: I0515 00:29:12.642717 2443 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 00:29:12.642726 kubelet[2443]: I0515 00:29:12.642738 2443 state_mem.go:36] "Initialized new in-memory state store" May 15 00:29:12.642906 kubelet[2443]: I0515 00:29:12.642892 2443 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 00:29:12.642931 kubelet[2443]: I0515 00:29:12.642902 2443 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 00:29:12.642931 kubelet[2443]: I0515 00:29:12.642919 2443 policy_none.go:49] "None policy: Start" May 15 00:29:12.643590 kubelet[2443]: I0515 00:29:12.643532 2443 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 00:29:12.643590 kubelet[2443]: I0515 00:29:12.643554 2443 state_mem.go:35] "Initializing new in-memory state store" May 15 00:29:12.643730 kubelet[2443]: I0515 00:29:12.643716 2443 state_mem.go:75] "Updated machine memory state" May 15 00:29:12.647679 kubelet[2443]: I0515 00:29:12.647532 2443 
manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 00:29:12.647757 kubelet[2443]: I0515 00:29:12.647704 2443 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 00:29:12.647757 kubelet[2443]: I0515 00:29:12.647716 2443 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 00:29:12.648258 kubelet[2443]: I0515 00:29:12.647908 2443 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 00:29:12.709258 kubelet[2443]: E0515 00:29:12.709206 2443 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 00:29:12.751850 kubelet[2443]: I0515 00:29:12.751825 2443 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 00:29:12.758541 kubelet[2443]: I0515 00:29:12.758484 2443 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 15 00:29:12.758852 kubelet[2443]: I0515 00:29:12.758577 2443 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 15 00:29:12.788643 kubelet[2443]: I0515 00:29:12.788584 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a27fe2c470203edfe202ed26cfde8e8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a27fe2c470203edfe202ed26cfde8e8\") " pod="kube-system/kube-apiserver-localhost" May 15 00:29:12.788643 kubelet[2443]: I0515 00:29:12.788626 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:29:12.789138 
kubelet[2443]: I0515 00:29:12.788658 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:29:12.789138 kubelet[2443]: I0515 00:29:12.788677 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:29:12.789138 kubelet[2443]: I0515 00:29:12.788695 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:29:12.789138 kubelet[2443]: I0515 00:29:12.788711 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 15 00:29:12.789138 kubelet[2443]: I0515 00:29:12.788724 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a27fe2c470203edfe202ed26cfde8e8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a27fe2c470203edfe202ed26cfde8e8\") " pod="kube-system/kube-apiserver-localhost" May 15 00:29:12.789296 
kubelet[2443]: I0515 00:29:12.788739 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a27fe2c470203edfe202ed26cfde8e8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0a27fe2c470203edfe202ed26cfde8e8\") " pod="kube-system/kube-apiserver-localhost" May 15 00:29:12.789296 kubelet[2443]: I0515 00:29:12.788754 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:29:13.009764 kubelet[2443]: E0515 00:29:13.009332 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:13.009764 kubelet[2443]: E0515 00:29:13.009441 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:13.009764 kubelet[2443]: E0515 00:29:13.009447 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:13.108966 sudo[2480]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 15 00:29:13.109312 sudo[2480]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 15 00:29:13.530643 sudo[2480]: pam_unix(sudo:session): session closed for user root May 15 00:29:13.577298 kubelet[2443]: I0515 00:29:13.575259 2443 apiserver.go:52] "Watching apiserver" May 15 00:29:13.588537 kubelet[2443]: I0515 00:29:13.588458 2443 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 15 00:29:13.624162 kubelet[2443]: E0515 00:29:13.621896 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:13.624162 kubelet[2443]: E0515 00:29:13.622374 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:13.628258 kubelet[2443]: E0515 00:29:13.628213 2443 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 00:29:13.628449 kubelet[2443]: E0515 00:29:13.628423 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:13.665265 kubelet[2443]: I0515 00:29:13.665185 2443 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.665168164 podStartE2EDuration="3.665168164s" podCreationTimestamp="2025-05-15 00:29:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:29:13.664410851 +0000 UTC m=+1.169317877" watchObservedRunningTime="2025-05-15 00:29:13.665168164 +0000 UTC m=+1.170075190" May 15 00:29:13.685239 kubelet[2443]: I0515 00:29:13.685147 2443 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.6850478629999999 podStartE2EDuration="1.685047863s" podCreationTimestamp="2025-05-15 00:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-05-15 00:29:13.672153267 +0000 UTC m=+1.177060293" watchObservedRunningTime="2025-05-15 00:29:13.685047863 +0000 UTC m=+1.189954889" May 15 00:29:13.696208 kubelet[2443]: I0515 00:29:13.694974 2443 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.6949586 podStartE2EDuration="1.6949586s" podCreationTimestamp="2025-05-15 00:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:29:13.687576239 +0000 UTC m=+1.192483264" watchObservedRunningTime="2025-05-15 00:29:13.6949586 +0000 UTC m=+1.199865626" May 15 00:29:14.623944 kubelet[2443]: E0515 00:29:14.623590 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:15.578136 sudo[1615]: pam_unix(sudo:session): session closed for user root May 15 00:29:15.580182 sshd[1612]: pam_unix(sshd:session): session closed for user core May 15 00:29:15.584530 systemd[1]: sshd@6-10.0.0.115:22-10.0.0.1:33470.service: Deactivated successfully. May 15 00:29:15.586526 systemd[1]: session-7.scope: Deactivated successfully. May 15 00:29:15.587338 systemd[1]: session-7.scope: Consumed 6.018s CPU time, 151.0M memory peak, 0B memory swap peak. May 15 00:29:15.587963 systemd-logind[1417]: Session 7 logged out. Waiting for processes to exit. May 15 00:29:15.588898 systemd-logind[1417]: Removed session 7. 
May 15 00:29:16.488170 kubelet[2443]: E0515 00:29:16.488119 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:16.802123 kubelet[2443]: E0515 00:29:16.801593 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:18.208074 kubelet[2443]: I0515 00:29:18.208026 2443 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 00:29:18.208452 containerd[1443]: time="2025-05-15T00:29:18.208418118Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 00:29:18.209635 kubelet[2443]: I0515 00:29:18.209603 2443 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 00:29:18.944069 systemd[1]: Created slice kubepods-besteffort-pod19ba2706_332b_4a36_8902_bb00f6612c3e.slice - libcontainer container kubepods-besteffort-pod19ba2706_332b_4a36_8902_bb00f6612c3e.slice. May 15 00:29:18.957745 systemd[1]: Created slice kubepods-burstable-pod40d047a8_afe1_43b8_8318_19e53eabb68f.slice - libcontainer container kubepods-burstable-pod40d047a8_afe1_43b8_8318_19e53eabb68f.slice. 
May 15 00:29:19.028738 kubelet[2443]: I0515 00:29:19.028669 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-bpf-maps\") pod \"cilium-j5kdt\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") " pod="kube-system/cilium-j5kdt" May 15 00:29:19.028738 kubelet[2443]: I0515 00:29:19.028729 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-host-proc-sys-net\") pod \"cilium-j5kdt\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") " pod="kube-system/cilium-j5kdt" May 15 00:29:19.028738 kubelet[2443]: I0515 00:29:19.028750 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-host-proc-sys-kernel\") pod \"cilium-j5kdt\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") " pod="kube-system/cilium-j5kdt" May 15 00:29:19.028926 kubelet[2443]: I0515 00:29:19.028771 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19ba2706-332b-4a36-8902-bb00f6612c3e-xtables-lock\") pod \"kube-proxy-kdglm\" (UID: \"19ba2706-332b-4a36-8902-bb00f6612c3e\") " pod="kube-system/kube-proxy-kdglm" May 15 00:29:19.028926 kubelet[2443]: I0515 00:29:19.028787 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-cilium-run\") pod \"cilium-j5kdt\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") " pod="kube-system/cilium-j5kdt" May 15 00:29:19.028926 kubelet[2443]: I0515 00:29:19.028803 2443 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-hostproc\") pod \"cilium-j5kdt\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") " pod="kube-system/cilium-j5kdt" May 15 00:29:19.028926 kubelet[2443]: I0515 00:29:19.028821 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/40d047a8-afe1-43b8-8318-19e53eabb68f-cilium-config-path\") pod \"cilium-j5kdt\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") " pod="kube-system/cilium-j5kdt" May 15 00:29:19.028926 kubelet[2443]: I0515 00:29:19.028837 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/19ba2706-332b-4a36-8902-bb00f6612c3e-kube-proxy\") pod \"kube-proxy-kdglm\" (UID: \"19ba2706-332b-4a36-8902-bb00f6612c3e\") " pod="kube-system/kube-proxy-kdglm" May 15 00:29:19.028926 kubelet[2443]: I0515 00:29:19.028852 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-cni-path\") pod \"cilium-j5kdt\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") " pod="kube-system/cilium-j5kdt" May 15 00:29:19.029079 kubelet[2443]: I0515 00:29:19.028866 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-etc-cni-netd\") pod \"cilium-j5kdt\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") " pod="kube-system/cilium-j5kdt" May 15 00:29:19.029079 kubelet[2443]: I0515 00:29:19.028880 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/40d047a8-afe1-43b8-8318-19e53eabb68f-hubble-tls\") pod \"cilium-j5kdt\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") " pod="kube-system/cilium-j5kdt" May 15 00:29:19.029079 kubelet[2443]: I0515 00:29:19.028894 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-cilium-cgroup\") pod \"cilium-j5kdt\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") " pod="kube-system/cilium-j5kdt" May 15 00:29:19.029079 kubelet[2443]: I0515 00:29:19.028908 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-lib-modules\") pod \"cilium-j5kdt\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") " pod="kube-system/cilium-j5kdt" May 15 00:29:19.029079 kubelet[2443]: I0515 00:29:19.028925 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19ba2706-332b-4a36-8902-bb00f6612c3e-lib-modules\") pod \"kube-proxy-kdglm\" (UID: \"19ba2706-332b-4a36-8902-bb00f6612c3e\") " pod="kube-system/kube-proxy-kdglm" May 15 00:29:19.029079 kubelet[2443]: I0515 00:29:19.028945 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdb5b\" (UniqueName: \"kubernetes.io/projected/40d047a8-afe1-43b8-8318-19e53eabb68f-kube-api-access-xdb5b\") pod \"cilium-j5kdt\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") " pod="kube-system/cilium-j5kdt" May 15 00:29:19.029289 kubelet[2443]: I0515 00:29:19.028964 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t7fm\" (UniqueName: \"kubernetes.io/projected/19ba2706-332b-4a36-8902-bb00f6612c3e-kube-api-access-4t7fm\") pod \"kube-proxy-kdglm\" 
(UID: \"19ba2706-332b-4a36-8902-bb00f6612c3e\") " pod="kube-system/kube-proxy-kdglm" May 15 00:29:19.029289 kubelet[2443]: I0515 00:29:19.028982 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-xtables-lock\") pod \"cilium-j5kdt\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") " pod="kube-system/cilium-j5kdt" May 15 00:29:19.029289 kubelet[2443]: I0515 00:29:19.028998 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/40d047a8-afe1-43b8-8318-19e53eabb68f-clustermesh-secrets\") pod \"cilium-j5kdt\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") " pod="kube-system/cilium-j5kdt" May 15 00:29:19.140690 kubelet[2443]: E0515 00:29:19.140208 2443 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 15 00:29:19.142046 kubelet[2443]: E0515 00:29:19.141259 2443 projected.go:194] Error preparing data for projected volume kube-api-access-xdb5b for pod kube-system/cilium-j5kdt: configmap "kube-root-ca.crt" not found May 15 00:29:19.142046 kubelet[2443]: E0515 00:29:19.141321 2443 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/40d047a8-afe1-43b8-8318-19e53eabb68f-kube-api-access-xdb5b podName:40d047a8-afe1-43b8-8318-19e53eabb68f nodeName:}" failed. No retries permitted until 2025-05-15 00:29:19.641301567 +0000 UTC m=+7.146208593 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xdb5b" (UniqueName: "kubernetes.io/projected/40d047a8-afe1-43b8-8318-19e53eabb68f-kube-api-access-xdb5b") pod "cilium-j5kdt" (UID: "40d047a8-afe1-43b8-8318-19e53eabb68f") : configmap "kube-root-ca.crt" not found May 15 00:29:19.142046 kubelet[2443]: E0515 00:29:19.141072 2443 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 15 00:29:19.142046 kubelet[2443]: E0515 00:29:19.141618 2443 projected.go:194] Error preparing data for projected volume kube-api-access-4t7fm for pod kube-system/kube-proxy-kdglm: configmap "kube-root-ca.crt" not found May 15 00:29:19.142390 kubelet[2443]: E0515 00:29:19.142307 2443 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19ba2706-332b-4a36-8902-bb00f6612c3e-kube-api-access-4t7fm podName:19ba2706-332b-4a36-8902-bb00f6612c3e nodeName:}" failed. No retries permitted until 2025-05-15 00:29:19.642289096 +0000 UTC m=+7.147196122 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4t7fm" (UniqueName: "kubernetes.io/projected/19ba2706-332b-4a36-8902-bb00f6612c3e-kube-api-access-4t7fm") pod "kube-proxy-kdglm" (UID: "19ba2706-332b-4a36-8902-bb00f6612c3e") : configmap "kube-root-ca.crt" not found May 15 00:29:19.281249 systemd[1]: Created slice kubepods-besteffort-podd52a7fbe_24ee_4975_8b96_a0f60f0788bf.slice - libcontainer container kubepods-besteffort-podd52a7fbe_24ee_4975_8b96_a0f60f0788bf.slice. 
May 15 00:29:19.331584 kubelet[2443]: I0515 00:29:19.331493 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km6zw\" (UniqueName: \"kubernetes.io/projected/d52a7fbe-24ee-4975-8b96-a0f60f0788bf-kube-api-access-km6zw\") pod \"cilium-operator-5d85765b45-jfr78\" (UID: \"d52a7fbe-24ee-4975-8b96-a0f60f0788bf\") " pod="kube-system/cilium-operator-5d85765b45-jfr78" May 15 00:29:19.331584 kubelet[2443]: I0515 00:29:19.331542 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d52a7fbe-24ee-4975-8b96-a0f60f0788bf-cilium-config-path\") pod \"cilium-operator-5d85765b45-jfr78\" (UID: \"d52a7fbe-24ee-4975-8b96-a0f60f0788bf\") " pod="kube-system/cilium-operator-5d85765b45-jfr78" May 15 00:29:19.585394 kubelet[2443]: E0515 00:29:19.585266 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:19.586087 containerd[1443]: time="2025-05-15T00:29:19.586045749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-jfr78,Uid:d52a7fbe-24ee-4975-8b96-a0f60f0788bf,Namespace:kube-system,Attempt:0,}" May 15 00:29:19.607140 containerd[1443]: time="2025-05-15T00:29:19.607022954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:29:19.607140 containerd[1443]: time="2025-05-15T00:29:19.607087330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:29:19.607140 containerd[1443]: time="2025-05-15T00:29:19.607103695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:29:19.607438 containerd[1443]: time="2025-05-15T00:29:19.607189356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:29:19.626443 systemd[1]: Started cri-containerd-4605e5b52580a220e0d9ed0e988cfec3a265b030b135f86301376403d56e8ab6.scope - libcontainer container 4605e5b52580a220e0d9ed0e988cfec3a265b030b135f86301376403d56e8ab6. May 15 00:29:19.657288 containerd[1443]: time="2025-05-15T00:29:19.657238847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-jfr78,Uid:d52a7fbe-24ee-4975-8b96-a0f60f0788bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"4605e5b52580a220e0d9ed0e988cfec3a265b030b135f86301376403d56e8ab6\"" May 15 00:29:19.657936 kubelet[2443]: E0515 00:29:19.657910 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:19.660149 containerd[1443]: time="2025-05-15T00:29:19.659860348Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 15 00:29:19.853400 kubelet[2443]: E0515 00:29:19.853292 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:19.854212 containerd[1443]: time="2025-05-15T00:29:19.854011868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kdglm,Uid:19ba2706-332b-4a36-8902-bb00f6612c3e,Namespace:kube-system,Attempt:0,}" May 15 00:29:19.860617 kubelet[2443]: E0515 00:29:19.860588 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 
00:29:19.861234 containerd[1443]: time="2025-05-15T00:29:19.860998668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j5kdt,Uid:40d047a8-afe1-43b8-8318-19e53eabb68f,Namespace:kube-system,Attempt:0,}"
May 15 00:29:19.874876 containerd[1443]: time="2025-05-15T00:29:19.874533158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:29:19.874876 containerd[1443]: time="2025-05-15T00:29:19.874605617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:29:19.874876 containerd[1443]: time="2025-05-15T00:29:19.874617420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:29:19.874876 containerd[1443]: time="2025-05-15T00:29:19.874699160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:29:19.883280 containerd[1443]: time="2025-05-15T00:29:19.882961242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:29:19.883280 containerd[1443]: time="2025-05-15T00:29:19.883040462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:29:19.883280 containerd[1443]: time="2025-05-15T00:29:19.883051465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:29:19.883679 containerd[1443]: time="2025-05-15T00:29:19.883563994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:29:19.895417 systemd[1]: Started cri-containerd-03f591b4d52280644113ba7f7fb2b9d5ec761f2f8519d402716f9adc6aa6eec3.scope - libcontainer container 03f591b4d52280644113ba7f7fb2b9d5ec761f2f8519d402716f9adc6aa6eec3.
May 15 00:29:19.901834 systemd[1]: Started cri-containerd-ab42a324837b13bb2334027f50d8c704af97a429435552e1b164785b141a8116.scope - libcontainer container ab42a324837b13bb2334027f50d8c704af97a429435552e1b164785b141a8116.
May 15 00:29:19.920178 containerd[1443]: time="2025-05-15T00:29:19.919944881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kdglm,Uid:19ba2706-332b-4a36-8902-bb00f6612c3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"03f591b4d52280644113ba7f7fb2b9d5ec761f2f8519d402716f9adc6aa6eec3\""
May 15 00:29:19.921446 kubelet[2443]: E0515 00:29:19.921352    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:19.923859 containerd[1443]: time="2025-05-15T00:29:19.923731275Z" level=info msg="CreateContainer within sandbox \"03f591b4d52280644113ba7f7fb2b9d5ec761f2f8519d402716f9adc6aa6eec3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 15 00:29:19.931185 containerd[1443]: time="2025-05-15T00:29:19.931137861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j5kdt,Uid:40d047a8-afe1-43b8-8318-19e53eabb68f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab42a324837b13bb2334027f50d8c704af97a429435552e1b164785b141a8116\""
May 15 00:29:19.932556 kubelet[2443]: E0515 00:29:19.932522    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:19.946958 containerd[1443]: time="2025-05-15T00:29:19.946819292Z" level=info msg="CreateContainer within sandbox \"03f591b4d52280644113ba7f7fb2b9d5ec761f2f8519d402716f9adc6aa6eec3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2c1406834ab6920aeca1d81e7751ea861702c15e852fa02d50c6e3cf6fa40d48\""
May 15 00:29:19.947768 containerd[1443]: time="2025-05-15T00:29:19.947736443Z" level=info msg="StartContainer for \"2c1406834ab6920aeca1d81e7751ea861702c15e852fa02d50c6e3cf6fa40d48\""
May 15 00:29:19.982430 systemd[1]: Started cri-containerd-2c1406834ab6920aeca1d81e7751ea861702c15e852fa02d50c6e3cf6fa40d48.scope - libcontainer container 2c1406834ab6920aeca1d81e7751ea861702c15e852fa02d50c6e3cf6fa40d48.
May 15 00:29:20.009808 containerd[1443]: time="2025-05-15T00:29:20.008295406Z" level=info msg="StartContainer for \"2c1406834ab6920aeca1d81e7751ea861702c15e852fa02d50c6e3cf6fa40d48\" returns successfully"
May 15 00:29:20.026285 kubelet[2443]: E0515 00:29:20.026247    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:20.623248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount155804417.mount: Deactivated successfully.
May 15 00:29:20.636478 kubelet[2443]: E0515 00:29:20.636339    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:20.637258 kubelet[2443]: E0515 00:29:20.637239    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:20.654792 kubelet[2443]: I0515 00:29:20.654720    2443 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kdglm" podStartSLOduration=2.654701843 podStartE2EDuration="2.654701843s" podCreationTimestamp="2025-05-15 00:29:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:29:20.646977642 +0000 UTC m=+8.151884668" watchObservedRunningTime="2025-05-15 00:29:20.654701843 +0000 UTC m=+8.159608829"
May 15 00:29:21.144753 containerd[1443]: time="2025-05-15T00:29:21.144707171Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
May 15 00:29:21.145677 containerd[1443]: time="2025-05-15T00:29:21.145609295Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
May 15 00:29:21.146209 containerd[1443]: time="2025-05-15T00:29:21.146182504Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
May 15 00:29:21.152159 containerd[1443]: time="2025-05-15T00:29:21.152020260Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.492102458s"
May 15 00:29:21.152159 containerd[1443]: time="2025-05-15T00:29:21.152065710Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
May 15 00:29:21.154800 containerd[1443]: time="2025-05-15T00:29:21.154767079Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 15 00:29:21.155683 containerd[1443]: time="2025-05-15T00:29:21.155516368Z" level=info msg="CreateContainer within sandbox \"4605e5b52580a220e0d9ed0e988cfec3a265b030b135f86301376403d56e8ab6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 15 00:29:21.165047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount126549719.mount: Deactivated successfully.
May 15 00:29:21.171377 containerd[1443]: time="2025-05-15T00:29:21.171337016Z" level=info msg="CreateContainer within sandbox \"4605e5b52580a220e0d9ed0e988cfec3a265b030b135f86301376403d56e8ab6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"82a60dee8c9d90e6dab05a865e93f803a2d67eba752fda418514809971bdb50f\""
May 15 00:29:21.172759 containerd[1443]: time="2025-05-15T00:29:21.172728209Z" level=info msg="StartContainer for \"82a60dee8c9d90e6dab05a865e93f803a2d67eba752fda418514809971bdb50f\""
May 15 00:29:21.196432 systemd[1]: Started cri-containerd-82a60dee8c9d90e6dab05a865e93f803a2d67eba752fda418514809971bdb50f.scope - libcontainer container 82a60dee8c9d90e6dab05a865e93f803a2d67eba752fda418514809971bdb50f.
May 15 00:29:21.223308 containerd[1443]: time="2025-05-15T00:29:21.223253842Z" level=info msg="StartContainer for \"82a60dee8c9d90e6dab05a865e93f803a2d67eba752fda418514809971bdb50f\" returns successfully"
May 15 00:29:21.641666 kubelet[2443]: E0515 00:29:21.641539    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:21.654755 kubelet[2443]: I0515 00:29:21.654395    2443 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-jfr78" podStartSLOduration=1.160538981 podStartE2EDuration="2.654379053s" podCreationTimestamp="2025-05-15 00:29:19 +0000 UTC" firstStartedPulling="2025-05-15 00:29:19.659054425 +0000 UTC m=+7.163961411" lastFinishedPulling="2025-05-15 00:29:21.152894457 +0000 UTC m=+8.657801483" observedRunningTime="2025-05-15 00:29:21.653511697 +0000 UTC m=+9.158418723" watchObservedRunningTime="2025-05-15 00:29:21.654379053 +0000 UTC m=+9.159286079"
May 15 00:29:22.644650 kubelet[2443]: E0515 00:29:22.644571    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:26.538070 kubelet[2443]: E0515 00:29:26.538028    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:26.835631 kubelet[2443]: E0515 00:29:26.835201    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:27.762042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3854157289.mount: Deactivated successfully.
May 15 00:29:29.126479 update_engine[1422]: I20250515 00:29:29.126399  1422 update_attempter.cc:509] Updating boot flags...
May 15 00:29:29.170960 containerd[1443]: time="2025-05-15T00:29:29.170577987Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
May 15 00:29:29.171652 containerd[1443]: time="2025-05-15T00:29:29.171418752Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
May 15 00:29:29.173451 containerd[1443]: time="2025-05-15T00:29:29.173416567Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
May 15 00:29:29.177269 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2906)
May 15 00:29:29.180661 containerd[1443]: time="2025-05-15T00:29:29.180580427Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.02563739s"
May 15 00:29:29.180995 containerd[1443]: time="2025-05-15T00:29:29.180770855Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
May 15 00:29:29.196182 containerd[1443]: time="2025-05-15T00:29:29.195350412Z" level=info msg="CreateContainer within sandbox \"ab42a324837b13bb2334027f50d8c704af97a429435552e1b164785b141a8116\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 15 00:29:29.218712 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2906)
May 15 00:29:29.281172 containerd[1443]: time="2025-05-15T00:29:29.281121022Z" level=info msg="CreateContainer within sandbox \"ab42a324837b13bb2334027f50d8c704af97a429435552e1b164785b141a8116\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"88677cda2a4635d74ddd005b553d2906bdd6144d940e1004cd1992746a076d07\""
May 15 00:29:29.282009 containerd[1443]: time="2025-05-15T00:29:29.281971788Z" level=info msg="StartContainer for \"88677cda2a4635d74ddd005b553d2906bdd6144d940e1004cd1992746a076d07\""
May 15 00:29:29.311416 systemd[1]: Started cri-containerd-88677cda2a4635d74ddd005b553d2906bdd6144d940e1004cd1992746a076d07.scope - libcontainer container 88677cda2a4635d74ddd005b553d2906bdd6144d940e1004cd1992746a076d07.
May 15 00:29:29.331994 containerd[1443]: time="2025-05-15T00:29:29.330918509Z" level=info msg="StartContainer for \"88677cda2a4635d74ddd005b553d2906bdd6144d940e1004cd1992746a076d07\" returns successfully"
May 15 00:29:29.386962 systemd[1]: cri-containerd-88677cda2a4635d74ddd005b553d2906bdd6144d940e1004cd1992746a076d07.scope: Deactivated successfully.
May 15 00:29:29.517671 containerd[1443]: time="2025-05-15T00:29:29.517441624Z" level=info msg="shim disconnected" id=88677cda2a4635d74ddd005b553d2906bdd6144d940e1004cd1992746a076d07 namespace=k8s.io
May 15 00:29:29.517671 containerd[1443]: time="2025-05-15T00:29:29.517504554Z" level=warning msg="cleaning up after shim disconnected" id=88677cda2a4635d74ddd005b553d2906bdd6144d940e1004cd1992746a076d07 namespace=k8s.io
May 15 00:29:29.517671 containerd[1443]: time="2025-05-15T00:29:29.517513195Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:29:29.662053 kubelet[2443]: E0515 00:29:29.661872    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:29.663826 containerd[1443]: time="2025-05-15T00:29:29.663790676Z" level=info msg="CreateContainer within sandbox \"ab42a324837b13bb2334027f50d8c704af97a429435552e1b164785b141a8116\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 15 00:29:29.716351 containerd[1443]: time="2025-05-15T00:29:29.716294804Z" level=info msg="CreateContainer within sandbox \"ab42a324837b13bb2334027f50d8c704af97a429435552e1b164785b141a8116\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"53bffc674776f31094f726f4b5da8d6399c776ac5600c93469e2d40cbf76dd96\""
May 15 00:29:29.716822 containerd[1443]: time="2025-05-15T00:29:29.716775195Z" level=info msg="StartContainer for \"53bffc674776f31094f726f4b5da8d6399c776ac5600c93469e2d40cbf76dd96\""
May 15 00:29:29.741417 systemd[1]: Started cri-containerd-53bffc674776f31094f726f4b5da8d6399c776ac5600c93469e2d40cbf76dd96.scope - libcontainer container 53bffc674776f31094f726f4b5da8d6399c776ac5600c93469e2d40cbf76dd96.
May 15 00:29:29.765708 containerd[1443]: time="2025-05-15T00:29:29.765601819Z" level=info msg="StartContainer for \"53bffc674776f31094f726f4b5da8d6399c776ac5600c93469e2d40cbf76dd96\" returns successfully"
May 15 00:29:29.793268 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 00:29:29.793493 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 15 00:29:29.793563 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 15 00:29:29.798542 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 00:29:29.798734 systemd[1]: cri-containerd-53bffc674776f31094f726f4b5da8d6399c776ac5600c93469e2d40cbf76dd96.scope: Deactivated successfully.
May 15 00:29:29.819281 containerd[1443]: time="2025-05-15T00:29:29.819168344Z" level=info msg="shim disconnected" id=53bffc674776f31094f726f4b5da8d6399c776ac5600c93469e2d40cbf76dd96 namespace=k8s.io
May 15 00:29:29.819281 containerd[1443]: time="2025-05-15T00:29:29.819266798Z" level=warning msg="cleaning up after shim disconnected" id=53bffc674776f31094f726f4b5da8d6399c776ac5600c93469e2d40cbf76dd96 namespace=k8s.io
May 15 00:29:29.819281 containerd[1443]: time="2025-05-15T00:29:29.819277240Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:29:29.821972 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 00:29:30.258598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88677cda2a4635d74ddd005b553d2906bdd6144d940e1004cd1992746a076d07-rootfs.mount: Deactivated successfully.
May 15 00:29:30.664762 kubelet[2443]: E0515 00:29:30.664706    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:30.667420 containerd[1443]: time="2025-05-15T00:29:30.667365031Z" level=info msg="CreateContainer within sandbox \"ab42a324837b13bb2334027f50d8c704af97a429435552e1b164785b141a8116\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 15 00:29:30.684579 containerd[1443]: time="2025-05-15T00:29:30.684469119Z" level=info msg="CreateContainer within sandbox \"ab42a324837b13bb2334027f50d8c704af97a429435552e1b164785b141a8116\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d5bc6875f6d94ce49daa4233b5c1d5e909307f5f567f82199a8ed63c68b68882\""
May 15 00:29:30.686056 containerd[1443]: time="2025-05-15T00:29:30.686027218Z" level=info msg="StartContainer for \"d5bc6875f6d94ce49daa4233b5c1d5e909307f5f567f82199a8ed63c68b68882\""
May 15 00:29:30.724404 systemd[1]: Started cri-containerd-d5bc6875f6d94ce49daa4233b5c1d5e909307f5f567f82199a8ed63c68b68882.scope - libcontainer container d5bc6875f6d94ce49daa4233b5c1d5e909307f5f567f82199a8ed63c68b68882.
May 15 00:29:30.753163 containerd[1443]: time="2025-05-15T00:29:30.753112702Z" level=info msg="StartContainer for \"d5bc6875f6d94ce49daa4233b5c1d5e909307f5f567f82199a8ed63c68b68882\" returns successfully"
May 15 00:29:30.764531 systemd[1]: cri-containerd-d5bc6875f6d94ce49daa4233b5c1d5e909307f5f567f82199a8ed63c68b68882.scope: Deactivated successfully.
May 15 00:29:30.785419 containerd[1443]: time="2025-05-15T00:29:30.785215462Z" level=info msg="shim disconnected" id=d5bc6875f6d94ce49daa4233b5c1d5e909307f5f567f82199a8ed63c68b68882 namespace=k8s.io
May 15 00:29:30.785419 containerd[1443]: time="2025-05-15T00:29:30.785296113Z" level=warning msg="cleaning up after shim disconnected" id=d5bc6875f6d94ce49daa4233b5c1d5e909307f5f567f82199a8ed63c68b68882 namespace=k8s.io
May 15 00:29:30.785419 containerd[1443]: time="2025-05-15T00:29:30.785305154Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:29:31.258581 systemd[1]: run-containerd-runc-k8s.io-d5bc6875f6d94ce49daa4233b5c1d5e909307f5f567f82199a8ed63c68b68882-runc.TM1gd5.mount: Deactivated successfully.
May 15 00:29:31.258676 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5bc6875f6d94ce49daa4233b5c1d5e909307f5f567f82199a8ed63c68b68882-rootfs.mount: Deactivated successfully.
May 15 00:29:31.669581 kubelet[2443]: E0515 00:29:31.668927    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:31.670777 containerd[1443]: time="2025-05-15T00:29:31.670740467Z" level=info msg="CreateContainer within sandbox \"ab42a324837b13bb2334027f50d8c704af97a429435552e1b164785b141a8116\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 15 00:29:31.689939 containerd[1443]: time="2025-05-15T00:29:31.689891194Z" level=info msg="CreateContainer within sandbox \"ab42a324837b13bb2334027f50d8c704af97a429435552e1b164785b141a8116\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"833fd89bbf337a76b02cc9eabf2244c9175a6ff8970fcbc9d65c0dbf02f84fef\""
May 15 00:29:31.690782 containerd[1443]: time="2025-05-15T00:29:31.690755150Z" level=info msg="StartContainer for \"833fd89bbf337a76b02cc9eabf2244c9175a6ff8970fcbc9d65c0dbf02f84fef\""
May 15 00:29:31.718426 systemd[1]: Started cri-containerd-833fd89bbf337a76b02cc9eabf2244c9175a6ff8970fcbc9d65c0dbf02f84fef.scope - libcontainer container 833fd89bbf337a76b02cc9eabf2244c9175a6ff8970fcbc9d65c0dbf02f84fef.
May 15 00:29:31.745068 systemd[1]: cri-containerd-833fd89bbf337a76b02cc9eabf2244c9175a6ff8970fcbc9d65c0dbf02f84fef.scope: Deactivated successfully.
May 15 00:29:31.747816 containerd[1443]: time="2025-05-15T00:29:31.747771353Z" level=info msg="StartContainer for \"833fd89bbf337a76b02cc9eabf2244c9175a6ff8970fcbc9d65c0dbf02f84fef\" returns successfully"
May 15 00:29:31.772552 containerd[1443]: time="2025-05-15T00:29:31.772490227Z" level=info msg="shim disconnected" id=833fd89bbf337a76b02cc9eabf2244c9175a6ff8970fcbc9d65c0dbf02f84fef namespace=k8s.io
May 15 00:29:31.772552 containerd[1443]: time="2025-05-15T00:29:31.772543634Z" level=warning msg="cleaning up after shim disconnected" id=833fd89bbf337a76b02cc9eabf2244c9175a6ff8970fcbc9d65c0dbf02f84fef namespace=k8s.io
May 15 00:29:31.772552 containerd[1443]: time="2025-05-15T00:29:31.772552275Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:29:32.258673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-833fd89bbf337a76b02cc9eabf2244c9175a6ff8970fcbc9d65c0dbf02f84fef-rootfs.mount: Deactivated successfully.
May 15 00:29:32.672967 kubelet[2443]: E0515 00:29:32.672926    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:32.675354 containerd[1443]: time="2025-05-15T00:29:32.675218428Z" level=info msg="CreateContainer within sandbox \"ab42a324837b13bb2334027f50d8c704af97a429435552e1b164785b141a8116\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 15 00:29:32.695545 containerd[1443]: time="2025-05-15T00:29:32.695493698Z" level=info msg="CreateContainer within sandbox \"ab42a324837b13bb2334027f50d8c704af97a429435552e1b164785b141a8116\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b73ca27de19d4276429abcb6d17110d564357894d12463836f47164f88a3113a\""
May 15 00:29:32.696214 containerd[1443]: time="2025-05-15T00:29:32.696146261Z" level=info msg="StartContainer for \"b73ca27de19d4276429abcb6d17110d564357894d12463836f47164f88a3113a\""
May 15 00:29:32.727398 systemd[1]: Started cri-containerd-b73ca27de19d4276429abcb6d17110d564357894d12463836f47164f88a3113a.scope - libcontainer container b73ca27de19d4276429abcb6d17110d564357894d12463836f47164f88a3113a.
May 15 00:29:32.750706 containerd[1443]: time="2025-05-15T00:29:32.750650904Z" level=info msg="StartContainer for \"b73ca27de19d4276429abcb6d17110d564357894d12463836f47164f88a3113a\" returns successfully"
May 15 00:29:32.931771 kubelet[2443]: I0515 00:29:32.931461    2443 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 15 00:29:32.966510 systemd[1]: Created slice kubepods-burstable-pod728139fe_cc46_4984_ac72_9ce1dec0cacc.slice - libcontainer container kubepods-burstable-pod728139fe_cc46_4984_ac72_9ce1dec0cacc.slice.
May 15 00:29:32.969984 systemd[1]: Created slice kubepods-burstable-pod26cefae5_0258_4e6b_af5c_94d021deb27d.slice - libcontainer container kubepods-burstable-pod26cefae5_0258_4e6b_af5c_94d021deb27d.slice.
May 15 00:29:33.039070 kubelet[2443]: I0515 00:29:33.039014    2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnbh7\" (UniqueName: \"kubernetes.io/projected/728139fe-cc46-4984-ac72-9ce1dec0cacc-kube-api-access-wnbh7\") pod \"coredns-6f6b679f8f-krnfv\" (UID: \"728139fe-cc46-4984-ac72-9ce1dec0cacc\") " pod="kube-system/coredns-6f6b679f8f-krnfv"
May 15 00:29:33.039070 kubelet[2443]: I0515 00:29:33.039065    2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/728139fe-cc46-4984-ac72-9ce1dec0cacc-config-volume\") pod \"coredns-6f6b679f8f-krnfv\" (UID: \"728139fe-cc46-4984-ac72-9ce1dec0cacc\") " pod="kube-system/coredns-6f6b679f8f-krnfv"
May 15 00:29:33.039259 kubelet[2443]: I0515 00:29:33.039085    2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26cefae5-0258-4e6b-af5c-94d021deb27d-config-volume\") pod \"coredns-6f6b679f8f-g68ph\" (UID: \"26cefae5-0258-4e6b-af5c-94d021deb27d\") " pod="kube-system/coredns-6f6b679f8f-g68ph"
May 15 00:29:33.039259 kubelet[2443]: I0515 00:29:33.039110    2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppls5\" (UniqueName: \"kubernetes.io/projected/26cefae5-0258-4e6b-af5c-94d021deb27d-kube-api-access-ppls5\") pod \"coredns-6f6b679f8f-g68ph\" (UID: \"26cefae5-0258-4e6b-af5c-94d021deb27d\") " pod="kube-system/coredns-6f6b679f8f-g68ph"
May 15 00:29:33.260730 systemd[1]: run-containerd-runc-k8s.io-b73ca27de19d4276429abcb6d17110d564357894d12463836f47164f88a3113a-runc.VPvUSf.mount: Deactivated successfully.
May 15 00:29:33.271070 kubelet[2443]: E0515 00:29:33.269529    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:33.271579 containerd[1443]: time="2025-05-15T00:29:33.271538446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-krnfv,Uid:728139fe-cc46-4984-ac72-9ce1dec0cacc,Namespace:kube-system,Attempt:0,}"
May 15 00:29:33.275204 kubelet[2443]: E0515 00:29:33.274987    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:33.275405 containerd[1443]: time="2025-05-15T00:29:33.275370753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-g68ph,Uid:26cefae5-0258-4e6b-af5c-94d021deb27d,Namespace:kube-system,Attempt:0,}"
May 15 00:29:33.677005 kubelet[2443]: E0515 00:29:33.676969    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:33.691909 kubelet[2443]: I0515 00:29:33.691848    2443 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j5kdt" podStartSLOduration=6.443550339 podStartE2EDuration="15.691828615s" podCreationTimestamp="2025-05-15 00:29:18 +0000 UTC" firstStartedPulling="2025-05-15 00:29:19.933424677 +0000 UTC m=+7.438331703" lastFinishedPulling="2025-05-15 00:29:29.181702953 +0000 UTC m=+16.686609979" observedRunningTime="2025-05-15 00:29:33.691732523 +0000 UTC m=+21.196639549" watchObservedRunningTime="2025-05-15 00:29:33.691828615 +0000 UTC m=+21.196735641"
May 15 00:29:34.679128 kubelet[2443]: E0515 00:29:34.679038    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:34.958922 systemd-networkd[1377]: cilium_host: Link UP
May 15 00:29:34.959054 systemd-networkd[1377]: cilium_net: Link UP
May 15 00:29:34.959186 systemd-networkd[1377]: cilium_net: Gained carrier
May 15 00:29:34.959747 systemd-networkd[1377]: cilium_host: Gained carrier
May 15 00:29:35.050425 systemd-networkd[1377]: cilium_vxlan: Link UP
May 15 00:29:35.050442 systemd-networkd[1377]: cilium_vxlan: Gained carrier
May 15 00:29:35.050672 systemd-networkd[1377]: cilium_net: Gained IPv6LL
May 15 00:29:35.404409 kernel: NET: Registered PF_ALG protocol family
May 15 00:29:35.546344 systemd-networkd[1377]: cilium_host: Gained IPv6LL
May 15 00:29:35.681062 kubelet[2443]: E0515 00:29:35.680921    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:35.994892 systemd-networkd[1377]: lxc_health: Link UP
May 15 00:29:36.001190 systemd-networkd[1377]: lxc_health: Gained carrier
May 15 00:29:36.384798 systemd-networkd[1377]: lxc159687012db8: Link UP
May 15 00:29:36.395318 kernel: eth0: renamed from tmp249e4
May 15 00:29:36.405672 systemd-networkd[1377]: lxc96f16ff4e7f7: Link UP
May 15 00:29:36.418940 systemd-networkd[1377]: lxc159687012db8: Gained carrier
May 15 00:29:36.419254 kernel: eth0: renamed from tmpd8d50
May 15 00:29:36.424944 systemd-networkd[1377]: lxc96f16ff4e7f7: Gained carrier
May 15 00:29:36.954424 systemd-networkd[1377]: cilium_vxlan: Gained IPv6LL
May 15 00:29:37.723339 systemd-networkd[1377]: lxc_health: Gained IPv6LL
May 15 00:29:37.862153 kubelet[2443]: E0515 00:29:37.862121    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:37.914372 systemd-networkd[1377]: lxc159687012db8: Gained IPv6LL
May 15 00:29:37.978382 systemd-networkd[1377]: lxc96f16ff4e7f7: Gained IPv6LL
May 15 00:29:38.686472 kubelet[2443]: E0515 00:29:38.686443    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:39.689756 kubelet[2443]: E0515 00:29:39.689725    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:40.082006 containerd[1443]: time="2025-05-15T00:29:40.081502693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:29:40.082006 containerd[1443]: time="2025-05-15T00:29:40.081602062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:29:40.082006 containerd[1443]: time="2025-05-15T00:29:40.081637625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:29:40.082006 containerd[1443]: time="2025-05-15T00:29:40.081751915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:29:40.100763 containerd[1443]: time="2025-05-15T00:29:40.100670851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:29:40.100763 containerd[1443]: time="2025-05-15T00:29:40.100729856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:29:40.100763 containerd[1443]: time="2025-05-15T00:29:40.100741137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:29:40.101001 containerd[1443]: time="2025-05-15T00:29:40.100823945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:29:40.105426 systemd[1]: Started cri-containerd-d8d50f28ebc3d71c74107f83cf5c31890f01ec0b44213d7fa7ec367fdb29c55a.scope - libcontainer container d8d50f28ebc3d71c74107f83cf5c31890f01ec0b44213d7fa7ec367fdb29c55a.
May 15 00:29:40.117522 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 15 00:29:40.118038 systemd[1]: Started cri-containerd-249e48c874901be3c5eb078e308a7c41cb1fd44acb540c51fb0da91618e103c2.scope - libcontainer container 249e48c874901be3c5eb078e308a7c41cb1fd44acb540c51fb0da91618e103c2.
May 15 00:29:40.131873 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 15 00:29:40.135939 containerd[1443]: time="2025-05-15T00:29:40.135870566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-g68ph,Uid:26cefae5-0258-4e6b-af5c-94d021deb27d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8d50f28ebc3d71c74107f83cf5c31890f01ec0b44213d7fa7ec367fdb29c55a\""
May 15 00:29:40.137439 kubelet[2443]: E0515 00:29:40.137418    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:40.139891 containerd[1443]: time="2025-05-15T00:29:40.139764075Z" level=info msg="CreateContainer within sandbox \"d8d50f28ebc3d71c74107f83cf5c31890f01ec0b44213d7fa7ec367fdb29c55a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 15 00:29:40.152313 containerd[1443]: time="2025-05-15T00:29:40.152270115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-krnfv,Uid:728139fe-cc46-4984-ac72-9ce1dec0cacc,Namespace:kube-system,Attempt:0,} returns sandbox id \"249e48c874901be3c5eb078e308a7c41cb1fd44acb540c51fb0da91618e103c2\""
May 15 00:29:40.153163 kubelet[2443]: E0515 00:29:40.153141    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:40.155253 containerd[1443]: time="2025-05-15T00:29:40.155079527Z" level=info msg="CreateContainer within sandbox \"249e48c874901be3c5eb078e308a7c41cb1fd44acb540c51fb0da91618e103c2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 15 00:29:40.155637 containerd[1443]: time="2025-05-15T00:29:40.155250782Z" level=info msg="CreateContainer within sandbox \"d8d50f28ebc3d71c74107f83cf5c31890f01ec0b44213d7fa7ec367fdb29c55a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c2a2e641d2c83191e08db28363ec3f435c84d4c287bd5bb02e38a8b6543e75ab\""
May 15 00:29:40.156636 containerd[1443]: time="2025-05-15T00:29:40.156611544Z" level=info msg="StartContainer for \"c2a2e641d2c83191e08db28363ec3f435c84d4c287bd5bb02e38a8b6543e75ab\""
May 15 00:29:40.168600 containerd[1443]: time="2025-05-15T00:29:40.168555535Z" level=info msg="CreateContainer within sandbox \"249e48c874901be3c5eb078e308a7c41cb1fd44acb540c51fb0da91618e103c2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c9e0e4cf4a3fe4f48b515b86b9ae9a003c122e480180fe48bd32f556aa1cb03b\""
May 15 00:29:40.172040 containerd[1443]: time="2025-05-15T00:29:40.171099243Z" level=info msg="StartContainer for \"c9e0e4cf4a3fe4f48b515b86b9ae9a003c122e480180fe48bd32f556aa1cb03b\""
May 15 00:29:40.190463 systemd[1]: Started cri-containerd-c2a2e641d2c83191e08db28363ec3f435c84d4c287bd5bb02e38a8b6543e75ab.scope - libcontainer container c2a2e641d2c83191e08db28363ec3f435c84d4c287bd5bb02e38a8b6543e75ab.
May 15 00:29:40.195961 systemd[1]: Started cri-containerd-c9e0e4cf4a3fe4f48b515b86b9ae9a003c122e480180fe48bd32f556aa1cb03b.scope - libcontainer container c9e0e4cf4a3fe4f48b515b86b9ae9a003c122e480180fe48bd32f556aa1cb03b.
May 15 00:29:40.268988 containerd[1443]: time="2025-05-15T00:29:40.268921370Z" level=info msg="StartContainer for \"c2a2e641d2c83191e08db28363ec3f435c84d4c287bd5bb02e38a8b6543e75ab\" returns successfully"
May 15 00:29:40.270884 containerd[1443]: time="2025-05-15T00:29:40.269029619Z" level=info msg="StartContainer for \"c9e0e4cf4a3fe4f48b515b86b9ae9a003c122e480180fe48bd32f556aa1cb03b\" returns successfully"
May 15 00:29:40.689985 kubelet[2443]: E0515 00:29:40.689946 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:40.694558 kubelet[2443]: E0515 00:29:40.692179 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:40.727259 kubelet[2443]: I0515 00:29:40.727149 2443 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-krnfv" podStartSLOduration=21.727126994 podStartE2EDuration="21.727126994s" podCreationTimestamp="2025-05-15 00:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:29:40.705149624 +0000 UTC m=+28.210056650" watchObservedRunningTime="2025-05-15 00:29:40.727126994 +0000 UTC m=+28.232034100"
May 15 00:29:41.694492 kubelet[2443]: E0515 00:29:41.694457 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:41.694865 kubelet[2443]: E0515 00:29:41.694592 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:42.696461 kubelet[2443]: E0515 00:29:42.695821 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:42.696461 kubelet[2443]: E0515 00:29:42.696383 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:29:42.919514 systemd[1]: Started sshd@7-10.0.0.115:22-10.0.0.1:60554.service - OpenSSH per-connection server daemon (10.0.0.1:60554).
May 15 00:29:42.959192 sshd[3861]: Accepted publickey for core from 10.0.0.1 port 60554 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:29:42.961037 sshd[3861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:29:42.964838 systemd-logind[1417]: New session 8 of user core.
May 15 00:29:42.976381 systemd[1]: Started session-8.scope - Session 8 of User core.
May 15 00:29:43.097104 sshd[3861]: pam_unix(sshd:session): session closed for user core
May 15 00:29:43.100650 systemd[1]: sshd@7-10.0.0.115:22-10.0.0.1:60554.service: Deactivated successfully.
May 15 00:29:43.102407 systemd[1]: session-8.scope: Deactivated successfully.
May 15 00:29:43.103048 systemd-logind[1417]: Session 8 logged out. Waiting for processes to exit.
May 15 00:29:43.104060 systemd-logind[1417]: Removed session 8.
May 15 00:29:48.115860 systemd[1]: Started sshd@8-10.0.0.115:22-10.0.0.1:60558.service - OpenSSH per-connection server daemon (10.0.0.1:60558).
May 15 00:29:48.152534 sshd[3877]: Accepted publickey for core from 10.0.0.1 port 60558 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:29:48.153930 sshd[3877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:29:48.160435 systemd-logind[1417]: New session 9 of user core.
May 15 00:29:48.174424 systemd[1]: Started session-9.scope - Session 9 of User core.
May 15 00:29:48.294307 sshd[3877]: pam_unix(sshd:session): session closed for user core
May 15 00:29:48.297627 systemd[1]: sshd@8-10.0.0.115:22-10.0.0.1:60558.service: Deactivated successfully.
May 15 00:29:48.301084 systemd[1]: session-9.scope: Deactivated successfully.
May 15 00:29:48.303821 systemd-logind[1417]: Session 9 logged out. Waiting for processes to exit.
May 15 00:29:48.305111 systemd-logind[1417]: Removed session 9.
May 15 00:29:53.307138 systemd[1]: Started sshd@9-10.0.0.115:22-10.0.0.1:37256.service - OpenSSH per-connection server daemon (10.0.0.1:37256).
May 15 00:29:53.351722 sshd[3894]: Accepted publickey for core from 10.0.0.1 port 37256 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:29:53.353109 sshd[3894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:29:53.357073 systemd-logind[1417]: New session 10 of user core.
May 15 00:29:53.368444 systemd[1]: Started session-10.scope - Session 10 of User core.
May 15 00:29:53.483509 sshd[3894]: pam_unix(sshd:session): session closed for user core
May 15 00:29:53.493324 systemd[1]: sshd@9-10.0.0.115:22-10.0.0.1:37256.service: Deactivated successfully.
May 15 00:29:53.495821 systemd[1]: session-10.scope: Deactivated successfully.
May 15 00:29:53.497111 systemd-logind[1417]: Session 10 logged out. Waiting for processes to exit.
May 15 00:29:53.507613 systemd[1]: Started sshd@10-10.0.0.115:22-10.0.0.1:37270.service - OpenSSH per-connection server daemon (10.0.0.1:37270).
May 15 00:29:53.509534 systemd-logind[1417]: Removed session 10.
May 15 00:29:53.546265 sshd[3911]: Accepted publickey for core from 10.0.0.1 port 37270 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:29:53.547940 sshd[3911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:29:53.552780 systemd-logind[1417]: New session 11 of user core.
May 15 00:29:53.563412 systemd[1]: Started session-11.scope - Session 11 of User core.
May 15 00:29:53.721520 sshd[3911]: pam_unix(sshd:session): session closed for user core
May 15 00:29:53.733204 systemd[1]: sshd@10-10.0.0.115:22-10.0.0.1:37270.service: Deactivated successfully.
May 15 00:29:53.737803 systemd[1]: session-11.scope: Deactivated successfully.
May 15 00:29:53.740613 systemd-logind[1417]: Session 11 logged out. Waiting for processes to exit.
May 15 00:29:53.761600 systemd[1]: Started sshd@11-10.0.0.115:22-10.0.0.1:37278.service - OpenSSH per-connection server daemon (10.0.0.1:37278).
May 15 00:29:53.763595 systemd-logind[1417]: Removed session 11.
May 15 00:29:53.798425 sshd[3924]: Accepted publickey for core from 10.0.0.1 port 37278 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:29:53.799699 sshd[3924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:29:53.803541 systemd-logind[1417]: New session 12 of user core.
May 15 00:29:53.812484 systemd[1]: Started session-12.scope - Session 12 of User core.
May 15 00:29:53.925737 sshd[3924]: pam_unix(sshd:session): session closed for user core
May 15 00:29:53.929153 systemd[1]: sshd@11-10.0.0.115:22-10.0.0.1:37278.service: Deactivated successfully.
May 15 00:29:53.931399 systemd[1]: session-12.scope: Deactivated successfully.
May 15 00:29:53.932718 systemd-logind[1417]: Session 12 logged out. Waiting for processes to exit.
May 15 00:29:53.936296 systemd-logind[1417]: Removed session 12.
May 15 00:29:58.939737 systemd[1]: Started sshd@12-10.0.0.115:22-10.0.0.1:37294.service - OpenSSH per-connection server daemon (10.0.0.1:37294).
May 15 00:29:58.987982 sshd[3940]: Accepted publickey for core from 10.0.0.1 port 37294 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:29:58.989941 sshd[3940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:29:58.995412 systemd-logind[1417]: New session 13 of user core.
May 15 00:29:59.005450 systemd[1]: Started session-13.scope - Session 13 of User core.
May 15 00:29:59.125299 sshd[3940]: pam_unix(sshd:session): session closed for user core
May 15 00:29:59.129875 systemd[1]: sshd@12-10.0.0.115:22-10.0.0.1:37294.service: Deactivated successfully.
May 15 00:29:59.131898 systemd[1]: session-13.scope: Deactivated successfully.
May 15 00:29:59.133581 systemd-logind[1417]: Session 13 logged out. Waiting for processes to exit.
May 15 00:29:59.135151 systemd-logind[1417]: Removed session 13.
May 15 00:30:04.140060 systemd[1]: Started sshd@13-10.0.0.115:22-10.0.0.1:46794.service - OpenSSH per-connection server daemon (10.0.0.1:46794).
May 15 00:30:04.186914 sshd[3954]: Accepted publickey for core from 10.0.0.1 port 46794 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:30:04.188446 sshd[3954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:30:04.192662 systemd-logind[1417]: New session 14 of user core.
May 15 00:30:04.205452 systemd[1]: Started session-14.scope - Session 14 of User core.
May 15 00:30:04.314559 sshd[3954]: pam_unix(sshd:session): session closed for user core
May 15 00:30:04.330022 systemd[1]: sshd@13-10.0.0.115:22-10.0.0.1:46794.service: Deactivated successfully.
May 15 00:30:04.331723 systemd[1]: session-14.scope: Deactivated successfully.
May 15 00:30:04.334457 systemd-logind[1417]: Session 14 logged out. Waiting for processes to exit.
May 15 00:30:04.340601 systemd[1]: Started sshd@14-10.0.0.115:22-10.0.0.1:46806.service - OpenSSH per-connection server daemon (10.0.0.1:46806).
May 15 00:30:04.341565 systemd-logind[1417]: Removed session 14.
May 15 00:30:04.378241 sshd[3968]: Accepted publickey for core from 10.0.0.1 port 46806 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:30:04.379799 sshd[3968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:30:04.383746 systemd-logind[1417]: New session 15 of user core.
May 15 00:30:04.390430 systemd[1]: Started session-15.scope - Session 15 of User core.
May 15 00:30:04.624908 sshd[3968]: pam_unix(sshd:session): session closed for user core
May 15 00:30:04.633584 systemd[1]: sshd@14-10.0.0.115:22-10.0.0.1:46806.service: Deactivated successfully.
May 15 00:30:04.635311 systemd[1]: session-15.scope: Deactivated successfully.
May 15 00:30:04.636979 systemd-logind[1417]: Session 15 logged out. Waiting for processes to exit.
May 15 00:30:04.649677 systemd[1]: Started sshd@15-10.0.0.115:22-10.0.0.1:46808.service - OpenSSH per-connection server daemon (10.0.0.1:46808).
May 15 00:30:04.651533 systemd-logind[1417]: Removed session 15.
May 15 00:30:04.690700 sshd[3980]: Accepted publickey for core from 10.0.0.1 port 46808 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:30:04.692090 sshd[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:30:04.696015 systemd-logind[1417]: New session 16 of user core.
May 15 00:30:04.700415 systemd[1]: Started session-16.scope - Session 16 of User core.
May 15 00:30:06.074559 sshd[3980]: pam_unix(sshd:session): session closed for user core
May 15 00:30:06.086840 systemd[1]: sshd@15-10.0.0.115:22-10.0.0.1:46808.service: Deactivated successfully.
May 15 00:30:06.088323 systemd[1]: session-16.scope: Deactivated successfully.
May 15 00:30:06.092320 systemd-logind[1417]: Session 16 logged out. Waiting for processes to exit.
May 15 00:30:06.102167 systemd[1]: Started sshd@16-10.0.0.115:22-10.0.0.1:46822.service - OpenSSH per-connection server daemon (10.0.0.1:46822).
May 15 00:30:06.104281 systemd-logind[1417]: Removed session 16.
May 15 00:30:06.143111 sshd[4001]: Accepted publickey for core from 10.0.0.1 port 46822 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:30:06.144753 sshd[4001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:30:06.148672 systemd-logind[1417]: New session 17 of user core.
May 15 00:30:06.162421 systemd[1]: Started session-17.scope - Session 17 of User core.
May 15 00:30:06.390131 sshd[4001]: pam_unix(sshd:session): session closed for user core
May 15 00:30:06.398786 systemd[1]: sshd@16-10.0.0.115:22-10.0.0.1:46822.service: Deactivated successfully.
May 15 00:30:06.400569 systemd[1]: session-17.scope: Deactivated successfully.
May 15 00:30:06.403630 systemd-logind[1417]: Session 17 logged out. Waiting for processes to exit.
May 15 00:30:06.413560 systemd[1]: Started sshd@17-10.0.0.115:22-10.0.0.1:46830.service - OpenSSH per-connection server daemon (10.0.0.1:46830).
May 15 00:30:06.415375 systemd-logind[1417]: Removed session 17.
May 15 00:30:06.450093 sshd[4014]: Accepted publickey for core from 10.0.0.1 port 46830 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:30:06.451527 sshd[4014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:30:06.455309 systemd-logind[1417]: New session 18 of user core.
May 15 00:30:06.464711 systemd[1]: Started session-18.scope - Session 18 of User core.
May 15 00:30:06.577770 sshd[4014]: pam_unix(sshd:session): session closed for user core
May 15 00:30:06.581404 systemd[1]: sshd@17-10.0.0.115:22-10.0.0.1:46830.service: Deactivated successfully.
May 15 00:30:06.584859 systemd[1]: session-18.scope: Deactivated successfully.
May 15 00:30:06.585794 systemd-logind[1417]: Session 18 logged out. Waiting for processes to exit.
May 15 00:30:06.586623 systemd-logind[1417]: Removed session 18.
May 15 00:30:11.592069 systemd[1]: Started sshd@18-10.0.0.115:22-10.0.0.1:46832.service - OpenSSH per-connection server daemon (10.0.0.1:46832).
May 15 00:30:11.634266 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 46832 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:30:11.635096 sshd[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:30:11.639664 systemd-logind[1417]: New session 19 of user core.
May 15 00:30:11.650422 systemd[1]: Started session-19.scope - Session 19 of User core.
May 15 00:30:11.765530 sshd[4031]: pam_unix(sshd:session): session closed for user core
May 15 00:30:11.769167 systemd[1]: sshd@18-10.0.0.115:22-10.0.0.1:46832.service: Deactivated successfully.
May 15 00:30:11.771195 systemd[1]: session-19.scope: Deactivated successfully.
May 15 00:30:11.772879 systemd-logind[1417]: Session 19 logged out. Waiting for processes to exit.
May 15 00:30:11.773858 systemd-logind[1417]: Removed session 19.
May 15 00:30:16.777335 systemd[1]: Started sshd@19-10.0.0.115:22-10.0.0.1:36108.service - OpenSSH per-connection server daemon (10.0.0.1:36108).
May 15 00:30:16.821572 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 36108 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:30:16.822919 sshd[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:30:16.827016 systemd-logind[1417]: New session 20 of user core.
May 15 00:30:16.835423 systemd[1]: Started session-20.scope - Session 20 of User core.
May 15 00:30:16.949435 sshd[4047]: pam_unix(sshd:session): session closed for user core
May 15 00:30:16.952331 systemd[1]: sshd@19-10.0.0.115:22-10.0.0.1:36108.service: Deactivated successfully.
May 15 00:30:16.953869 systemd[1]: session-20.scope: Deactivated successfully.
May 15 00:30:16.955896 systemd-logind[1417]: Session 20 logged out. Waiting for processes to exit.
May 15 00:30:16.957705 systemd-logind[1417]: Removed session 20.
May 15 00:30:21.963402 systemd[1]: Started sshd@20-10.0.0.115:22-10.0.0.1:36124.service - OpenSSH per-connection server daemon (10.0.0.1:36124).
May 15 00:30:22.010282 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 36124 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:30:22.011614 sshd[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:30:22.015859 systemd-logind[1417]: New session 21 of user core.
May 15 00:30:22.032449 systemd[1]: Started session-21.scope - Session 21 of User core.
May 15 00:30:22.149563 sshd[4064]: pam_unix(sshd:session): session closed for user core
May 15 00:30:22.156911 systemd[1]: sshd@20-10.0.0.115:22-10.0.0.1:36124.service: Deactivated successfully.
May 15 00:30:22.159086 systemd[1]: session-21.scope: Deactivated successfully.
May 15 00:30:22.160554 systemd-logind[1417]: Session 21 logged out. Waiting for processes to exit.
May 15 00:30:22.171520 systemd[1]: Started sshd@21-10.0.0.115:22-10.0.0.1:36132.service - OpenSSH per-connection server daemon (10.0.0.1:36132).
May 15 00:30:22.172975 systemd-logind[1417]: Removed session 21.
May 15 00:30:22.212574 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 36132 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:30:22.213905 sshd[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:30:22.220152 systemd-logind[1417]: New session 22 of user core.
May 15 00:30:22.233428 systemd[1]: Started session-22.scope - Session 22 of User core.
May 15 00:30:23.603504 kubelet[2443]: E0515 00:30:23.603462 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:30:24.096418 kubelet[2443]: I0515 00:30:24.096058 2443 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-g68ph" podStartSLOduration=65.096040101 podStartE2EDuration="1m5.096040101s" podCreationTimestamp="2025-05-15 00:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:29:40.744120757 +0000 UTC m=+28.249027783" watchObservedRunningTime="2025-05-15 00:30:24.096040101 +0000 UTC m=+71.600947127"
May 15 00:30:24.104548 containerd[1443]: time="2025-05-15T00:30:24.104503000Z" level=info msg="StopContainer for \"82a60dee8c9d90e6dab05a865e93f803a2d67eba752fda418514809971bdb50f\" with timeout 30 (s)"
May 15 00:30:24.106039 containerd[1443]: time="2025-05-15T00:30:24.104976275Z" level=info msg="Stop container \"82a60dee8c9d90e6dab05a865e93f803a2d67eba752fda418514809971bdb50f\" with signal terminated"
May 15 00:30:24.124553 systemd[1]: cri-containerd-82a60dee8c9d90e6dab05a865e93f803a2d67eba752fda418514809971bdb50f.scope: Deactivated successfully.
May 15 00:30:24.151240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82a60dee8c9d90e6dab05a865e93f803a2d67eba752fda418514809971bdb50f-rootfs.mount: Deactivated successfully.
May 15 00:30:24.154491 containerd[1443]: time="2025-05-15T00:30:24.154448208Z" level=info msg="StopContainer for \"b73ca27de19d4276429abcb6d17110d564357894d12463836f47164f88a3113a\" with timeout 2 (s)"
May 15 00:30:24.155166 containerd[1443]: time="2025-05-15T00:30:24.155133960Z" level=info msg="Stop container \"b73ca27de19d4276429abcb6d17110d564357894d12463836f47164f88a3113a\" with signal terminated"
May 15 00:30:24.163539 systemd-networkd[1377]: lxc_health: Link DOWN
May 15 00:30:24.163551 systemd-networkd[1377]: lxc_health: Lost carrier
May 15 00:30:24.167390 containerd[1443]: time="2025-05-15T00:30:24.166626143Z" level=info msg="shim disconnected" id=82a60dee8c9d90e6dab05a865e93f803a2d67eba752fda418514809971bdb50f namespace=k8s.io
May 15 00:30:24.167390 containerd[1443]: time="2025-05-15T00:30:24.166680463Z" level=warning msg="cleaning up after shim disconnected" id=82a60dee8c9d90e6dab05a865e93f803a2d67eba752fda418514809971bdb50f namespace=k8s.io
May 15 00:30:24.167390 containerd[1443]: time="2025-05-15T00:30:24.166688222Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:30:24.169278 containerd[1443]: time="2025-05-15T00:30:24.168689999Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 15 00:30:24.192093 systemd[1]: cri-containerd-b73ca27de19d4276429abcb6d17110d564357894d12463836f47164f88a3113a.scope: Deactivated successfully.
May 15 00:30:24.192668 systemd[1]: cri-containerd-b73ca27de19d4276429abcb6d17110d564357894d12463836f47164f88a3113a.scope: Consumed 6.701s CPU time.
May 15 00:30:24.212019 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b73ca27de19d4276429abcb6d17110d564357894d12463836f47164f88a3113a-rootfs.mount: Deactivated successfully.
May 15 00:30:24.218075 containerd[1443]: time="2025-05-15T00:30:24.217838615Z" level=info msg="shim disconnected" id=b73ca27de19d4276429abcb6d17110d564357894d12463836f47164f88a3113a namespace=k8s.io
May 15 00:30:24.218075 containerd[1443]: time="2025-05-15T00:30:24.217893775Z" level=warning msg="cleaning up after shim disconnected" id=b73ca27de19d4276429abcb6d17110d564357894d12463836f47164f88a3113a namespace=k8s.io
May 15 00:30:24.218075 containerd[1443]: time="2025-05-15T00:30:24.217902935Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:30:24.224974 containerd[1443]: time="2025-05-15T00:30:24.224927531Z" level=info msg="StopContainer for \"82a60dee8c9d90e6dab05a865e93f803a2d67eba752fda418514809971bdb50f\" returns successfully"
May 15 00:30:24.226000 containerd[1443]: time="2025-05-15T00:30:24.225796481Z" level=info msg="StopPodSandbox for \"4605e5b52580a220e0d9ed0e988cfec3a265b030b135f86301376403d56e8ab6\""
May 15 00:30:24.226000 containerd[1443]: time="2025-05-15T00:30:24.225848640Z" level=info msg="Container to stop \"82a60dee8c9d90e6dab05a865e93f803a2d67eba752fda418514809971bdb50f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:30:24.228838 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4605e5b52580a220e0d9ed0e988cfec3a265b030b135f86301376403d56e8ab6-shm.mount: Deactivated successfully.
May 15 00:30:24.231845 containerd[1443]: time="2025-05-15T00:30:24.231774850Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:30:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 15 00:30:24.237126 containerd[1443]: time="2025-05-15T00:30:24.237040828Z" level=info msg="StopContainer for \"b73ca27de19d4276429abcb6d17110d564357894d12463836f47164f88a3113a\" returns successfully"
May 15 00:30:24.237630 containerd[1443]: time="2025-05-15T00:30:24.237573101Z" level=info msg="StopPodSandbox for \"ab42a324837b13bb2334027f50d8c704af97a429435552e1b164785b141a8116\""
May 15 00:30:24.237630 containerd[1443]: time="2025-05-15T00:30:24.237627221Z" level=info msg="Container to stop \"b73ca27de19d4276429abcb6d17110d564357894d12463836f47164f88a3113a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:30:24.237732 containerd[1443]: time="2025-05-15T00:30:24.237640540Z" level=info msg="Container to stop \"d5bc6875f6d94ce49daa4233b5c1d5e909307f5f567f82199a8ed63c68b68882\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:30:24.237732 containerd[1443]: time="2025-05-15T00:30:24.237650220Z" level=info msg="Container to stop \"833fd89bbf337a76b02cc9eabf2244c9175a6ff8970fcbc9d65c0dbf02f84fef\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:30:24.237732 containerd[1443]: time="2025-05-15T00:30:24.237660940Z" level=info msg="Container to stop \"88677cda2a4635d74ddd005b553d2906bdd6144d940e1004cd1992746a076d07\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:30:24.237732 containerd[1443]: time="2025-05-15T00:30:24.237670380Z" level=info msg="Container to stop \"53bffc674776f31094f726f4b5da8d6399c776ac5600c93469e2d40cbf76dd96\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:30:24.240811 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ab42a324837b13bb2334027f50d8c704af97a429435552e1b164785b141a8116-shm.mount: Deactivated successfully.
May 15 00:30:24.242523 systemd[1]: cri-containerd-4605e5b52580a220e0d9ed0e988cfec3a265b030b135f86301376403d56e8ab6.scope: Deactivated successfully.
May 15 00:30:24.244739 systemd[1]: cri-containerd-ab42a324837b13bb2334027f50d8c704af97a429435552e1b164785b141a8116.scope: Deactivated successfully.
May 15 00:30:24.276435 containerd[1443]: time="2025-05-15T00:30:24.276347121Z" level=info msg="shim disconnected" id=4605e5b52580a220e0d9ed0e988cfec3a265b030b135f86301376403d56e8ab6 namespace=k8s.io
May 15 00:30:24.276435 containerd[1443]: time="2025-05-15T00:30:24.276414400Z" level=warning msg="cleaning up after shim disconnected" id=4605e5b52580a220e0d9ed0e988cfec3a265b030b135f86301376403d56e8ab6 namespace=k8s.io
May 15 00:30:24.276435 containerd[1443]: time="2025-05-15T00:30:24.276452040Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:30:24.277668 containerd[1443]: time="2025-05-15T00:30:24.277531187Z" level=info msg="shim disconnected" id=ab42a324837b13bb2334027f50d8c704af97a429435552e1b164785b141a8116 namespace=k8s.io
May 15 00:30:24.277668 containerd[1443]: time="2025-05-15T00:30:24.277659386Z" level=warning msg="cleaning up after shim disconnected" id=ab42a324837b13bb2334027f50d8c704af97a429435552e1b164785b141a8116 namespace=k8s.io
May 15 00:30:24.277668 containerd[1443]: time="2025-05-15T00:30:24.277670705Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:30:24.291681 containerd[1443]: time="2025-05-15T00:30:24.291563621Z" level=info msg="TearDown network for sandbox \"4605e5b52580a220e0d9ed0e988cfec3a265b030b135f86301376403d56e8ab6\" successfully"
May 15 00:30:24.291681 containerd[1443]: time="2025-05-15T00:30:24.291599580Z" level=info msg="StopPodSandbox for \"4605e5b52580a220e0d9ed0e988cfec3a265b030b135f86301376403d56e8ab6\" returns successfully"
May 15 00:30:24.299941 containerd[1443]: time="2025-05-15T00:30:24.299892322Z" level=info msg="TearDown network for sandbox \"ab42a324837b13bb2334027f50d8c704af97a429435552e1b164785b141a8116\" successfully"
May 15 00:30:24.299941 containerd[1443]: time="2025-05-15T00:30:24.299931201Z" level=info msg="StopPodSandbox for \"ab42a324837b13bb2334027f50d8c704af97a429435552e1b164785b141a8116\" returns successfully"
May 15 00:30:24.368514 kubelet[2443]: I0515 00:30:24.368337 2443 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-cni-path\") pod \"40d047a8-afe1-43b8-8318-19e53eabb68f\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") "
May 15 00:30:24.368514 kubelet[2443]: I0515 00:30:24.368374 2443 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/40d047a8-afe1-43b8-8318-19e53eabb68f-hubble-tls\") pod \"40d047a8-afe1-43b8-8318-19e53eabb68f\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") "
May 15 00:30:24.368514 kubelet[2443]: I0515 00:30:24.368497 2443 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-host-proc-sys-kernel\") pod \"40d047a8-afe1-43b8-8318-19e53eabb68f\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") "
May 15 00:30:24.368514 kubelet[2443]: I0515 00:30:24.368519 2443 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-hostproc\") pod \"40d047a8-afe1-43b8-8318-19e53eabb68f\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") "
May 15 00:30:24.368730 kubelet[2443]: I0515 00:30:24.368538 2443 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/40d047a8-afe1-43b8-8318-19e53eabb68f-cilium-config-path\") pod \"40d047a8-afe1-43b8-8318-19e53eabb68f\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") "
May 15 00:30:24.368730 kubelet[2443]: I0515 00:30:24.368554 2443 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-etc-cni-netd\") pod \"40d047a8-afe1-43b8-8318-19e53eabb68f\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") "
May 15 00:30:24.368730 kubelet[2443]: I0515 00:30:24.368567 2443 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-cilium-cgroup\") pod \"40d047a8-afe1-43b8-8318-19e53eabb68f\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") "
May 15 00:30:24.369284 kubelet[2443]: I0515 00:30:24.368583 2443 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-lib-modules\") pod \"40d047a8-afe1-43b8-8318-19e53eabb68f\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") "
May 15 00:30:24.370985 kubelet[2443]: I0515 00:30:24.370960 2443 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-xtables-lock\") pod \"40d047a8-afe1-43b8-8318-19e53eabb68f\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") "
May 15 00:30:24.371067 kubelet[2443]: I0515 00:30:24.370993 2443 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d52a7fbe-24ee-4975-8b96-a0f60f0788bf-cilium-config-path\") pod \"d52a7fbe-24ee-4975-8b96-a0f60f0788bf\" (UID: \"d52a7fbe-24ee-4975-8b96-a0f60f0788bf\") "
May 15 00:30:24.371067 kubelet[2443]: I0515 00:30:24.371015 2443 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-host-proc-sys-net\") pod \"40d047a8-afe1-43b8-8318-19e53eabb68f\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") "
May 15 00:30:24.371067 kubelet[2443]: I0515 00:30:24.371031 2443 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-cilium-run\") pod \"40d047a8-afe1-43b8-8318-19e53eabb68f\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") "
May 15 00:30:24.371067 kubelet[2443]: I0515 00:30:24.371048 2443 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-bpf-maps\") pod \"40d047a8-afe1-43b8-8318-19e53eabb68f\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") "
May 15 00:30:24.371067 kubelet[2443]: I0515 00:30:24.371065 2443 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdb5b\" (UniqueName: \"kubernetes.io/projected/40d047a8-afe1-43b8-8318-19e53eabb68f-kube-api-access-xdb5b\") pod \"40d047a8-afe1-43b8-8318-19e53eabb68f\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") "
May 15 00:30:24.371209 kubelet[2443]: I0515 00:30:24.371084 2443 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km6zw\" (UniqueName: \"kubernetes.io/projected/d52a7fbe-24ee-4975-8b96-a0f60f0788bf-kube-api-access-km6zw\") pod \"d52a7fbe-24ee-4975-8b96-a0f60f0788bf\" (UID: \"d52a7fbe-24ee-4975-8b96-a0f60f0788bf\") "
May 15 00:30:24.371209 kubelet[2443]: I0515 00:30:24.371105 2443 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/40d047a8-afe1-43b8-8318-19e53eabb68f-clustermesh-secrets\") pod \"40d047a8-afe1-43b8-8318-19e53eabb68f\" (UID: \"40d047a8-afe1-43b8-8318-19e53eabb68f\") "
May 15 00:30:24.371841 kubelet[2443]: I0515 00:30:24.370659 2443 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "40d047a8-afe1-43b8-8318-19e53eabb68f" (UID: "40d047a8-afe1-43b8-8318-19e53eabb68f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 00:30:24.371906 kubelet[2443]: I0515 00:30:24.370664 2443 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-hostproc" (OuterVolumeSpecName: "hostproc") pod "40d047a8-afe1-43b8-8318-19e53eabb68f" (UID: "40d047a8-afe1-43b8-8318-19e53eabb68f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 00:30:24.371906 kubelet[2443]: I0515 00:30:24.370916 2443 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "40d047a8-afe1-43b8-8318-19e53eabb68f" (UID: "40d047a8-afe1-43b8-8318-19e53eabb68f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 00:30:24.371906 kubelet[2443]: I0515 00:30:24.370933 2443 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "40d047a8-afe1-43b8-8318-19e53eabb68f" (UID: "40d047a8-afe1-43b8-8318-19e53eabb68f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 00:30:24.371906 kubelet[2443]: I0515 00:30:24.371779 2443 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-cni-path" (OuterVolumeSpecName: "cni-path") pod "40d047a8-afe1-43b8-8318-19e53eabb68f" (UID: "40d047a8-afe1-43b8-8318-19e53eabb68f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 00:30:24.371906 kubelet[2443]: I0515 00:30:24.371808 2443 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "40d047a8-afe1-43b8-8318-19e53eabb68f" (UID: "40d047a8-afe1-43b8-8318-19e53eabb68f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 00:30:24.372026 kubelet[2443]: I0515 00:30:24.371888 2443 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "40d047a8-afe1-43b8-8318-19e53eabb68f" (UID: "40d047a8-afe1-43b8-8318-19e53eabb68f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 00:30:24.374597 kubelet[2443]: I0515 00:30:24.373731 2443 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d52a7fbe-24ee-4975-8b96-a0f60f0788bf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d52a7fbe-24ee-4975-8b96-a0f60f0788bf" (UID: "d52a7fbe-24ee-4975-8b96-a0f60f0788bf"). InnerVolumeSpecName "cilium-config-path".
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 00:30:24.374597 kubelet[2443]: I0515 00:30:24.373793 2443 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "40d047a8-afe1-43b8-8318-19e53eabb68f" (UID: "40d047a8-afe1-43b8-8318-19e53eabb68f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:30:24.374597 kubelet[2443]: I0515 00:30:24.373809 2443 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "40d047a8-afe1-43b8-8318-19e53eabb68f" (UID: "40d047a8-afe1-43b8-8318-19e53eabb68f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:30:24.374597 kubelet[2443]: I0515 00:30:24.373836 2443 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "40d047a8-afe1-43b8-8318-19e53eabb68f" (UID: "40d047a8-afe1-43b8-8318-19e53eabb68f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:30:24.380175 kubelet[2443]: I0515 00:30:24.380132 2443 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40d047a8-afe1-43b8-8318-19e53eabb68f-kube-api-access-xdb5b" (OuterVolumeSpecName: "kube-api-access-xdb5b") pod "40d047a8-afe1-43b8-8318-19e53eabb68f" (UID: "40d047a8-afe1-43b8-8318-19e53eabb68f"). InnerVolumeSpecName "kube-api-access-xdb5b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 00:30:24.381147 kubelet[2443]: I0515 00:30:24.381118 2443 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40d047a8-afe1-43b8-8318-19e53eabb68f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "40d047a8-afe1-43b8-8318-19e53eabb68f" (UID: "40d047a8-afe1-43b8-8318-19e53eabb68f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 00:30:24.381756 kubelet[2443]: I0515 00:30:24.381719 2443 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40d047a8-afe1-43b8-8318-19e53eabb68f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "40d047a8-afe1-43b8-8318-19e53eabb68f" (UID: "40d047a8-afe1-43b8-8318-19e53eabb68f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 00:30:24.383301 kubelet[2443]: I0515 00:30:24.382730 2443 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d52a7fbe-24ee-4975-8b96-a0f60f0788bf-kube-api-access-km6zw" (OuterVolumeSpecName: "kube-api-access-km6zw") pod "d52a7fbe-24ee-4975-8b96-a0f60f0788bf" (UID: "d52a7fbe-24ee-4975-8b96-a0f60f0788bf"). InnerVolumeSpecName "kube-api-access-km6zw". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 00:30:24.384274 kubelet[2443]: I0515 00:30:24.384243 2443 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40d047a8-afe1-43b8-8318-19e53eabb68f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "40d047a8-afe1-43b8-8318-19e53eabb68f" (UID: "40d047a8-afe1-43b8-8318-19e53eabb68f"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 00:30:24.472481 kubelet[2443]: I0515 00:30:24.472185 2443 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 15 00:30:24.472481 kubelet[2443]: I0515 00:30:24.472230 2443 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-hostproc\") on node \"localhost\" DevicePath \"\"" May 15 00:30:24.472481 kubelet[2443]: I0515 00:30:24.472241 2443 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/40d047a8-afe1-43b8-8318-19e53eabb68f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 00:30:24.472481 kubelet[2443]: I0515 00:30:24.472249 2443 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 15 00:30:24.472481 kubelet[2443]: I0515 00:30:24.472263 2443 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 15 00:30:24.472481 kubelet[2443]: I0515 00:30:24.472277 2443 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-lib-modules\") on node \"localhost\" DevicePath \"\"" May 15 00:30:24.472481 kubelet[2443]: I0515 00:30:24.472291 2443 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 15 00:30:24.472481 kubelet[2443]: I0515 
00:30:24.472306 2443 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d52a7fbe-24ee-4975-8b96-a0f60f0788bf-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 00:30:24.472770 kubelet[2443]: I0515 00:30:24.472320 2443 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-cilium-run\") on node \"localhost\" DevicePath \"\"" May 15 00:30:24.472770 kubelet[2443]: I0515 00:30:24.472332 2443 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 15 00:30:24.472770 kubelet[2443]: I0515 00:30:24.472346 2443 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xdb5b\" (UniqueName: \"kubernetes.io/projected/40d047a8-afe1-43b8-8318-19e53eabb68f-kube-api-access-xdb5b\") on node \"localhost\" DevicePath \"\"" May 15 00:30:24.472770 kubelet[2443]: I0515 00:30:24.472360 2443 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 15 00:30:24.472770 kubelet[2443]: I0515 00:30:24.472373 2443 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-km6zw\" (UniqueName: \"kubernetes.io/projected/d52a7fbe-24ee-4975-8b96-a0f60f0788bf-kube-api-access-km6zw\") on node \"localhost\" DevicePath \"\"" May 15 00:30:24.472770 kubelet[2443]: I0515 00:30:24.472387 2443 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/40d047a8-afe1-43b8-8318-19e53eabb68f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 15 00:30:24.472770 kubelet[2443]: I0515 00:30:24.472400 2443 reconciler_common.go:288] "Volume detached for 
volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/40d047a8-afe1-43b8-8318-19e53eabb68f-cni-path\") on node \"localhost\" DevicePath \"\"" May 15 00:30:24.472770 kubelet[2443]: I0515 00:30:24.472413 2443 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/40d047a8-afe1-43b8-8318-19e53eabb68f-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 15 00:30:24.604923 kubelet[2443]: E0515 00:30:24.604548 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:30:24.616553 systemd[1]: Removed slice kubepods-burstable-pod40d047a8_afe1_43b8_8318_19e53eabb68f.slice - libcontainer container kubepods-burstable-pod40d047a8_afe1_43b8_8318_19e53eabb68f.slice. May 15 00:30:24.616708 systemd[1]: kubepods-burstable-pod40d047a8_afe1_43b8_8318_19e53eabb68f.slice: Consumed 6.852s CPU time. May 15 00:30:24.619007 systemd[1]: Removed slice kubepods-besteffort-podd52a7fbe_24ee_4975_8b96_a0f60f0788bf.slice - libcontainer container kubepods-besteffort-podd52a7fbe_24ee_4975_8b96_a0f60f0788bf.slice. 
May 15 00:30:24.795353 kubelet[2443]: I0515 00:30:24.795312 2443 scope.go:117] "RemoveContainer" containerID="82a60dee8c9d90e6dab05a865e93f803a2d67eba752fda418514809971bdb50f" May 15 00:30:24.797242 containerd[1443]: time="2025-05-15T00:30:24.797187900Z" level=info msg="RemoveContainer for \"82a60dee8c9d90e6dab05a865e93f803a2d67eba752fda418514809971bdb50f\"" May 15 00:30:24.802783 containerd[1443]: time="2025-05-15T00:30:24.802737114Z" level=info msg="RemoveContainer for \"82a60dee8c9d90e6dab05a865e93f803a2d67eba752fda418514809971bdb50f\" returns successfully" May 15 00:30:24.804046 kubelet[2443]: I0515 00:30:24.803666 2443 scope.go:117] "RemoveContainer" containerID="82a60dee8c9d90e6dab05a865e93f803a2d67eba752fda418514809971bdb50f" May 15 00:30:24.805725 containerd[1443]: time="2025-05-15T00:30:24.805334204Z" level=error msg="ContainerStatus for \"82a60dee8c9d90e6dab05a865e93f803a2d67eba752fda418514809971bdb50f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"82a60dee8c9d90e6dab05a865e93f803a2d67eba752fda418514809971bdb50f\": not found" May 15 00:30:24.821311 kubelet[2443]: E0515 00:30:24.821262 2443 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"82a60dee8c9d90e6dab05a865e93f803a2d67eba752fda418514809971bdb50f\": not found" containerID="82a60dee8c9d90e6dab05a865e93f803a2d67eba752fda418514809971bdb50f" May 15 00:30:24.821497 kubelet[2443]: I0515 00:30:24.821320 2443 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"82a60dee8c9d90e6dab05a865e93f803a2d67eba752fda418514809971bdb50f"} err="failed to get container status \"82a60dee8c9d90e6dab05a865e93f803a2d67eba752fda418514809971bdb50f\": rpc error: code = NotFound desc = an error occurred when try to find container \"82a60dee8c9d90e6dab05a865e93f803a2d67eba752fda418514809971bdb50f\": not found" May 15 00:30:24.821497 
kubelet[2443]: I0515 00:30:24.821402 2443 scope.go:117] "RemoveContainer" containerID="b73ca27de19d4276429abcb6d17110d564357894d12463836f47164f88a3113a" May 15 00:30:24.822638 containerd[1443]: time="2025-05-15T00:30:24.822594999Z" level=info msg="RemoveContainer for \"b73ca27de19d4276429abcb6d17110d564357894d12463836f47164f88a3113a\"" May 15 00:30:24.838370 containerd[1443]: time="2025-05-15T00:30:24.838200814Z" level=info msg="RemoveContainer for \"b73ca27de19d4276429abcb6d17110d564357894d12463836f47164f88a3113a\" returns successfully" May 15 00:30:24.838728 kubelet[2443]: I0515 00:30:24.838507 2443 scope.go:117] "RemoveContainer" containerID="833fd89bbf337a76b02cc9eabf2244c9175a6ff8970fcbc9d65c0dbf02f84fef" May 15 00:30:24.839642 containerd[1443]: time="2025-05-15T00:30:24.839579277Z" level=info msg="RemoveContainer for \"833fd89bbf337a76b02cc9eabf2244c9175a6ff8970fcbc9d65c0dbf02f84fef\"" May 15 00:30:24.842554 containerd[1443]: time="2025-05-15T00:30:24.842465723Z" level=info msg="RemoveContainer for \"833fd89bbf337a76b02cc9eabf2244c9175a6ff8970fcbc9d65c0dbf02f84fef\" returns successfully" May 15 00:30:24.842786 kubelet[2443]: I0515 00:30:24.842692 2443 scope.go:117] "RemoveContainer" containerID="d5bc6875f6d94ce49daa4233b5c1d5e909307f5f567f82199a8ed63c68b68882" May 15 00:30:24.843822 containerd[1443]: time="2025-05-15T00:30:24.843791547Z" level=info msg="RemoveContainer for \"d5bc6875f6d94ce49daa4233b5c1d5e909307f5f567f82199a8ed63c68b68882\"" May 15 00:30:24.846255 containerd[1443]: time="2025-05-15T00:30:24.846144319Z" level=info msg="RemoveContainer for \"d5bc6875f6d94ce49daa4233b5c1d5e909307f5f567f82199a8ed63c68b68882\" returns successfully" May 15 00:30:24.846326 kubelet[2443]: I0515 00:30:24.846312 2443 scope.go:117] "RemoveContainer" containerID="53bffc674776f31094f726f4b5da8d6399c776ac5600c93469e2d40cbf76dd96" May 15 00:30:24.847479 containerd[1443]: time="2025-05-15T00:30:24.847436424Z" level=info msg="RemoveContainer for 
\"53bffc674776f31094f726f4b5da8d6399c776ac5600c93469e2d40cbf76dd96\"" May 15 00:30:24.850001 containerd[1443]: time="2025-05-15T00:30:24.849953834Z" level=info msg="RemoveContainer for \"53bffc674776f31094f726f4b5da8d6399c776ac5600c93469e2d40cbf76dd96\" returns successfully" May 15 00:30:24.850190 kubelet[2443]: I0515 00:30:24.850140 2443 scope.go:117] "RemoveContainer" containerID="88677cda2a4635d74ddd005b553d2906bdd6144d940e1004cd1992746a076d07" May 15 00:30:24.854358 containerd[1443]: time="2025-05-15T00:30:24.854275863Z" level=info msg="RemoveContainer for \"88677cda2a4635d74ddd005b553d2906bdd6144d940e1004cd1992746a076d07\"" May 15 00:30:24.857654 containerd[1443]: time="2025-05-15T00:30:24.857544784Z" level=info msg="RemoveContainer for \"88677cda2a4635d74ddd005b553d2906bdd6144d940e1004cd1992746a076d07\" returns successfully" May 15 00:30:24.857941 kubelet[2443]: I0515 00:30:24.857738 2443 scope.go:117] "RemoveContainer" containerID="b73ca27de19d4276429abcb6d17110d564357894d12463836f47164f88a3113a" May 15 00:30:24.858379 containerd[1443]: time="2025-05-15T00:30:24.858167937Z" level=error msg="ContainerStatus for \"b73ca27de19d4276429abcb6d17110d564357894d12463836f47164f88a3113a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b73ca27de19d4276429abcb6d17110d564357894d12463836f47164f88a3113a\": not found" May 15 00:30:24.858452 kubelet[2443]: E0515 00:30:24.858343 2443 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b73ca27de19d4276429abcb6d17110d564357894d12463836f47164f88a3113a\": not found" containerID="b73ca27de19d4276429abcb6d17110d564357894d12463836f47164f88a3113a" May 15 00:30:24.858452 kubelet[2443]: I0515 00:30:24.858372 2443 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b73ca27de19d4276429abcb6d17110d564357894d12463836f47164f88a3113a"} err="failed to get 
container status \"b73ca27de19d4276429abcb6d17110d564357894d12463836f47164f88a3113a\": rpc error: code = NotFound desc = an error occurred when try to find container \"b73ca27de19d4276429abcb6d17110d564357894d12463836f47164f88a3113a\": not found" May 15 00:30:24.858452 kubelet[2443]: I0515 00:30:24.858391 2443 scope.go:117] "RemoveContainer" containerID="833fd89bbf337a76b02cc9eabf2244c9175a6ff8970fcbc9d65c0dbf02f84fef" May 15 00:30:24.858985 containerd[1443]: time="2025-05-15T00:30:24.858699490Z" level=error msg="ContainerStatus for \"833fd89bbf337a76b02cc9eabf2244c9175a6ff8970fcbc9d65c0dbf02f84fef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"833fd89bbf337a76b02cc9eabf2244c9175a6ff8970fcbc9d65c0dbf02f84fef\": not found" May 15 00:30:24.858985 containerd[1443]: time="2025-05-15T00:30:24.859093886Z" level=error msg="ContainerStatus for \"d5bc6875f6d94ce49daa4233b5c1d5e909307f5f567f82199a8ed63c68b68882\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d5bc6875f6d94ce49daa4233b5c1d5e909307f5f567f82199a8ed63c68b68882\": not found" May 15 00:30:24.859267 kubelet[2443]: E0515 00:30:24.858892 2443 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"833fd89bbf337a76b02cc9eabf2244c9175a6ff8970fcbc9d65c0dbf02f84fef\": not found" containerID="833fd89bbf337a76b02cc9eabf2244c9175a6ff8970fcbc9d65c0dbf02f84fef" May 15 00:30:24.859267 kubelet[2443]: I0515 00:30:24.858918 2443 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"833fd89bbf337a76b02cc9eabf2244c9175a6ff8970fcbc9d65c0dbf02f84fef"} err="failed to get container status \"833fd89bbf337a76b02cc9eabf2244c9175a6ff8970fcbc9d65c0dbf02f84fef\": rpc error: code = NotFound desc = an error occurred when try to find container \"833fd89bbf337a76b02cc9eabf2244c9175a6ff8970fcbc9d65c0dbf02f84fef\": 
not found" May 15 00:30:24.859267 kubelet[2443]: I0515 00:30:24.858932 2443 scope.go:117] "RemoveContainer" containerID="d5bc6875f6d94ce49daa4233b5c1d5e909307f5f567f82199a8ed63c68b68882" May 15 00:30:24.859267 kubelet[2443]: E0515 00:30:24.859177 2443 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d5bc6875f6d94ce49daa4233b5c1d5e909307f5f567f82199a8ed63c68b68882\": not found" containerID="d5bc6875f6d94ce49daa4233b5c1d5e909307f5f567f82199a8ed63c68b68882" May 15 00:30:24.859267 kubelet[2443]: I0515 00:30:24.859193 2443 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d5bc6875f6d94ce49daa4233b5c1d5e909307f5f567f82199a8ed63c68b68882"} err="failed to get container status \"d5bc6875f6d94ce49daa4233b5c1d5e909307f5f567f82199a8ed63c68b68882\": rpc error: code = NotFound desc = an error occurred when try to find container \"d5bc6875f6d94ce49daa4233b5c1d5e909307f5f567f82199a8ed63c68b68882\": not found" May 15 00:30:24.859267 kubelet[2443]: I0515 00:30:24.859204 2443 scope.go:117] "RemoveContainer" containerID="53bffc674776f31094f726f4b5da8d6399c776ac5600c93469e2d40cbf76dd96" May 15 00:30:24.859794 containerd[1443]: time="2025-05-15T00:30:24.859669399Z" level=error msg="ContainerStatus for \"53bffc674776f31094f726f4b5da8d6399c776ac5600c93469e2d40cbf76dd96\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"53bffc674776f31094f726f4b5da8d6399c776ac5600c93469e2d40cbf76dd96\": not found" May 15 00:30:24.859863 kubelet[2443]: E0515 00:30:24.859791 2443 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"53bffc674776f31094f726f4b5da8d6399c776ac5600c93469e2d40cbf76dd96\": not found" containerID="53bffc674776f31094f726f4b5da8d6399c776ac5600c93469e2d40cbf76dd96" May 15 00:30:24.859863 kubelet[2443]: I0515 
00:30:24.859808 2443 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"53bffc674776f31094f726f4b5da8d6399c776ac5600c93469e2d40cbf76dd96"} err="failed to get container status \"53bffc674776f31094f726f4b5da8d6399c776ac5600c93469e2d40cbf76dd96\": rpc error: code = NotFound desc = an error occurred when try to find container \"53bffc674776f31094f726f4b5da8d6399c776ac5600c93469e2d40cbf76dd96\": not found" May 15 00:30:24.859863 kubelet[2443]: I0515 00:30:24.859831 2443 scope.go:117] "RemoveContainer" containerID="88677cda2a4635d74ddd005b553d2906bdd6144d940e1004cd1992746a076d07" May 15 00:30:24.860420 kubelet[2443]: E0515 00:30:24.860116 2443 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"88677cda2a4635d74ddd005b553d2906bdd6144d940e1004cd1992746a076d07\": not found" containerID="88677cda2a4635d74ddd005b553d2906bdd6144d940e1004cd1992746a076d07" May 15 00:30:24.860420 kubelet[2443]: I0515 00:30:24.860141 2443 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"88677cda2a4635d74ddd005b553d2906bdd6144d940e1004cd1992746a076d07"} err="failed to get container status \"88677cda2a4635d74ddd005b553d2906bdd6144d940e1004cd1992746a076d07\": rpc error: code = NotFound desc = an error occurred when try to find container \"88677cda2a4635d74ddd005b553d2906bdd6144d940e1004cd1992746a076d07\": not found" May 15 00:30:24.860476 containerd[1443]: time="2025-05-15T00:30:24.859991275Z" level=error msg="ContainerStatus for \"88677cda2a4635d74ddd005b553d2906bdd6144d940e1004cd1992746a076d07\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"88677cda2a4635d74ddd005b553d2906bdd6144d940e1004cd1992746a076d07\": not found" May 15 00:30:25.123197 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-ab42a324837b13bb2334027f50d8c704af97a429435552e1b164785b141a8116-rootfs.mount: Deactivated successfully. May 15 00:30:25.123307 systemd[1]: var-lib-kubelet-pods-40d047a8\x2dafe1\x2d43b8\x2d8318\x2d19e53eabb68f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxdb5b.mount: Deactivated successfully. May 15 00:30:25.123360 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4605e5b52580a220e0d9ed0e988cfec3a265b030b135f86301376403d56e8ab6-rootfs.mount: Deactivated successfully. May 15 00:30:25.123422 systemd[1]: var-lib-kubelet-pods-d52a7fbe\x2d24ee\x2d4975\x2d8b96\x2da0f60f0788bf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkm6zw.mount: Deactivated successfully. May 15 00:30:25.123478 systemd[1]: var-lib-kubelet-pods-40d047a8\x2dafe1\x2d43b8\x2d8318\x2d19e53eabb68f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 15 00:30:25.123531 systemd[1]: var-lib-kubelet-pods-40d047a8\x2dafe1\x2d43b8\x2d8318\x2d19e53eabb68f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 15 00:30:26.061166 sshd[4079]: pam_unix(sshd:session): session closed for user core May 15 00:30:26.069011 systemd[1]: sshd@21-10.0.0.115:22-10.0.0.1:36132.service: Deactivated successfully. May 15 00:30:26.070665 systemd[1]: session-22.scope: Deactivated successfully. May 15 00:30:26.073319 systemd[1]: session-22.scope: Consumed 1.179s CPU time. May 15 00:30:26.078190 systemd-logind[1417]: Session 22 logged out. Waiting for processes to exit. May 15 00:30:26.092628 systemd[1]: Started sshd@22-10.0.0.115:22-10.0.0.1:50246.service - OpenSSH per-connection server daemon (10.0.0.1:50246). May 15 00:30:26.094087 systemd-logind[1417]: Removed session 22. 
May 15 00:30:26.135303 sshd[4241]: Accepted publickey for core from 10.0.0.1 port 50246 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:30:26.136821 sshd[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:30:26.142358 systemd-logind[1417]: New session 23 of user core. May 15 00:30:26.151458 systemd[1]: Started session-23.scope - Session 23 of User core. May 15 00:30:26.605427 kubelet[2443]: I0515 00:30:26.605371 2443 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40d047a8-afe1-43b8-8318-19e53eabb68f" path="/var/lib/kubelet/pods/40d047a8-afe1-43b8-8318-19e53eabb68f/volumes" May 15 00:30:26.605987 kubelet[2443]: I0515 00:30:26.605954 2443 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d52a7fbe-24ee-4975-8b96-a0f60f0788bf" path="/var/lib/kubelet/pods/d52a7fbe-24ee-4975-8b96-a0f60f0788bf/volumes" May 15 00:30:27.292616 sshd[4241]: pam_unix(sshd:session): session closed for user core May 15 00:30:27.301298 systemd[1]: sshd@22-10.0.0.115:22-10.0.0.1:50246.service: Deactivated successfully. May 15 00:30:27.303496 systemd[1]: session-23.scope: Deactivated successfully. May 15 00:30:27.303695 systemd[1]: session-23.scope: Consumed 1.022s CPU time. May 15 00:30:27.305229 systemd-logind[1417]: Session 23 logged out. Waiting for processes to exit. 
May 15 00:30:27.310961 kubelet[2443]: E0515 00:30:27.310918 2443 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d52a7fbe-24ee-4975-8b96-a0f60f0788bf" containerName="cilium-operator" May 15 00:30:27.310961 kubelet[2443]: E0515 00:30:27.310953 2443 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="40d047a8-afe1-43b8-8318-19e53eabb68f" containerName="mount-bpf-fs" May 15 00:30:27.310961 kubelet[2443]: E0515 00:30:27.310962 2443 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="40d047a8-afe1-43b8-8318-19e53eabb68f" containerName="mount-cgroup" May 15 00:30:27.310961 kubelet[2443]: E0515 00:30:27.310967 2443 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="40d047a8-afe1-43b8-8318-19e53eabb68f" containerName="apply-sysctl-overwrites" May 15 00:30:27.310961 kubelet[2443]: E0515 00:30:27.310973 2443 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="40d047a8-afe1-43b8-8318-19e53eabb68f" containerName="clean-cilium-state" May 15 00:30:27.311276 kubelet[2443]: E0515 00:30:27.310980 2443 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="40d047a8-afe1-43b8-8318-19e53eabb68f" containerName="cilium-agent" May 15 00:30:27.311276 kubelet[2443]: I0515 00:30:27.311004 2443 memory_manager.go:354] "RemoveStaleState removing state" podUID="d52a7fbe-24ee-4975-8b96-a0f60f0788bf" containerName="cilium-operator" May 15 00:30:27.311276 kubelet[2443]: I0515 00:30:27.311010 2443 memory_manager.go:354] "RemoveStaleState removing state" podUID="40d047a8-afe1-43b8-8318-19e53eabb68f" containerName="cilium-agent" May 15 00:30:27.313251 systemd[1]: Started sshd@23-10.0.0.115:22-10.0.0.1:50260.service - OpenSSH per-connection server daemon (10.0.0.1:50260). May 15 00:30:27.320935 systemd-logind[1417]: Removed session 23. 
May 15 00:30:27.330879 systemd[1]: Created slice kubepods-burstable-podf40d48f6_ec0a_42b4_9446_915eedc0ca25.slice - libcontainer container kubepods-burstable-podf40d48f6_ec0a_42b4_9446_915eedc0ca25.slice.
May 15 00:30:27.364657 sshd[4254]: Accepted publickey for core from 10.0.0.1 port 50260 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:30:27.366893 sshd[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:30:27.372172 systemd-logind[1417]: New session 24 of user core.
May 15 00:30:27.379480 systemd[1]: Started session-24.scope - Session 24 of User core.
May 15 00:30:27.391505 kubelet[2443]: I0515 00:30:27.391123 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f40d48f6-ec0a-42b4-9446-915eedc0ca25-hubble-tls\") pod \"cilium-8nfw7\" (UID: \"f40d48f6-ec0a-42b4-9446-915eedc0ca25\") " pod="kube-system/cilium-8nfw7"
May 15 00:30:27.391505 kubelet[2443]: I0515 00:30:27.391173 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f40d48f6-ec0a-42b4-9446-915eedc0ca25-cilium-cgroup\") pod \"cilium-8nfw7\" (UID: \"f40d48f6-ec0a-42b4-9446-915eedc0ca25\") " pod="kube-system/cilium-8nfw7"
May 15 00:30:27.391505 kubelet[2443]: I0515 00:30:27.391195 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f40d48f6-ec0a-42b4-9446-915eedc0ca25-hostproc\") pod \"cilium-8nfw7\" (UID: \"f40d48f6-ec0a-42b4-9446-915eedc0ca25\") " pod="kube-system/cilium-8nfw7"
May 15 00:30:27.391505 kubelet[2443]: I0515 00:30:27.391209 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f40d48f6-ec0a-42b4-9446-915eedc0ca25-cilium-run\") pod \"cilium-8nfw7\" (UID: \"f40d48f6-ec0a-42b4-9446-915eedc0ca25\") " pod="kube-system/cilium-8nfw7"
May 15 00:30:27.391505 kubelet[2443]: I0515 00:30:27.391243 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f40d48f6-ec0a-42b4-9446-915eedc0ca25-xtables-lock\") pod \"cilium-8nfw7\" (UID: \"f40d48f6-ec0a-42b4-9446-915eedc0ca25\") " pod="kube-system/cilium-8nfw7"
May 15 00:30:27.391505 kubelet[2443]: I0515 00:30:27.391271 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f40d48f6-ec0a-42b4-9446-915eedc0ca25-cilium-config-path\") pod \"cilium-8nfw7\" (UID: \"f40d48f6-ec0a-42b4-9446-915eedc0ca25\") " pod="kube-system/cilium-8nfw7"
May 15 00:30:27.391756 kubelet[2443]: I0515 00:30:27.391288 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f40d48f6-ec0a-42b4-9446-915eedc0ca25-clustermesh-secrets\") pod \"cilium-8nfw7\" (UID: \"f40d48f6-ec0a-42b4-9446-915eedc0ca25\") " pod="kube-system/cilium-8nfw7"
May 15 00:30:27.391756 kubelet[2443]: I0515 00:30:27.391302 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f40d48f6-ec0a-42b4-9446-915eedc0ca25-cilium-ipsec-secrets\") pod \"cilium-8nfw7\" (UID: \"f40d48f6-ec0a-42b4-9446-915eedc0ca25\") " pod="kube-system/cilium-8nfw7"
May 15 00:30:27.391756 kubelet[2443]: I0515 00:30:27.391318 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jfj8\" (UniqueName: \"kubernetes.io/projected/f40d48f6-ec0a-42b4-9446-915eedc0ca25-kube-api-access-4jfj8\") pod \"cilium-8nfw7\" (UID: \"f40d48f6-ec0a-42b4-9446-915eedc0ca25\") " pod="kube-system/cilium-8nfw7"
May 15 00:30:27.391756 kubelet[2443]: I0515 00:30:27.391333 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f40d48f6-ec0a-42b4-9446-915eedc0ca25-bpf-maps\") pod \"cilium-8nfw7\" (UID: \"f40d48f6-ec0a-42b4-9446-915eedc0ca25\") " pod="kube-system/cilium-8nfw7"
May 15 00:30:27.391756 kubelet[2443]: I0515 00:30:27.391347 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f40d48f6-ec0a-42b4-9446-915eedc0ca25-cni-path\") pod \"cilium-8nfw7\" (UID: \"f40d48f6-ec0a-42b4-9446-915eedc0ca25\") " pod="kube-system/cilium-8nfw7"
May 15 00:30:27.391756 kubelet[2443]: I0515 00:30:27.391363 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f40d48f6-ec0a-42b4-9446-915eedc0ca25-etc-cni-netd\") pod \"cilium-8nfw7\" (UID: \"f40d48f6-ec0a-42b4-9446-915eedc0ca25\") " pod="kube-system/cilium-8nfw7"
May 15 00:30:27.391892 kubelet[2443]: I0515 00:30:27.391377 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f40d48f6-ec0a-42b4-9446-915eedc0ca25-host-proc-sys-kernel\") pod \"cilium-8nfw7\" (UID: \"f40d48f6-ec0a-42b4-9446-915eedc0ca25\") " pod="kube-system/cilium-8nfw7"
May 15 00:30:27.391892 kubelet[2443]: I0515 00:30:27.391393 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f40d48f6-ec0a-42b4-9446-915eedc0ca25-lib-modules\") pod \"cilium-8nfw7\" (UID: \"f40d48f6-ec0a-42b4-9446-915eedc0ca25\") " pod="kube-system/cilium-8nfw7"
May 15 00:30:27.391892 kubelet[2443]: I0515 00:30:27.391407 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f40d48f6-ec0a-42b4-9446-915eedc0ca25-host-proc-sys-net\") pod \"cilium-8nfw7\" (UID: \"f40d48f6-ec0a-42b4-9446-915eedc0ca25\") " pod="kube-system/cilium-8nfw7"
May 15 00:30:27.430141 sshd[4254]: pam_unix(sshd:session): session closed for user core
May 15 00:30:27.442022 systemd[1]: sshd@23-10.0.0.115:22-10.0.0.1:50260.service: Deactivated successfully.
May 15 00:30:27.443677 systemd[1]: session-24.scope: Deactivated successfully.
May 15 00:30:27.445838 systemd-logind[1417]: Session 24 logged out. Waiting for processes to exit.
May 15 00:30:27.457601 systemd[1]: Started sshd@24-10.0.0.115:22-10.0.0.1:50268.service - OpenSSH per-connection server daemon (10.0.0.1:50268).
May 15 00:30:27.458657 systemd-logind[1417]: Removed session 24.
May 15 00:30:27.493850 sshd[4263]: Accepted publickey for core from 10.0.0.1 port 50268 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:30:27.496435 sshd[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:30:27.511143 systemd-logind[1417]: New session 25 of user core.
May 15 00:30:27.523438 systemd[1]: Started session-25.scope - Session 25 of User core.
May 15 00:30:27.642254 kubelet[2443]: E0515 00:30:27.640130 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:30:27.642692 containerd[1443]: time="2025-05-15T00:30:27.640654984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8nfw7,Uid:f40d48f6-ec0a-42b4-9446-915eedc0ca25,Namespace:kube-system,Attempt:0,}"
May 15 00:30:27.662369 kubelet[2443]: E0515 00:30:27.662317 2443 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 15 00:30:27.662954 containerd[1443]: time="2025-05-15T00:30:27.662851600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:30:27.662954 containerd[1443]: time="2025-05-15T00:30:27.662936319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:30:27.663140 containerd[1443]: time="2025-05-15T00:30:27.662952919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:30:27.663140 containerd[1443]: time="2025-05-15T00:30:27.663058918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:30:27.684452 systemd[1]: Started cri-containerd-b0bbee305341b530a1028381051d5c9644c2680a62ed199e3ec2d5f2ca5c538e.scope - libcontainer container b0bbee305341b530a1028381051d5c9644c2680a62ed199e3ec2d5f2ca5c538e.
May 15 00:30:27.704673 containerd[1443]: time="2025-05-15T00:30:27.704595214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8nfw7,Uid:f40d48f6-ec0a-42b4-9446-915eedc0ca25,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0bbee305341b530a1028381051d5c9644c2680a62ed199e3ec2d5f2ca5c538e\""
May 15 00:30:27.705375 kubelet[2443]: E0515 00:30:27.705336 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:30:27.714002 containerd[1443]: time="2025-05-15T00:30:27.713944177Z" level=info msg="CreateContainer within sandbox \"b0bbee305341b530a1028381051d5c9644c2680a62ed199e3ec2d5f2ca5c538e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 15 00:30:27.730016 containerd[1443]: time="2025-05-15T00:30:27.729955884Z" level=info msg="CreateContainer within sandbox \"b0bbee305341b530a1028381051d5c9644c2680a62ed199e3ec2d5f2ca5c538e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bfa96955df81d7d199334667fdad5cd77c237b93e71260189700b1432c0ad9f9\""
May 15 00:30:27.730825 containerd[1443]: time="2025-05-15T00:30:27.730791357Z" level=info msg="StartContainer for \"bfa96955df81d7d199334667fdad5cd77c237b93e71260189700b1432c0ad9f9\""
May 15 00:30:27.759498 systemd[1]: Started cri-containerd-bfa96955df81d7d199334667fdad5cd77c237b93e71260189700b1432c0ad9f9.scope - libcontainer container bfa96955df81d7d199334667fdad5cd77c237b93e71260189700b1432c0ad9f9.
May 15 00:30:27.784609 containerd[1443]: time="2025-05-15T00:30:27.784556832Z" level=info msg="StartContainer for \"bfa96955df81d7d199334667fdad5cd77c237b93e71260189700b1432c0ad9f9\" returns successfully"
May 15 00:30:27.793673 systemd[1]: cri-containerd-bfa96955df81d7d199334667fdad5cd77c237b93e71260189700b1432c0ad9f9.scope: Deactivated successfully.
May 15 00:30:27.809982 kubelet[2443]: E0515 00:30:27.809925 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:30:27.844900 containerd[1443]: time="2025-05-15T00:30:27.844823333Z" level=info msg="shim disconnected" id=bfa96955df81d7d199334667fdad5cd77c237b93e71260189700b1432c0ad9f9 namespace=k8s.io
May 15 00:30:27.844900 containerd[1443]: time="2025-05-15T00:30:27.844895932Z" level=warning msg="cleaning up after shim disconnected" id=bfa96955df81d7d199334667fdad5cd77c237b93e71260189700b1432c0ad9f9 namespace=k8s.io
May 15 00:30:27.844900 containerd[1443]: time="2025-05-15T00:30:27.844906172Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:30:28.812703 kubelet[2443]: E0515 00:30:28.812648 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:30:28.815179 containerd[1443]: time="2025-05-15T00:30:28.814826647Z" level=info msg="CreateContainer within sandbox \"b0bbee305341b530a1028381051d5c9644c2680a62ed199e3ec2d5f2ca5c538e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 15 00:30:28.829614 containerd[1443]: time="2025-05-15T00:30:28.829262984Z" level=info msg="CreateContainer within sandbox \"b0bbee305341b530a1028381051d5c9644c2680a62ed199e3ec2d5f2ca5c538e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"35f1b3e1925b312d062c7a3a8070719fa8708ccb16810ffc5b42a327a9ddc3bc\""
May 15 00:30:28.831795 containerd[1443]: time="2025-05-15T00:30:28.830608614Z" level=info msg="StartContainer for \"35f1b3e1925b312d062c7a3a8070719fa8708ccb16810ffc5b42a327a9ddc3bc\""
May 15 00:30:28.861431 systemd[1]: Started cri-containerd-35f1b3e1925b312d062c7a3a8070719fa8708ccb16810ffc5b42a327a9ddc3bc.scope - libcontainer container 35f1b3e1925b312d062c7a3a8070719fa8708ccb16810ffc5b42a327a9ddc3bc.
May 15 00:30:28.883674 containerd[1443]: time="2025-05-15T00:30:28.883632274Z" level=info msg="StartContainer for \"35f1b3e1925b312d062c7a3a8070719fa8708ccb16810ffc5b42a327a9ddc3bc\" returns successfully"
May 15 00:30:28.891007 systemd[1]: cri-containerd-35f1b3e1925b312d062c7a3a8070719fa8708ccb16810ffc5b42a327a9ddc3bc.scope: Deactivated successfully.
May 15 00:30:28.921385 containerd[1443]: time="2025-05-15T00:30:28.920806408Z" level=info msg="shim disconnected" id=35f1b3e1925b312d062c7a3a8070719fa8708ccb16810ffc5b42a327a9ddc3bc namespace=k8s.io
May 15 00:30:28.921385 containerd[1443]: time="2025-05-15T00:30:28.920866688Z" level=warning msg="cleaning up after shim disconnected" id=35f1b3e1925b312d062c7a3a8070719fa8708ccb16810ffc5b42a327a9ddc3bc namespace=k8s.io
May 15 00:30:28.921385 containerd[1443]: time="2025-05-15T00:30:28.920876928Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:30:29.497512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35f1b3e1925b312d062c7a3a8070719fa8708ccb16810ffc5b42a327a9ddc3bc-rootfs.mount: Deactivated successfully.
May 15 00:30:29.816449 kubelet[2443]: E0515 00:30:29.816326 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:30:29.819792 containerd[1443]: time="2025-05-15T00:30:29.819658861Z" level=info msg="CreateContainer within sandbox \"b0bbee305341b530a1028381051d5c9644c2680a62ed199e3ec2d5f2ca5c538e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 15 00:30:29.840469 containerd[1443]: time="2025-05-15T00:30:29.840408015Z" level=info msg="CreateContainer within sandbox \"b0bbee305341b530a1028381051d5c9644c2680a62ed199e3ec2d5f2ca5c538e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"414f079c2ce6d8818e4ccbb3a9326b3331e91d611377eaceef6a3cdff6ef868e\""
May 15 00:30:29.840963 containerd[1443]: time="2025-05-15T00:30:29.840936292Z" level=info msg="StartContainer for \"414f079c2ce6d8818e4ccbb3a9326b3331e91d611377eaceef6a3cdff6ef868e\""
May 15 00:30:29.870444 systemd[1]: Started cri-containerd-414f079c2ce6d8818e4ccbb3a9326b3331e91d611377eaceef6a3cdff6ef868e.scope - libcontainer container 414f079c2ce6d8818e4ccbb3a9326b3331e91d611377eaceef6a3cdff6ef868e.
May 15 00:30:29.897682 systemd[1]: cri-containerd-414f079c2ce6d8818e4ccbb3a9326b3331e91d611377eaceef6a3cdff6ef868e.scope: Deactivated successfully.
May 15 00:30:29.898178 containerd[1443]: time="2025-05-15T00:30:29.897967426Z" level=info msg="StartContainer for \"414f079c2ce6d8818e4ccbb3a9326b3331e91d611377eaceef6a3cdff6ef868e\" returns successfully"
May 15 00:30:29.930152 containerd[1443]: time="2025-05-15T00:30:29.930088791Z" level=info msg="shim disconnected" id=414f079c2ce6d8818e4ccbb3a9326b3331e91d611377eaceef6a3cdff6ef868e namespace=k8s.io
May 15 00:30:29.930152 containerd[1443]: time="2025-05-15T00:30:29.930146670Z" level=warning msg="cleaning up after shim disconnected" id=414f079c2ce6d8818e4ccbb3a9326b3331e91d611377eaceef6a3cdff6ef868e namespace=k8s.io
May 15 00:30:29.930152 containerd[1443]: time="2025-05-15T00:30:29.930156830Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:30:30.497523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-414f079c2ce6d8818e4ccbb3a9326b3331e91d611377eaceef6a3cdff6ef868e-rootfs.mount: Deactivated successfully.
May 15 00:30:30.820640 kubelet[2443]: E0515 00:30:30.820527 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:30:30.831452 containerd[1443]: time="2025-05-15T00:30:30.831060311Z" level=info msg="CreateContainer within sandbox \"b0bbee305341b530a1028381051d5c9644c2680a62ed199e3ec2d5f2ca5c538e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 15 00:30:30.905024 containerd[1443]: time="2025-05-15T00:30:30.904963499Z" level=info msg="CreateContainer within sandbox \"b0bbee305341b530a1028381051d5c9644c2680a62ed199e3ec2d5f2ca5c538e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"32837b4c6fc81f4d92db17797e296782f5499faee6a11b9e126e9baaedb7dbda\""
May 15 00:30:30.907158 containerd[1443]: time="2025-05-15T00:30:30.907121329Z" level=info msg="StartContainer for \"32837b4c6fc81f4d92db17797e296782f5499faee6a11b9e126e9baaedb7dbda\""
May 15 00:30:30.952494 systemd[1]: Started cri-containerd-32837b4c6fc81f4d92db17797e296782f5499faee6a11b9e126e9baaedb7dbda.scope - libcontainer container 32837b4c6fc81f4d92db17797e296782f5499faee6a11b9e126e9baaedb7dbda.
May 15 00:30:31.011407 systemd[1]: cri-containerd-32837b4c6fc81f4d92db17797e296782f5499faee6a11b9e126e9baaedb7dbda.scope: Deactivated successfully.
May 15 00:30:31.014636 containerd[1443]: time="2025-05-15T00:30:31.014583363Z" level=info msg="StartContainer for \"32837b4c6fc81f4d92db17797e296782f5499faee6a11b9e126e9baaedb7dbda\" returns successfully"
May 15 00:30:31.035017 containerd[1443]: time="2025-05-15T00:30:31.034955081Z" level=info msg="shim disconnected" id=32837b4c6fc81f4d92db17797e296782f5499faee6a11b9e126e9baaedb7dbda namespace=k8s.io
May 15 00:30:31.035017 containerd[1443]: time="2025-05-15T00:30:31.035007441Z" level=warning msg="cleaning up after shim disconnected" id=32837b4c6fc81f4d92db17797e296782f5499faee6a11b9e126e9baaedb7dbda namespace=k8s.io
May 15 00:30:31.035017 containerd[1443]: time="2025-05-15T00:30:31.035017081Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:30:31.497558 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32837b4c6fc81f4d92db17797e296782f5499faee6a11b9e126e9baaedb7dbda-rootfs.mount: Deactivated successfully.
May 15 00:30:31.824613 kubelet[2443]: E0515 00:30:31.824492 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:30:31.827516 containerd[1443]: time="2025-05-15T00:30:31.827457269Z" level=info msg="CreateContainer within sandbox \"b0bbee305341b530a1028381051d5c9644c2680a62ed199e3ec2d5f2ca5c538e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 15 00:30:31.849568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3188825359.mount: Deactivated successfully.
May 15 00:30:31.850623 containerd[1443]: time="2025-05-15T00:30:31.850579616Z" level=info msg="CreateContainer within sandbox \"b0bbee305341b530a1028381051d5c9644c2680a62ed199e3ec2d5f2ca5c538e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9a47d5bf1926f27ca59aadd5271b7ac5754cfc118e4456b226c54157a55827f4\""
May 15 00:30:31.851258 containerd[1443]: time="2025-05-15T00:30:31.851210214Z" level=info msg="StartContainer for \"9a47d5bf1926f27ca59aadd5271b7ac5754cfc118e4456b226c54157a55827f4\""
May 15 00:30:31.880437 systemd[1]: Started cri-containerd-9a47d5bf1926f27ca59aadd5271b7ac5754cfc118e4456b226c54157a55827f4.scope - libcontainer container 9a47d5bf1926f27ca59aadd5271b7ac5754cfc118e4456b226c54157a55827f4.
May 15 00:30:31.905025 containerd[1443]: time="2025-05-15T00:30:31.904887999Z" level=info msg="StartContainer for \"9a47d5bf1926f27ca59aadd5271b7ac5754cfc118e4456b226c54157a55827f4\" returns successfully"
May 15 00:30:32.176255 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 15 00:30:32.840973 kubelet[2443]: E0515 00:30:32.840573 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:30:33.842889 kubelet[2443]: E0515 00:30:33.842856 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:30:35.060877 systemd-networkd[1377]: lxc_health: Link UP
May 15 00:30:35.067698 systemd-networkd[1377]: lxc_health: Gained carrier
May 15 00:30:35.644263 kubelet[2443]: E0515 00:30:35.644201 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:30:35.663515 kubelet[2443]: I0515 00:30:35.663440 2443 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8nfw7" podStartSLOduration=8.66342409 podStartE2EDuration="8.66342409s" podCreationTimestamp="2025-05-15 00:30:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:30:32.859388666 +0000 UTC m=+80.364295692" watchObservedRunningTime="2025-05-15 00:30:35.66342409 +0000 UTC m=+83.168331116"
May 15 00:30:35.846009 kubelet[2443]: E0515 00:30:35.845967 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:30:36.090924 systemd[1]: run-containerd-runc-k8s.io-9a47d5bf1926f27ca59aadd5271b7ac5754cfc118e4456b226c54157a55827f4-runc.0vH5VL.mount: Deactivated successfully.
May 15 00:30:36.538433 systemd-networkd[1377]: lxc_health: Gained IPv6LL
May 15 00:30:36.607397 kubelet[2443]: E0515 00:30:36.607368 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:30:36.847941 kubelet[2443]: E0515 00:30:36.847598 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:30:40.363295 sshd[4263]: pam_unix(sshd:session): session closed for user core
May 15 00:30:40.374011 systemd-logind[1417]: Session 25 logged out. Waiting for processes to exit.
May 15 00:30:40.375091 systemd[1]: sshd@24-10.0.0.115:22-10.0.0.1:50268.service: Deactivated successfully.
May 15 00:30:40.379004 systemd[1]: session-25.scope: Deactivated successfully.
May 15 00:30:40.380235 systemd-logind[1417]: Removed session 25.