May 15 09:43:13.904499 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 15 09:43:13.904519 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu May 15 08:06:05 -00 2025
May 15 09:43:13.904529 kernel: KASLR enabled
May 15 09:43:13.904534 kernel: efi: EFI v2.7 by EDK II
May 15 09:43:13.904540 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
May 15 09:43:13.904545 kernel: random: crng init done
May 15 09:43:13.904552 kernel: secureboot: Secure boot disabled
May 15 09:43:13.904558 kernel: ACPI: Early table checksum verification disabled
May 15 09:43:13.904564 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
May 15 09:43:13.904594 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 15 09:43:13.904601 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 15 09:43:13.904607 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 09:43:13.904613 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 15 09:43:13.904619 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 09:43:13.904626 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 09:43:13.904634 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 09:43:13.904640 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 09:43:13.904647 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 15 09:43:13.904653 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 15 09:43:13.904659 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 15 09:43:13.904665 kernel: NUMA: Failed to initialise from firmware
May 15 09:43:13.904671 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 15 09:43:13.904677 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 15 09:43:13.904683 kernel: Zone ranges:
May 15 09:43:13.904689 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 15 09:43:13.904697 kernel: DMA32 empty
May 15 09:43:13.904702 kernel: Normal empty
May 15 09:43:13.904708 kernel: Movable zone start for each node
May 15 09:43:13.904715 kernel: Early memory node ranges
May 15 09:43:13.904721 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 15 09:43:13.904727 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 15 09:43:13.904733 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 15 09:43:13.904739 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 15 09:43:13.904745 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 15 09:43:13.904751 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 15 09:43:13.904757 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 15 09:43:13.904763 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 15 09:43:13.904777 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 15 09:43:13.904784 kernel: psci: probing for conduit method from ACPI.
May 15 09:43:13.904790 kernel: psci: PSCIv1.1 detected in firmware.
May 15 09:43:13.904800 kernel: psci: Using standard PSCI v0.2 function IDs
May 15 09:43:13.904806 kernel: psci: Trusted OS migration not required
May 15 09:43:13.904813 kernel: psci: SMC Calling Convention v1.1
May 15 09:43:13.904821 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 15 09:43:13.904827 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 15 09:43:13.904834 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 15 09:43:13.904841 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 15 09:43:13.904847 kernel: Detected PIPT I-cache on CPU0
May 15 09:43:13.904854 kernel: CPU features: detected: GIC system register CPU interface
May 15 09:43:13.904861 kernel: CPU features: detected: Hardware dirty bit management
May 15 09:43:13.904867 kernel: CPU features: detected: Spectre-v4
May 15 09:43:13.904874 kernel: CPU features: detected: Spectre-BHB
May 15 09:43:13.904880 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 15 09:43:13.904888 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 15 09:43:13.904894 kernel: CPU features: detected: ARM erratum 1418040
May 15 09:43:13.904901 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 15 09:43:13.904907 kernel: alternatives: applying boot alternatives
May 15 09:43:13.904915 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d0dcc1a3c20c0187ebc71aef3b6915c891fced8fde4a46120a0dd669765b171b
May 15 09:43:13.904922 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 09:43:13.904929 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 15 09:43:13.904935 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 09:43:13.904942 kernel: Fallback order for Node 0: 0
May 15 09:43:13.904949 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 15 09:43:13.904955 kernel: Policy zone: DMA
May 15 09:43:13.904963 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 09:43:13.904969 kernel: software IO TLB: area num 4.
May 15 09:43:13.904976 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 15 09:43:13.904983 kernel: Memory: 2386260K/2572288K available (10240K kernel code, 2186K rwdata, 8108K rodata, 39744K init, 897K bss, 186028K reserved, 0K cma-reserved)
May 15 09:43:13.904989 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 15 09:43:13.904996 kernel: rcu: Preemptible hierarchical RCU implementation.
May 15 09:43:13.905003 kernel: rcu: RCU event tracing is enabled.
May 15 09:43:13.905010 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 15 09:43:13.905017 kernel: Trampoline variant of Tasks RCU enabled.
May 15 09:43:13.905023 kernel: Tracing variant of Tasks RCU enabled.
May 15 09:43:13.905030 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 09:43:13.905036 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 15 09:43:13.905044 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 15 09:43:13.905051 kernel: GICv3: 256 SPIs implemented
May 15 09:43:13.905057 kernel: GICv3: 0 Extended SPIs implemented
May 15 09:43:13.905064 kernel: Root IRQ handler: gic_handle_irq
May 15 09:43:13.905070 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 15 09:43:13.905077 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 15 09:43:13.905083 kernel: ITS [mem 0x08080000-0x0809ffff]
May 15 09:43:13.905090 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 15 09:43:13.905097 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 15 09:43:13.905104 kernel: GICv3: using LPI property table @0x00000000400f0000
May 15 09:43:13.905110 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 15 09:43:13.905118 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 15 09:43:13.905125 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 09:43:13.905131 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 15 09:43:13.905138 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 15 09:43:13.905145 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 15 09:43:13.905151 kernel: arm-pv: using stolen time PV
May 15 09:43:13.905158 kernel: Console: colour dummy device 80x25
May 15 09:43:13.905165 kernel: ACPI: Core revision 20230628
May 15 09:43:13.905172 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 15 09:43:13.905179 kernel: pid_max: default: 32768 minimum: 301
May 15 09:43:13.905186 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 15 09:43:13.905193 kernel: landlock: Up and running.
May 15 09:43:13.905200 kernel: SELinux: Initializing.
May 15 09:43:13.905206 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 09:43:13.905213 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 09:43:13.905220 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 15 09:43:13.905227 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 09:43:13.905234 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 09:43:13.905241 kernel: rcu: Hierarchical SRCU implementation.
May 15 09:43:13.905249 kernel: rcu: Max phase no-delay instances is 400.
May 15 09:43:13.905256 kernel: Platform MSI: ITS@0x8080000 domain created
May 15 09:43:13.905262 kernel: PCI/MSI: ITS@0x8080000 domain created
May 15 09:43:13.905269 kernel: Remapping and enabling EFI services.
May 15 09:43:13.905276 kernel: smp: Bringing up secondary CPUs ...
May 15 09:43:13.905282 kernel: Detected PIPT I-cache on CPU1
May 15 09:43:13.905289 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 15 09:43:13.905296 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 15 09:43:13.905303 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 09:43:13.905310 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 15 09:43:13.905318 kernel: Detected PIPT I-cache on CPU2
May 15 09:43:13.905325 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 15 09:43:13.905336 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 15 09:43:13.905344 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 09:43:13.905351 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 15 09:43:13.905358 kernel: Detected PIPT I-cache on CPU3
May 15 09:43:13.905365 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 15 09:43:13.905372 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 15 09:43:13.905379 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 09:43:13.905386 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 15 09:43:13.905395 kernel: smp: Brought up 1 node, 4 CPUs
May 15 09:43:13.905402 kernel: SMP: Total of 4 processors activated.
May 15 09:43:13.905412 kernel: CPU features: detected: 32-bit EL0 Support
May 15 09:43:13.905419 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 15 09:43:13.905426 kernel: CPU features: detected: Common not Private translations
May 15 09:43:13.905434 kernel: CPU features: detected: CRC32 instructions
May 15 09:43:13.905441 kernel: CPU features: detected: Enhanced Virtualization Traps
May 15 09:43:13.905449 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 15 09:43:13.905456 kernel: CPU features: detected: LSE atomic instructions
May 15 09:43:13.905463 kernel: CPU features: detected: Privileged Access Never
May 15 09:43:13.905470 kernel: CPU features: detected: RAS Extension Support
May 15 09:43:13.905478 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 15 09:43:13.905485 kernel: CPU: All CPU(s) started at EL1
May 15 09:43:13.905492 kernel: alternatives: applying system-wide alternatives
May 15 09:43:13.905499 kernel: devtmpfs: initialized
May 15 09:43:13.905506 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 09:43:13.905515 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 15 09:43:13.905522 kernel: pinctrl core: initialized pinctrl subsystem
May 15 09:43:13.905529 kernel: SMBIOS 3.0.0 present.
May 15 09:43:13.905536 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 15 09:43:13.905543 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 09:43:13.905550 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 15 09:43:13.905557 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 15 09:43:13.905564 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 15 09:43:13.905591 kernel: audit: initializing netlink subsys (disabled)
May 15 09:43:13.905600 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
May 15 09:43:13.905620 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 09:43:13.905628 kernel: cpuidle: using governor menu
May 15 09:43:13.905635 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 15 09:43:13.905642 kernel: ASID allocator initialised with 32768 entries
May 15 09:43:13.905649 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 09:43:13.905656 kernel: Serial: AMBA PL011 UART driver
May 15 09:43:13.905663 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 15 09:43:13.905670 kernel: Modules: 0 pages in range for non-PLT usage
May 15 09:43:13.905679 kernel: Modules: 508944 pages in range for PLT usage
May 15 09:43:13.905686 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 15 09:43:13.905693 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 15 09:43:13.905700 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 15 09:43:13.905707 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 15 09:43:13.905714 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 15 09:43:13.905721 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 15 09:43:13.905729 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 15 09:43:13.905735 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 15 09:43:13.905744 kernel: ACPI: Added _OSI(Module Device)
May 15 09:43:13.905751 kernel: ACPI: Added _OSI(Processor Device)
May 15 09:43:13.905758 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 09:43:13.905765 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 09:43:13.905776 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 09:43:13.905784 kernel: ACPI: Interpreter enabled
May 15 09:43:13.905791 kernel: ACPI: Using GIC for interrupt routing
May 15 09:43:13.905798 kernel: ACPI: MCFG table detected, 1 entries
May 15 09:43:13.905805 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 15 09:43:13.905813 kernel: printk: console [ttyAMA0] enabled
May 15 09:43:13.905820 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 09:43:13.905951 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 15 09:43:13.906025 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 15 09:43:13.906088 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 15 09:43:13.906149 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 15 09:43:13.906211 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 15 09:43:13.906222 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 15 09:43:13.906230 kernel: PCI host bridge to bus 0000:00
May 15 09:43:13.906298 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 15 09:43:13.906355 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 15 09:43:13.906417 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 15 09:43:13.906476 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 09:43:13.906554 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 15 09:43:13.906655 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 15 09:43:13.906721 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 15 09:43:13.906794 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 15 09:43:13.906859 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 15 09:43:13.906922 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 15 09:43:13.906985 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 15 09:43:13.907049 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 15 09:43:13.907128 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 15 09:43:13.907185 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 15 09:43:13.907241 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 15 09:43:13.907250 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 15 09:43:13.907257 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 15 09:43:13.907265 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 15 09:43:13.907272 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 15 09:43:13.907279 kernel: iommu: Default domain type: Translated
May 15 09:43:13.907289 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 15 09:43:13.907296 kernel: efivars: Registered efivars operations
May 15 09:43:13.907303 kernel: vgaarb: loaded
May 15 09:43:13.907310 kernel: clocksource: Switched to clocksource arch_sys_counter
May 15 09:43:13.907317 kernel: VFS: Disk quotas dquot_6.6.0
May 15 09:43:13.907324 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 09:43:13.907331 kernel: pnp: PnP ACPI init
May 15 09:43:13.907398 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 15 09:43:13.907409 kernel: pnp: PnP ACPI: found 1 devices
May 15 09:43:13.907416 kernel: NET: Registered PF_INET protocol family
May 15 09:43:13.907423 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 09:43:13.907431 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 09:43:13.907438 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 09:43:13.907445 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 09:43:13.907453 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 15 09:43:13.907460 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 09:43:13.907467 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 09:43:13.907475 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 09:43:13.907483 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 09:43:13.907490 kernel: PCI: CLS 0 bytes, default 64
May 15 09:43:13.907497 kernel: kvm [1]: HYP mode not available
May 15 09:43:13.907504 kernel: Initialise system trusted keyrings
May 15 09:43:13.907511 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 09:43:13.907518 kernel: Key type asymmetric registered
May 15 09:43:13.907525 kernel: Asymmetric key parser 'x509' registered
May 15 09:43:13.907532 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 15 09:43:13.907541 kernel: io scheduler mq-deadline registered
May 15 09:43:13.907548 kernel: io scheduler kyber registered
May 15 09:43:13.907555 kernel: io scheduler bfq registered
May 15 09:43:13.907562 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 15 09:43:13.907569 kernel: ACPI: button: Power Button [PWRB]
May 15 09:43:13.907599 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 15 09:43:13.907682 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 15 09:43:13.907692 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 09:43:13.907700 kernel: thunder_xcv, ver 1.0
May 15 09:43:13.907709 kernel: thunder_bgx, ver 1.0
May 15 09:43:13.907716 kernel: nicpf, ver 1.0
May 15 09:43:13.907723 kernel: nicvf, ver 1.0
May 15 09:43:13.907807 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 15 09:43:13.907874 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-15T09:43:13 UTC (1747302193)
May 15 09:43:13.907884 kernel: hid: raw HID events driver (C) Jiri Kosina
May 15 09:43:13.907892 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 15 09:43:13.907899 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 15 09:43:13.907909 kernel: watchdog: Hard watchdog permanently disabled
May 15 09:43:13.907916 kernel: NET: Registered PF_INET6 protocol family
May 15 09:43:13.907924 kernel: Segment Routing with IPv6
May 15 09:43:13.907933 kernel: In-situ OAM (IOAM) with IPv6
May 15 09:43:13.907940 kernel: NET: Registered PF_PACKET protocol family
May 15 09:43:13.907947 kernel: Key type dns_resolver registered
May 15 09:43:13.907961 kernel: registered taskstats version 1
May 15 09:43:13.907971 kernel: Loading compiled-in X.509 certificates
May 15 09:43:13.907979 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 92c83259b69f308571254e31c325f6266f61f369'
May 15 09:43:13.907988 kernel: Key type .fscrypt registered
May 15 09:43:13.907995 kernel: Key type fscrypt-provisioning registered
May 15 09:43:13.908003 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 09:43:13.908010 kernel: ima: Allocated hash algorithm: sha1
May 15 09:43:13.908018 kernel: ima: No architecture policies found
May 15 09:43:13.908026 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 15 09:43:13.908034 kernel: clk: Disabling unused clocks
May 15 09:43:13.908041 kernel: Freeing unused kernel memory: 39744K
May 15 09:43:13.908048 kernel: Run /init as init process
May 15 09:43:13.908058 kernel: with arguments:
May 15 09:43:13.908068 kernel: /init
May 15 09:43:13.908082 kernel: with environment:
May 15 09:43:13.908089 kernel: HOME=/
May 15 09:43:13.908096 kernel: TERM=linux
May 15 09:43:13.908103 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 09:43:13.908112 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 15 09:43:13.908121 systemd[1]: Detected virtualization kvm.
May 15 09:43:13.908130 systemd[1]: Detected architecture arm64.
May 15 09:43:13.908137 systemd[1]: Running in initrd.
May 15 09:43:13.908144 systemd[1]: No hostname configured, using default hostname.
May 15 09:43:13.908151 systemd[1]: Hostname set to .
May 15 09:43:13.908159 systemd[1]: Initializing machine ID from VM UUID.
May 15 09:43:13.908167 systemd[1]: Queued start job for default target initrd.target.
May 15 09:43:13.908174 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 09:43:13.908182 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 09:43:13.908192 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 15 09:43:13.908199 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 09:43:13.908207 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 15 09:43:13.908215 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 15 09:43:13.908225 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 15 09:43:13.908233 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 15 09:43:13.908242 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 09:43:13.908250 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 09:43:13.908258 systemd[1]: Reached target paths.target - Path Units.
May 15 09:43:13.908266 systemd[1]: Reached target slices.target - Slice Units.
May 15 09:43:13.908273 systemd[1]: Reached target swap.target - Swaps.
May 15 09:43:13.908281 systemd[1]: Reached target timers.target - Timer Units.
May 15 09:43:13.908288 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 15 09:43:13.908296 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 09:43:13.908304 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 15 09:43:13.908313 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 15 09:43:13.908321 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 09:43:13.908329 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 09:43:13.908336 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 09:43:13.908344 systemd[1]: Reached target sockets.target - Socket Units.
May 15 09:43:13.908351 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 15 09:43:13.908359 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 09:43:13.908367 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 15 09:43:13.908374 systemd[1]: Starting systemd-fsck-usr.service...
May 15 09:43:13.908383 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 09:43:13.908391 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 09:43:13.908398 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 09:43:13.908409 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 15 09:43:13.908417 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 09:43:13.908424 systemd[1]: Finished systemd-fsck-usr.service.
May 15 09:43:13.908434 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 09:43:13.908442 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 09:43:13.908450 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 09:43:13.908488 systemd-journald[239]: Collecting audit messages is disabled.
May 15 09:43:13.908509 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 09:43:13.908517 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 09:43:13.908525 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 15 09:43:13.908534 systemd-journald[239]: Journal started
May 15 09:43:13.908553 systemd-journald[239]: Runtime Journal (/run/log/journal/2016bd36924c485886e776d9a307f93a) is 5.9M, max 47.3M, 41.4M free.
May 15 09:43:13.895342 systemd-modules-load[240]: Inserted module 'overlay'
May 15 09:43:13.912070 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 09:43:13.912102 kernel: Bridge firewalling registered
May 15 09:43:13.912481 systemd-modules-load[240]: Inserted module 'br_netfilter'
May 15 09:43:13.914072 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 09:43:13.915721 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 09:43:13.917867 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 09:43:13.918834 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 09:43:13.927344 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 09:43:13.928471 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 09:43:13.930227 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 09:43:13.937716 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 15 09:43:13.939530 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 09:43:13.949034 dracut-cmdline[276]: dracut-dracut-053
May 15 09:43:13.951495 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d0dcc1a3c20c0187ebc71aef3b6915c891fced8fde4a46120a0dd669765b171b
May 15 09:43:13.966021 systemd-resolved[280]: Positive Trust Anchors:
May 15 09:43:13.966094 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 09:43:13.966130 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 09:43:13.970991 systemd-resolved[280]: Defaulting to hostname 'linux'.
May 15 09:43:13.971976 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 09:43:13.973614 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 09:43:14.019604 kernel: SCSI subsystem initialized
May 15 09:43:14.023589 kernel: Loading iSCSI transport class v2.0-870.
May 15 09:43:14.031597 kernel: iscsi: registered transport (tcp)
May 15 09:43:14.045735 kernel: iscsi: registered transport (qla4xxx)
May 15 09:43:14.045750 kernel: QLogic iSCSI HBA Driver
May 15 09:43:14.089210 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 15 09:43:14.103714 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 15 09:43:14.118919 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 15 09:43:14.118976 kernel: device-mapper: uevent: version 1.0.3
May 15 09:43:14.121590 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 15 09:43:14.168603 kernel: raid6: neonx8 gen() 15745 MB/s
May 15 09:43:14.185595 kernel: raid6: neonx4 gen() 15594 MB/s
May 15 09:43:14.202592 kernel: raid6: neonx2 gen() 13193 MB/s
May 15 09:43:14.219596 kernel: raid6: neonx1 gen() 10460 MB/s
May 15 09:43:14.236597 kernel: raid6: int64x8 gen() 6947 MB/s
May 15 09:43:14.253593 kernel: raid6: int64x4 gen() 7335 MB/s
May 15 09:43:14.270597 kernel: raid6: int64x2 gen() 6118 MB/s
May 15 09:43:14.287593 kernel: raid6: int64x1 gen() 5053 MB/s
May 15 09:43:14.287618 kernel: raid6: using algorithm neonx8 gen() 15745 MB/s
May 15 09:43:14.304595 kernel: raid6: .... xor() 11925 MB/s, rmw enabled
May 15 09:43:14.304607 kernel: raid6: using neon recovery algorithm
May 15 09:43:14.309589 kernel: xor: measuring software checksum speed
May 15 09:43:14.309606 kernel: 8regs : 19745 MB/sec
May 15 09:43:14.310968 kernel: 32regs : 18403 MB/sec
May 15 09:43:14.311002 kernel: arm64_neon : 26927 MB/sec
May 15 09:43:14.311021 kernel: xor: using function: arm64_neon (26927 MB/sec)
May 15 09:43:14.362607 kernel: Btrfs loaded, zoned=no, fsverity=no
May 15 09:43:14.373271 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 15 09:43:14.397746 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 09:43:14.408926 systemd-udevd[462]: Using default interface naming scheme 'v255'.
May 15 09:43:14.412055 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 09:43:14.413692 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 15 09:43:14.442126 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
May 15 09:43:14.468225 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 09:43:14.475734 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 09:43:14.515063 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 09:43:14.527746 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 15 09:43:14.539385 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 15 09:43:14.540520 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 09:43:14.541889 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 09:43:14.544180 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 09:43:14.551736 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 15 09:43:14.561213 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 15 09:43:14.561384 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 15 09:43:14.561331 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 09:43:14.564301 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 15 09:43:14.561442 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 09:43:14.566835 kernel: GPT:9289727 != 19775487
May 15 09:43:14.566858 kernel: GPT:Alternate GPT header not at the end of the disk.
May 15 09:43:14.566870 kernel: GPT:9289727 != 19775487
May 15 09:43:14.566878 kernel: GPT: Use GNU Parted to correct GPT errors.
May 15 09:43:14.564349 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 09:43:14.569469 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 09:43:14.568513 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 09:43:14.568689 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 09:43:14.570316 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 15 09:43:14.581499 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 09:43:14.586019 kernel: BTRFS: device fsid 7f05ae4e-a0c8-4dcf-a71f-4c5b9e94e6f4 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (525)
May 15 09:43:14.586058 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (509)
May 15 09:43:14.584865 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 15 09:43:14.592902 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 09:43:14.600975 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 15 09:43:14.611203 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 15 09:43:14.614904 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 15 09:43:14.615904 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 15 09:43:14.621516 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 15 09:43:14.634723 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 15 09:43:14.636250 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 09:43:14.655588 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 09:43:14.662939 disk-uuid[553]: Primary Header is updated.
May 15 09:43:14.662939 disk-uuid[553]: Secondary Entries is updated.
May 15 09:43:14.662939 disk-uuid[553]: Secondary Header is updated.
May 15 09:43:14.666608 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 09:43:15.678349 disk-uuid[562]: The operation has completed successfully.
May 15 09:43:15.679746 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 09:43:15.701370 systemd[1]: disk-uuid.service: Deactivated successfully.
May 15 09:43:15.701468 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 15 09:43:15.719730 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 15 09:43:15.723609 sh[573]: Success
May 15 09:43:15.738600 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 15 09:43:15.765814 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 15 09:43:15.780063 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 15 09:43:15.781908 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 15 09:43:15.791596 kernel: BTRFS info (device dm-0): first mount of filesystem 7f05ae4e-a0c8-4dcf-a71f-4c5b9e94e6f4
May 15 09:43:15.791640 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 15 09:43:15.791650 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 15 09:43:15.792028 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 15 09:43:15.793074 kernel: BTRFS info (device dm-0): using free space tree
May 15 09:43:15.796132 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 15 09:43:15.797253 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 15 09:43:15.797989 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 15 09:43:15.799928 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 15 09:43:15.809937 kernel: BTRFS info (device vda6): first mount of filesystem dd768540-f927-459a-82ec-deed8f3baa7c
May 15 09:43:15.809988 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 15 09:43:15.810000 kernel: BTRFS info (device vda6): using free space tree
May 15 09:43:15.812613 kernel: BTRFS info (device vda6): auto enabling async discard
May 15 09:43:15.820157 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 15 09:43:15.821735 kernel: BTRFS info (device vda6): last unmount of filesystem dd768540-f927-459a-82ec-deed8f3baa7c
May 15 09:43:15.830619 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 15 09:43:15.839804 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 15 09:43:15.900082 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 09:43:15.914797 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 09:43:15.940729 systemd-networkd[766]: lo: Link UP
May 15 09:43:15.941406 systemd-networkd[766]: lo: Gained carrier
May 15 09:43:15.942241 systemd-networkd[766]: Enumeration completed
May 15 09:43:15.942702 ignition[672]: Ignition 2.20.0
May 15 09:43:15.942348 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 09:43:15.942709 ignition[672]: Stage: fetch-offline
May 15 09:43:15.942738 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 09:43:15.942743 ignition[672]: no configs at "/usr/lib/ignition/base.d"
May 15 09:43:15.942741 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 09:43:15.942751 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 09:43:15.943589 systemd-networkd[766]: eth0: Link UP
May 15 09:43:15.942937 ignition[672]: parsed url from cmdline: ""
May 15 09:43:15.943592 systemd-networkd[766]: eth0: Gained carrier
May 15 09:43:15.942940 ignition[672]: no config URL provided
May 15 09:43:15.943598 systemd[1]: Reached target network.target - Network.
May 15 09:43:15.942945 ignition[672]: reading system config file "/usr/lib/ignition/user.ign"
May 15 09:43:15.943599 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 09:43:15.942951 ignition[672]: no config at "/usr/lib/ignition/user.ign"
May 15 09:43:15.942976 ignition[672]: op(1): [started] loading QEMU firmware config module
May 15 09:43:15.942981 ignition[672]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 15 09:43:15.951857 ignition[672]: op(1): [finished] loading QEMU firmware config module
May 15 09:43:15.961615 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.135/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 15 09:43:15.995867 ignition[672]: parsing config with SHA512: 1b6b8e24296e3af97c5baaefb3ea2f5da87ea2fca9dc35d70bebf2cbfaa7cc492a0dedccef28ed1a4cd1a73771d21e66f2b88b56b4cc373d5da4abdb064aed4a
May 15 09:43:16.001115 unknown[672]: fetched base config from "system"
May 15 09:43:16.001125 unknown[672]: fetched user config from "qemu"
May 15 09:43:16.001587 ignition[672]: fetch-offline: fetch-offline passed
May 15 09:43:16.001665 ignition[672]: Ignition finished successfully
May 15 09:43:16.004380 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 09:43:16.006266 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 15 09:43:16.025741 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 15 09:43:16.037192 ignition[773]: Ignition 2.20.0
May 15 09:43:16.037205 ignition[773]: Stage: kargs
May 15 09:43:16.037374 ignition[773]: no configs at "/usr/lib/ignition/base.d"
May 15 09:43:16.037382 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 09:43:16.038330 ignition[773]: kargs: kargs passed
May 15 09:43:16.038376 ignition[773]: Ignition finished successfully
May 15 09:43:16.042650 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 15 09:43:16.052733 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 15 09:43:16.062072 ignition[782]: Ignition 2.20.0
May 15 09:43:16.062082 ignition[782]: Stage: disks
May 15 09:43:16.062250 ignition[782]: no configs at "/usr/lib/ignition/base.d"
May 15 09:43:16.062260 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 09:43:16.063152 ignition[782]: disks: disks passed
May 15 09:43:16.063200 ignition[782]: Ignition finished successfully
May 15 09:43:16.065920 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 15 09:43:16.066868 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 15 09:43:16.068035 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 15 09:43:16.069600 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 09:43:16.071046 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 09:43:16.072379 systemd[1]: Reached target basic.target - Basic System.
May 15 09:43:16.087754 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 15 09:43:16.097771 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 15 09:43:16.105262 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 15 09:43:16.118759 systemd[1]: Mounting sysroot.mount - /sysroot...
May 15 09:43:16.158361 systemd[1]: Mounted sysroot.mount - /sysroot.
May 15 09:43:16.159597 kernel: EXT4-fs (vda9): mounted filesystem e3ca107a-d829-49e7-81f2-462a85be67d1 r/w with ordered data mode. Quota mode: none.
May 15 09:43:16.159531 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 15 09:43:16.170677 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 09:43:16.172302 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 15 09:43:16.173307 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 15 09:43:16.173392 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 15 09:43:16.173420 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 09:43:16.180059 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (801)
May 15 09:43:16.180010 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 15 09:43:16.183152 kernel: BTRFS info (device vda6): first mount of filesystem dd768540-f927-459a-82ec-deed8f3baa7c
May 15 09:43:16.183168 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 15 09:43:16.183178 kernel: BTRFS info (device vda6): using free space tree
May 15 09:43:16.182297 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 15 09:43:16.186598 kernel: BTRFS info (device vda6): auto enabling async discard
May 15 09:43:16.187993 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 09:43:16.232538 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
May 15 09:43:16.236996 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
May 15 09:43:16.241124 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
May 15 09:43:16.244925 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
May 15 09:43:16.310537 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 15 09:43:16.319701 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 15 09:43:16.321013 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 15 09:43:16.326588 kernel: BTRFS info (device vda6): last unmount of filesystem dd768540-f927-459a-82ec-deed8f3baa7c
May 15 09:43:16.342494 ignition[915]: INFO : Ignition 2.20.0
May 15 09:43:16.342494 ignition[915]: INFO : Stage: mount
May 15 09:43:16.342494 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 09:43:16.342494 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 09:43:16.347096 ignition[915]: INFO : mount: mount passed
May 15 09:43:16.347096 ignition[915]: INFO : Ignition finished successfully
May 15 09:43:16.342530 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 15 09:43:16.344903 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 15 09:43:16.351694 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 15 09:43:16.790854 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 15 09:43:16.809784 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 09:43:16.815897 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (927)
May 15 09:43:16.815925 kernel: BTRFS info (device vda6): first mount of filesystem dd768540-f927-459a-82ec-deed8f3baa7c
May 15 09:43:16.815936 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 15 09:43:16.817096 kernel: BTRFS info (device vda6): using free space tree
May 15 09:43:16.818586 kernel: BTRFS info (device vda6): auto enabling async discard
May 15 09:43:16.819950 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 09:43:16.836865 ignition[944]: INFO : Ignition 2.20.0
May 15 09:43:16.836865 ignition[944]: INFO : Stage: files
May 15 09:43:16.838100 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 09:43:16.838100 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 09:43:16.838100 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
May 15 09:43:16.840666 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 15 09:43:16.840666 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 15 09:43:16.842951 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 15 09:43:16.842951 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 15 09:43:16.842951 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 15 09:43:16.842110 unknown[944]: wrote ssh authorized keys file for user: core
May 15 09:43:16.846582 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 15 09:43:16.846582 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 15 09:43:16.987684 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 15 09:43:17.153523 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 15 09:43:17.153523 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 09:43:17.156231 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 15 09:43:17.493729 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 15 09:43:17.564405 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 09:43:17.564405 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 15 09:43:17.566994 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 15 09:43:17.566994 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 15 09:43:17.566994 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 15 09:43:17.566994 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 09:43:17.566994 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 09:43:17.566994 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 09:43:17.566994 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 09:43:17.566994 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 15 09:43:17.566994 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 15 09:43:17.566994 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 15 09:43:17.578906 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 15 09:43:17.578906 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 15 09:43:17.578906 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
May 15 09:43:17.900055 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 15 09:43:17.912648 systemd-networkd[766]: eth0: Gained IPv6LL
May 15 09:43:18.062124 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 15 09:43:18.062124 ignition[944]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 15 09:43:18.064728 ignition[944]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 09:43:18.064728 ignition[944]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 09:43:18.064728 ignition[944]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 15 09:43:18.064728 ignition[944]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 15 09:43:18.064728 ignition[944]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 15 09:43:18.064728 ignition[944]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 15 09:43:18.064728 ignition[944]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 15 09:43:18.064728 ignition[944]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 15 09:43:18.088444 ignition[944]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 15 09:43:18.091742 ignition[944]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 15 09:43:18.092880 ignition[944]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 15 09:43:18.092880 ignition[944]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 15 09:43:18.092880 ignition[944]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 15 09:43:18.092880 ignition[944]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 15 09:43:18.092880 ignition[944]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 15 09:43:18.092880 ignition[944]: INFO : files: files passed
May 15 09:43:18.092880 ignition[944]: INFO : Ignition finished successfully
May 15 09:43:18.094972 systemd[1]: Finished ignition-files.service - Ignition (files).
May 15 09:43:18.106749 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 15 09:43:18.111742 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 15 09:43:18.112723 systemd[1]: ignition-quench.service: Deactivated successfully.
May 15 09:43:18.112813 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 15 09:43:18.118105 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
May 15 09:43:18.119999 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 09:43:18.119999 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 15 09:43:18.122262 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 09:43:18.122415 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 09:43:18.124290 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 15 09:43:18.131702 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 15 09:43:18.150130 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 15 09:43:18.150230 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 15 09:43:18.152033 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 15 09:43:18.153094 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 15 09:43:18.154350 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 15 09:43:18.155050 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 15 09:43:18.168639 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 09:43:18.179739 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 15 09:43:18.186827 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 15 09:43:18.187706 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 09:43:18.189151 systemd[1]: Stopped target timers.target - Timer Units.
May 15 09:43:18.190388 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 15 09:43:18.190524 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 09:43:18.192306 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 15 09:43:18.193720 systemd[1]: Stopped target basic.target - Basic System.
May 15 09:43:18.194956 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 15 09:43:18.196211 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 09:43:18.197568 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 15 09:43:18.199010 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 15 09:43:18.200334 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 09:43:18.201727 systemd[1]: Stopped target sysinit.target - System Initialization.
May 15 09:43:18.203125 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 15 09:43:18.204357 systemd[1]: Stopped target swap.target - Swaps.
May 15 09:43:18.205447 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 15 09:43:18.205554 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 15 09:43:18.207246 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 15 09:43:18.208613 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 09:43:18.210038 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 15 09:43:18.211431 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 09:43:18.212386 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 15 09:43:18.212486 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 15 09:43:18.214565 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 15 09:43:18.214681 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 09:43:18.216104 systemd[1]: Stopped target paths.target - Path Units.
May 15 09:43:18.217276 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 15 09:43:18.222632 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 09:43:18.223595 systemd[1]: Stopped target slices.target - Slice Units.
May 15 09:43:18.225179 systemd[1]: Stopped target sockets.target - Socket Units.
May 15 09:43:18.226301 systemd[1]: iscsid.socket: Deactivated successfully.
May 15 09:43:18.226381 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 15 09:43:18.227476 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 15 09:43:18.227549 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 09:43:18.228654 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 15 09:43:18.228752 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 09:43:18.230119 systemd[1]: ignition-files.service: Deactivated successfully.
May 15 09:43:18.230214 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 15 09:43:18.246835 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 15 09:43:18.247493 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 15 09:43:18.247623 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 09:43:18.249707 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 15 09:43:18.250966 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 15 09:43:18.251072 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 09:43:18.252444 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 15 09:43:18.252553 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 09:43:18.256906 ignition[999]: INFO : Ignition 2.20.0
May 15 09:43:18.256906 ignition[999]: INFO : Stage: umount
May 15 09:43:18.258382 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 09:43:18.258382 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 09:43:18.258382 ignition[999]: INFO : umount: umount passed
May 15 09:43:18.258382 ignition[999]: INFO : Ignition finished successfully
May 15 09:43:18.258558 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 15 09:43:18.258697 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 15 09:43:18.260295 systemd[1]: ignition-mount.service: Deactivated successfully.
May 15 09:43:18.260364 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 15 09:43:18.262447 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 15 09:43:18.263232 systemd[1]: Stopped target network.target - Network.
May 15 09:43:18.264497 systemd[1]: ignition-disks.service: Deactivated successfully.
May 15 09:43:18.264556 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 15 09:43:18.266010 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 15 09:43:18.266048 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 15 09:43:18.267355 systemd[1]: ignition-setup.service: Deactivated successfully.
May 15 09:43:18.267394 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 15 09:43:18.268114 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 15 09:43:18.268149 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 15 09:43:18.269877 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 15 09:43:18.271205 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 15 09:43:18.273196 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 15 09:43:18.273290 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 15 09:43:18.274610 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 15 09:43:18.274696 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 15 09:43:18.279697 systemd-networkd[766]: eth0: DHCPv6 lease lost
May 15 09:43:18.280744 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 15 09:43:18.280850 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 15 09:43:18.282974 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 15 09:43:18.283109 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 15 09:43:18.285104 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 15 09:43:18.285158 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 15 09:43:18.290675 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 15 09:43:18.291343 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 15 09:43:18.291402 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 09:43:18.292908 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 09:43:18.292948 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 15 09:43:18.294250 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 15 09:43:18.294289 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 15 09:43:18.295635 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 15 09:43:18.295677 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 09:43:18.297370 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 09:43:18.305659 systemd[1]: network-cleanup.service: Deactivated successfully.
May 15 09:43:18.305804 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 15 09:43:18.307316 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 15 09:43:18.308360 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 09:43:18.310643 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 15 09:43:18.310715 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 15 09:43:18.312139 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 15 09:43:18.312172 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 09:43:18.313478 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 15 09:43:18.313526 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 15 09:43:18.315643 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 15 09:43:18.315691 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 15 09:43:18.317885 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 09:43:18.317934 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 09:43:18.341847 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 15 09:43:18.342619 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 15 09:43:18.342683 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 09:43:18.344230 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 09:43:18.344272 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 09:43:18.349302 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 15 09:43:18.349407 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 15 09:43:18.351051 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 15 09:43:18.353096 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 15 09:43:18.362695 systemd[1]: Switching root.
May 15 09:43:18.390412 systemd-journald[239]: Journal stopped
May 15 09:43:19.143014 systemd-journald[239]: Received SIGTERM from PID 1 (systemd).
May 15 09:43:19.143063 kernel: SELinux: policy capability network_peer_controls=1
May 15 09:43:19.143074 kernel: SELinux: policy capability open_perms=1
May 15 09:43:19.143087 kernel: SELinux: policy capability extended_socket_class=1
May 15 09:43:19.143097 kernel: SELinux: policy capability always_check_network=0
May 15 09:43:19.143106 kernel: SELinux: policy capability cgroup_seclabel=1
May 15 09:43:19.143115 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 15 09:43:19.143128 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 15 09:43:19.143137 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 15 09:43:19.143146 kernel: audit: type=1403 audit(1747302198.622:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 15 09:43:19.143157 systemd[1]: Successfully loaded SELinux policy in 32.525ms.
May 15 09:43:19.143176 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.031ms.
May 15 09:43:19.143187 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 15 09:43:19.143198 systemd[1]: Detected virtualization kvm.
May 15 09:43:19.143211 systemd[1]: Detected architecture arm64.
May 15 09:43:19.143222 systemd[1]: Detected first boot.
May 15 09:43:19.143233 systemd[1]: Initializing machine ID from VM UUID.
May 15 09:43:19.143244 zram_generator::config[1045]: No configuration found.
May 15 09:43:19.143255 systemd[1]: Populated /etc with preset unit settings.
May 15 09:43:19.143265 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 15 09:43:19.143275 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 15 09:43:19.143285 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 15 09:43:19.143296 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 15 09:43:19.143307 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 15 09:43:19.143319 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 15 09:43:19.143330 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 15 09:43:19.143340 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 15 09:43:19.143352 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 15 09:43:19.143362 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 15 09:43:19.143373 systemd[1]: Created slice user.slice - User and Session Slice.
May 15 09:43:19.143383 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 09:43:19.143393 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 09:43:19.143404 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 15 09:43:19.143416 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 15 09:43:19.143427 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 15 09:43:19.143437 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 09:43:19.143447 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 15 09:43:19.143459 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 09:43:19.143469 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 15 09:43:19.143480 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 15 09:43:19.143491 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 15 09:43:19.143503 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 15 09:43:19.143513 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 09:43:19.143523 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 09:43:19.143533 systemd[1]: Reached target slices.target - Slice Units.
May 15 09:43:19.143543 systemd[1]: Reached target swap.target - Swaps.
May 15 09:43:19.143553 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 15 09:43:19.143564 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 15 09:43:19.143584 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 09:43:19.143596 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 09:43:19.143608 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 09:43:19.143618 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 15 09:43:19.143628 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 15 09:43:19.143638 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 15 09:43:19.143648 systemd[1]: Mounting media.mount - External Media Directory...
May 15 09:43:19.143658 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 15 09:43:19.143669 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 15 09:43:19.143679 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 15 09:43:19.143691 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 15 09:43:19.143701 systemd[1]: Reached target machines.target - Containers.
May 15 09:43:19.143711 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 15 09:43:19.143722 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 09:43:19.143732 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 09:43:19.143742 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 15 09:43:19.143753 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 09:43:19.143770 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 09:43:19.143781 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 09:43:19.143793 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 15 09:43:19.143804 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 09:43:19.143815 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 15 09:43:19.143825 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 15 09:43:19.143836 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 15 09:43:19.143846 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 15 09:43:19.143855 kernel: fuse: init (API version 7.39)
May 15 09:43:19.143865 systemd[1]: Stopped systemd-fsck-usr.service.
May 15 09:43:19.143876 kernel: loop: module loaded
May 15 09:43:19.143886 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 09:43:19.143896 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 09:43:19.143907 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 15 09:43:19.143917 kernel: ACPI: bus type drm_connector registered
May 15 09:43:19.143927 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 15 09:43:19.143937 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 09:43:19.143947 systemd[1]: verity-setup.service: Deactivated successfully.
May 15 09:43:19.143957 systemd[1]: Stopped verity-setup.service.
May 15 09:43:19.143967 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 15 09:43:19.143979 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 15 09:43:19.143989 systemd[1]: Mounted media.mount - External Media Directory.
May 15 09:43:19.144000 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 15 09:43:19.144010 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 15 09:43:19.144037 systemd-journald[1105]: Collecting audit messages is disabled.
May 15 09:43:19.144059 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 15 09:43:19.144070 systemd-journald[1105]: Journal started
May 15 09:43:19.144090 systemd-journald[1105]: Runtime Journal (/run/log/journal/2016bd36924c485886e776d9a307f93a) is 5.9M, max 47.3M, 41.4M free.
May 15 09:43:18.974988 systemd[1]: Queued start job for default target multi-user.target.
May 15 09:43:18.989453 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 15 09:43:18.989802 systemd[1]: systemd-journald.service: Deactivated successfully.
May 15 09:43:19.146936 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 09:43:19.148614 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 09:43:19.149047 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 15 09:43:19.149218 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 15 09:43:19.150318 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 09:43:19.151133 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 09:43:19.152223 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 09:43:19.152349 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 09:43:19.153352 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 09:43:19.153474 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 09:43:19.154774 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 15 09:43:19.154903 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 15 09:43:19.156065 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 09:43:19.156195 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 09:43:19.157210 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 09:43:19.158380 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 15 09:43:19.159650 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 15 09:43:19.169476 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 15 09:43:19.171557 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 15 09:43:19.185647 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 15 09:43:19.187347 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 15 09:43:19.188170 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 15 09:43:19.188206 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 09:43:19.189832 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 15 09:43:19.191671 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 15 09:43:19.193397 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 15 09:43:19.194262 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 09:43:19.195623 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 15 09:43:19.197152 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 15 09:43:19.197987 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 09:43:19.198798 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 15 09:43:19.199637 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 09:43:19.202730 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 09:43:19.208172 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 15 09:43:19.211902 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 15 09:43:19.214637 systemd-journald[1105]: Time spent on flushing to /var/log/journal/2016bd36924c485886e776d9a307f93a is 22.614ms for 860 entries.
May 15 09:43:19.214637 systemd-journald[1105]: System Journal (/var/log/journal/2016bd36924c485886e776d9a307f93a) is 8.0M, max 195.6M, 187.6M free.
May 15 09:43:19.245831 systemd-journald[1105]: Received client request to flush runtime journal.
May 15 09:43:19.245887 kernel: loop0: detected capacity change from 0 to 116808
May 15 09:43:19.216859 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 09:43:19.218468 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 15 09:43:19.219767 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 15 09:43:19.222665 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 15 09:43:19.224766 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 15 09:43:19.228333 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 15 09:43:19.235713 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 15 09:43:19.239861 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 15 09:43:19.247352 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 15 09:43:19.248611 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 09:43:19.250690 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 15 09:43:19.266014 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 15 09:43:19.266203 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 15 09:43:19.274723 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 09:43:19.276216 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 15 09:43:19.276734 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 15 09:43:19.281901 kernel: loop1: detected capacity change from 0 to 113536
May 15 09:43:19.295769 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
May 15 09:43:19.295786 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
May 15 09:43:19.300650 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 09:43:19.327071 kernel: loop2: detected capacity change from 0 to 189592
May 15 09:43:19.373607 kernel: loop3: detected capacity change from 0 to 116808
May 15 09:43:19.378603 kernel: loop4: detected capacity change from 0 to 113536
May 15 09:43:19.382590 kernel: loop5: detected capacity change from 0 to 189592
May 15 09:43:19.386220 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 15 09:43:19.386655 (sd-merge)[1182]: Merged extensions into '/usr'.
May 15 09:43:19.389769 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)...
May 15 09:43:19.389782 systemd[1]: Reloading...
May 15 09:43:19.437672 zram_generator::config[1205]: No configuration found.
May 15 09:43:19.462625 ldconfig[1151]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 15 09:43:19.532120 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 09:43:19.566599 systemd[1]: Reloading finished in 176 ms.
May 15 09:43:19.596616 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 15 09:43:19.597709 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 15 09:43:19.615746 systemd[1]: Starting ensure-sysext.service...
May 15 09:43:19.617466 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 09:43:19.627242 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)...
May 15 09:43:19.627255 systemd[1]: Reloading...
May 15 09:43:19.639563 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 15 09:43:19.639857 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 15 09:43:19.640479 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 15 09:43:19.640734 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
May 15 09:43:19.640799 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
May 15 09:43:19.642879 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot.
May 15 09:43:19.642892 systemd-tmpfiles[1243]: Skipping /boot
May 15 09:43:19.649553 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot.
May 15 09:43:19.649720 systemd-tmpfiles[1243]: Skipping /boot
May 15 09:43:19.672630 zram_generator::config[1270]: No configuration found.
May 15 09:43:19.757074 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 09:43:19.792320 systemd[1]: Reloading finished in 164 ms.
May 15 09:43:19.808731 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 15 09:43:19.822076 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 09:43:19.829461 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 09:43:19.831742 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 15 09:43:19.833668 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 15 09:43:19.837870 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 09:43:19.841911 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 09:43:19.846174 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 15 09:43:19.850429 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 09:43:19.852205 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 09:43:19.855011 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 09:43:19.857186 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 09:43:19.859092 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 09:43:19.862741 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 15 09:43:19.864943 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 15 09:43:19.866793 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 09:43:19.867306 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 09:43:19.868702 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 09:43:19.868834 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 09:43:19.875718 systemd-udevd[1312]: Using default interface naming scheme 'v255'.
May 15 09:43:19.876169 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 09:43:19.876314 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 09:43:19.884503 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 09:43:19.894315 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 09:43:19.897154 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 09:43:19.900807 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 09:43:19.904221 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 09:43:19.906879 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 15 09:43:19.908660 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 15 09:43:19.910393 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 09:43:19.910523 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 09:43:19.914810 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 09:43:19.914952 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 09:43:19.916412 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 15 09:43:19.918088 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 09:43:19.919988 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 09:43:19.920637 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 09:43:19.923082 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 15 09:43:19.928977 augenrules[1352]: No rules
May 15 09:43:19.930300 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 09:43:19.931328 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 15 09:43:19.946494 systemd[1]: Finished ensure-sysext.service.
May 15 09:43:19.954412 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 15 09:43:19.966974 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 09:43:19.967790 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 09:43:19.970781 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 09:43:19.974786 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 09:43:19.978734 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 09:43:19.981478 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 09:43:19.983371 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 09:43:19.985817 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 09:43:19.991763 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 15 09:43:19.992555 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 09:43:19.993066 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 15 09:43:19.994141 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 09:43:19.994273 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 09:43:19.995355 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 09:43:19.996170 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 09:43:20.009945 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 09:43:20.010099 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 09:43:20.011120 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 09:43:20.019511 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 09:43:20.019709 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 09:43:20.021335 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 09:43:20.027384 augenrules[1380]: /sbin/augenrules: No change
May 15 09:43:20.027207 systemd-resolved[1309]: Positive Trust Anchors:
May 15 09:43:20.027277 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 09:43:20.027313 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 09:43:20.037031 systemd-resolved[1309]: Defaulting to hostname 'linux'.
May 15 09:43:20.040717 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 09:43:20.042787 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1370)
May 15 09:43:20.043723 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 09:43:20.047075 augenrules[1413]: No rules
May 15 09:43:20.048195 systemd-networkd[1387]: lo: Link UP
May 15 09:43:20.048198 systemd-networkd[1387]: lo: Gained carrier
May 15 09:43:20.048748 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 09:43:20.049510 systemd-networkd[1387]: Enumeration completed
May 15 09:43:20.049746 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 15 09:43:20.050914 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 09:43:20.052050 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 09:43:20.052054 systemd-networkd[1387]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 09:43:20.053123 systemd-networkd[1387]: eth0: Link UP
May 15 09:43:20.053187 systemd-networkd[1387]: eth0: Gained carrier
May 15 09:43:20.053236 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 09:43:20.053931 systemd[1]: Reached target network.target - Network.
May 15 09:43:20.060827 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 15 09:43:20.067902 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 15 09:43:20.071456 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 15 09:43:20.075778 systemd-networkd[1387]: eth0: DHCPv4 address 10.0.0.135/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 15 09:43:20.076564 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 09:43:20.089208 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 15 09:43:20.091793 systemd-timesyncd[1388]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 15 09:43:20.091841 systemd-timesyncd[1388]: Initial clock synchronization to Thu 2025-05-15 09:43:20.142486 UTC.
May 15 09:43:20.093062 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 15 09:43:20.097175 systemd[1]: Reached target time-set.target - System Time Set.
May 15 09:43:20.117724 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 09:43:20.121113 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 15 09:43:20.124436 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 15 09:43:20.158162 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 15 09:43:20.183314 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 09:43:20.191774 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 15 09:43:20.192860 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 09:43:20.193709 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 09:43:20.194536 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 15 09:43:20.195471 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 15 09:43:20.196567 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 15 09:43:20.197435 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 15 09:43:20.198379 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 15 09:43:20.199311 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 15 09:43:20.199345 systemd[1]: Reached target paths.target - Path Units.
May 15 09:43:20.200013 systemd[1]: Reached target timers.target - Timer Units.
May 15 09:43:20.201779 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 15 09:43:20.203793 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 15 09:43:20.211429 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 15 09:43:20.213356 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 15 09:43:20.214687 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 15 09:43:20.215544 systemd[1]: Reached target sockets.target - Socket Units.
May 15 09:43:20.216242 systemd[1]: Reached target basic.target - Basic System.
May 15 09:43:20.216953 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 09:43:20.216982 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 09:43:20.217846 systemd[1]: Starting containerd.service - containerd container runtime... May 15 09:43:20.219487 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 09:43:20.222629 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 09:43:20.222491 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 09:43:20.224802 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 09:43:20.228688 jq[1442]: false May 15 09:43:20.228147 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 09:43:20.229115 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 09:43:20.232732 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 09:43:20.235812 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 15 09:43:20.239304 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 15 09:43:20.242803 systemd[1]: Starting systemd-logind.service - User Login Management... 
May 15 09:43:20.243063 extend-filesystems[1443]: Found loop3 May 15 09:43:20.244657 extend-filesystems[1443]: Found loop4 May 15 09:43:20.244657 extend-filesystems[1443]: Found loop5 May 15 09:43:20.244657 extend-filesystems[1443]: Found vda May 15 09:43:20.244657 extend-filesystems[1443]: Found vda1 May 15 09:43:20.244657 extend-filesystems[1443]: Found vda2 May 15 09:43:20.244657 extend-filesystems[1443]: Found vda3 May 15 09:43:20.244657 extend-filesystems[1443]: Found usr May 15 09:43:20.244657 extend-filesystems[1443]: Found vda4 May 15 09:43:20.244657 extend-filesystems[1443]: Found vda6 May 15 09:43:20.244657 extend-filesystems[1443]: Found vda7 May 15 09:43:20.244657 extend-filesystems[1443]: Found vda9 May 15 09:43:20.244657 extend-filesystems[1443]: Checking size of /dev/vda9 May 15 09:43:20.247894 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 09:43:20.248316 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 09:43:20.251770 systemd[1]: Starting update-engine.service - Update Engine... May 15 09:43:20.253491 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 09:43:20.255861 dbus-daemon[1441]: [system] SELinux support is enabled May 15 09:43:20.257644 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 15 09:43:20.257913 extend-filesystems[1443]: Resized partition /dev/vda9 May 15 09:43:20.258719 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 09:43:20.260617 jq[1459]: true May 15 09:43:20.264792 extend-filesystems[1462]: resize2fs 1.47.1 (20-May-2024) May 15 09:43:20.265079 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
May 15 09:43:20.265243 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 09:43:20.265489 systemd[1]: motdgen.service: Deactivated successfully. May 15 09:43:20.265649 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 15 09:43:20.268596 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 15 09:43:20.285641 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1351) May 15 09:43:20.287141 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 09:43:20.287730 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 15 09:43:20.287314 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 09:43:20.302432 (ntainerd)[1470]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 09:43:20.303413 jq[1468]: true May 15 09:43:20.309201 extend-filesystems[1462]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 15 09:43:20.309201 extend-filesystems[1462]: old_desc_blocks = 1, new_desc_blocks = 1 May 15 09:43:20.309201 extend-filesystems[1462]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 15 09:43:20.323901 extend-filesystems[1443]: Resized filesystem in /dev/vda9 May 15 09:43:20.309965 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 09:43:20.326720 update_engine[1457]: I20250515 09:43:20.324683 1457 main.cc:92] Flatcar Update Engine starting May 15 09:43:20.310141 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 09:43:20.327652 update_engine[1457]: I20250515 09:43:20.327318 1457 update_check_scheduler.cc:74] Next update check in 4m3s May 15 09:43:20.317987 systemd-logind[1451]: Watching system buttons on /dev/input/event0 (Power Button) May 15 09:43:20.321696 systemd-logind[1451]: New seat seat0. 
May 15 09:43:20.325808 systemd[1]: Started systemd-logind.service - User Login Management. May 15 09:43:20.328528 systemd[1]: Started update-engine.service - Update Engine. May 15 09:43:20.330019 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 09:43:20.330157 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 09:43:20.331868 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 09:43:20.331971 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 09:43:20.335634 tar[1466]: linux-arm64/helm May 15 09:43:20.337833 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 15 09:43:20.358060 bash[1498]: Updated "/home/core/.ssh/authorized_keys" May 15 09:43:20.363949 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 09:43:20.365542 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 15 09:43:20.393468 locksmithd[1494]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 09:43:20.507186 containerd[1470]: time="2025-05-15T09:43:20.507107720Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 15 09:43:20.550724 containerd[1470]: time="2025-05-15T09:43:20.550657640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 15 09:43:20.552131 containerd[1470]: time="2025-05-15T09:43:20.552096280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 15 09:43:20.553249 containerd[1470]: time="2025-05-15T09:43:20.552191120Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 15 09:43:20.553249 containerd[1470]: time="2025-05-15T09:43:20.552214960Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 15 09:43:20.553249 containerd[1470]: time="2025-05-15T09:43:20.552383040Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 15 09:43:20.553249 containerd[1470]: time="2025-05-15T09:43:20.552400760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 15 09:43:20.553249 containerd[1470]: time="2025-05-15T09:43:20.552454800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 15 09:43:20.553249 containerd[1470]: time="2025-05-15T09:43:20.552466720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 15 09:43:20.553249 containerd[1470]: time="2025-05-15T09:43:20.552638760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 09:43:20.553249 containerd[1470]: time="2025-05-15T09:43:20.552654560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 May 15 09:43:20.553249 containerd[1470]: time="2025-05-15T09:43:20.552667800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 15 09:43:20.553249 containerd[1470]: time="2025-05-15T09:43:20.552677040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 15 09:43:20.553249 containerd[1470]: time="2025-05-15T09:43:20.552759720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 15 09:43:20.553249 containerd[1470]: time="2025-05-15T09:43:20.552963360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 15 09:43:20.553474 containerd[1470]: time="2025-05-15T09:43:20.553055240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 09:43:20.553474 containerd[1470]: time="2025-05-15T09:43:20.553069360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 15 09:43:20.553474 containerd[1470]: time="2025-05-15T09:43:20.553136960Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 15 09:43:20.553474 containerd[1470]: time="2025-05-15T09:43:20.553176360Z" level=info msg="metadata content store policy set" policy=shared May 15 09:43:20.567340 containerd[1470]: time="2025-05-15T09:43:20.567300520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 15 09:43:20.567567 containerd[1470]: time="2025-05-15T09:43:20.567541800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 May 15 09:43:20.567663 containerd[1470]: time="2025-05-15T09:43:20.567649000Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 15 09:43:20.567724 containerd[1470]: time="2025-05-15T09:43:20.567712400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 15 09:43:20.567825 containerd[1470]: time="2025-05-15T09:43:20.567809880Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 15 09:43:20.568174 containerd[1470]: time="2025-05-15T09:43:20.568154280Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 15 09:43:20.568532 containerd[1470]: time="2025-05-15T09:43:20.568502400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 15 09:43:20.568679 containerd[1470]: time="2025-05-15T09:43:20.568657520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 15 09:43:20.568713 containerd[1470]: time="2025-05-15T09:43:20.568680560Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 15 09:43:20.568713 containerd[1470]: time="2025-05-15T09:43:20.568697400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 15 09:43:20.568768 containerd[1470]: time="2025-05-15T09:43:20.568712400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 15 09:43:20.568768 containerd[1470]: time="2025-05-15T09:43:20.568726680Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 May 15 09:43:20.568768 containerd[1470]: time="2025-05-15T09:43:20.568739360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 15 09:43:20.568768 containerd[1470]: time="2025-05-15T09:43:20.568761680Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 15 09:43:20.568852 containerd[1470]: time="2025-05-15T09:43:20.568778280Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 15 09:43:20.568852 containerd[1470]: time="2025-05-15T09:43:20.568791480Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 15 09:43:20.568852 containerd[1470]: time="2025-05-15T09:43:20.568803960Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 15 09:43:20.568852 containerd[1470]: time="2025-05-15T09:43:20.568815200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 15 09:43:20.568852 containerd[1470]: time="2025-05-15T09:43:20.568838760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 15 09:43:20.568852 containerd[1470]: time="2025-05-15T09:43:20.568852240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 15 09:43:20.568951 containerd[1470]: time="2025-05-15T09:43:20.568865160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 15 09:43:20.568951 containerd[1470]: time="2025-05-15T09:43:20.568878160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 May 15 09:43:20.568951 containerd[1470]: time="2025-05-15T09:43:20.568890680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 15 09:43:20.568951 containerd[1470]: time="2025-05-15T09:43:20.568903920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 15 09:43:20.568951 containerd[1470]: time="2025-05-15T09:43:20.568915360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 15 09:43:20.568951 containerd[1470]: time="2025-05-15T09:43:20.568927480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 15 09:43:20.568951 containerd[1470]: time="2025-05-15T09:43:20.568940400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 15 09:43:20.569072 containerd[1470]: time="2025-05-15T09:43:20.568954960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 15 09:43:20.569072 containerd[1470]: time="2025-05-15T09:43:20.568966920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 15 09:43:20.569072 containerd[1470]: time="2025-05-15T09:43:20.568979520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 15 09:43:20.569072 containerd[1470]: time="2025-05-15T09:43:20.568992000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 15 09:43:20.569072 containerd[1470]: time="2025-05-15T09:43:20.569007480Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 15 09:43:20.569072 containerd[1470]: time="2025-05-15T09:43:20.569027960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 May 15 09:43:20.569072 containerd[1470]: time="2025-05-15T09:43:20.569041240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 15 09:43:20.569072 containerd[1470]: time="2025-05-15T09:43:20.569060640Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 15 09:43:20.570123 containerd[1470]: time="2025-05-15T09:43:20.569992640Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 15 09:43:20.570123 containerd[1470]: time="2025-05-15T09:43:20.570029560Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 15 09:43:20.570123 containerd[1470]: time="2025-05-15T09:43:20.570041720Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 15 09:43:20.570123 containerd[1470]: time="2025-05-15T09:43:20.570053520Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 15 09:43:20.570123 containerd[1470]: time="2025-05-15T09:43:20.570063240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 15 09:43:20.570123 containerd[1470]: time="2025-05-15T09:43:20.570082400Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 15 09:43:20.570123 containerd[1470]: time="2025-05-15T09:43:20.570092720Z" level=info msg="NRI interface is disabled by configuration." May 15 09:43:20.570123 containerd[1470]: time="2025-05-15T09:43:20.570102880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 15 09:43:20.570556 containerd[1470]: time="2025-05-15T09:43:20.570485880Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 15 09:43:20.570556 containerd[1470]: time="2025-05-15T09:43:20.570537800Z" level=info msg="Connect containerd service" May 15 09:43:20.570800 containerd[1470]: time="2025-05-15T09:43:20.570596280Z" level=info msg="using legacy CRI server" May 15 09:43:20.570800 containerd[1470]: time="2025-05-15T09:43:20.570605080Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 09:43:20.570995 containerd[1470]: time="2025-05-15T09:43:20.570976280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 15 09:43:20.571766 containerd[1470]: time="2025-05-15T09:43:20.571726880Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 09:43:20.572161 containerd[1470]: time="2025-05-15T09:43:20.572079800Z" level=info msg="Start subscribing containerd event" May 15 09:43:20.572312 containerd[1470]: time="2025-05-15T09:43:20.572232080Z" level=info msg="Start recovering state" May 15 09:43:20.572312 containerd[1470]: time="2025-05-15T09:43:20.572272360Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc May 15 09:43:20.572312 containerd[1470]: time="2025-05-15T09:43:20.572310440Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 09:43:20.572995 containerd[1470]: time="2025-05-15T09:43:20.572672720Z" level=info msg="Start event monitor" May 15 09:43:20.572995 containerd[1470]: time="2025-05-15T09:43:20.572695360Z" level=info msg="Start snapshots syncer" May 15 09:43:20.572995 containerd[1470]: time="2025-05-15T09:43:20.572705280Z" level=info msg="Start cni network conf syncer for default" May 15 09:43:20.572995 containerd[1470]: time="2025-05-15T09:43:20.572713320Z" level=info msg="Start streaming server" May 15 09:43:20.572995 containerd[1470]: time="2025-05-15T09:43:20.572862800Z" level=info msg="containerd successfully booted in 0.066563s" May 15 09:43:20.572958 systemd[1]: Started containerd.service - containerd container runtime. May 15 09:43:20.658693 tar[1466]: linux-arm64/LICENSE May 15 09:43:20.659206 tar[1466]: linux-arm64/README.md May 15 09:43:20.671682 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 09:43:20.908068 sshd_keygen[1465]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 09:43:20.928627 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 09:43:20.937851 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 09:43:20.943367 systemd[1]: issuegen.service: Deactivated successfully. May 15 09:43:20.944627 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 09:43:20.947022 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 09:43:20.958630 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 09:43:20.967893 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 09:43:20.969912 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. 
May 15 09:43:20.970907 systemd[1]: Reached target getty.target - Login Prompts. May 15 09:43:21.742895 systemd-networkd[1387]: eth0: Gained IPv6LL May 15 09:43:21.747653 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 09:43:21.749189 systemd[1]: Reached target network-online.target - Network is Online. May 15 09:43:21.758789 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 15 09:43:21.760985 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 09:43:21.762844 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 09:43:21.777200 systemd[1]: coreos-metadata.service: Deactivated successfully. May 15 09:43:21.777372 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 15 09:43:21.780203 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 09:43:21.782312 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 15 09:43:22.246979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 09:43:22.248179 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 09:43:22.249051 systemd[1]: Startup finished in 527ms (kernel) + 4.926s (initrd) + 3.659s (userspace) = 9.113s. 
May 15 09:43:22.250458 (kubelet)[1556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 09:43:22.664205 kubelet[1556]: E0515 09:43:22.664154 1556 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 09:43:22.666618 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 09:43:22.666773 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 09:43:26.723328 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 09:43:26.724407 systemd[1]: Started sshd@0-10.0.0.135:22-10.0.0.1:59126.service - OpenSSH per-connection server daemon (10.0.0.1:59126). May 15 09:43:26.783249 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 59126 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:43:26.785360 sshd-session[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:43:26.796069 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 09:43:26.808954 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 09:43:26.811131 systemd-logind[1451]: New session 1 of user core. May 15 09:43:26.819368 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 09:43:26.821871 systemd[1]: Starting user@500.service - User Manager for UID 500... May 15 09:43:26.829985 (systemd)[1573]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 09:43:26.899514 systemd[1573]: Queued start job for default target default.target. 
May 15 09:43:26.910480 systemd[1573]: Created slice app.slice - User Application Slice. May 15 09:43:26.910509 systemd[1573]: Reached target paths.target - Paths. May 15 09:43:26.910521 systemd[1573]: Reached target timers.target - Timers. May 15 09:43:26.911745 systemd[1573]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 09:43:26.921615 systemd[1573]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 09:43:26.921681 systemd[1573]: Reached target sockets.target - Sockets. May 15 09:43:26.921693 systemd[1573]: Reached target basic.target - Basic System. May 15 09:43:26.921730 systemd[1573]: Reached target default.target - Main User Target. May 15 09:43:26.921755 systemd[1573]: Startup finished in 86ms. May 15 09:43:26.922057 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 09:43:26.923394 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 09:43:26.983133 systemd[1]: Started sshd@1-10.0.0.135:22-10.0.0.1:59140.service - OpenSSH per-connection server daemon (10.0.0.1:59140). May 15 09:43:27.032838 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 59140 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:43:27.034117 sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:43:27.037988 systemd-logind[1451]: New session 2 of user core. May 15 09:43:27.049819 systemd[1]: Started session-2.scope - Session 2 of User core. May 15 09:43:27.101678 sshd[1586]: Connection closed by 10.0.0.1 port 59140 May 15 09:43:27.102012 sshd-session[1584]: pam_unix(sshd:session): session closed for user core May 15 09:43:27.111086 systemd[1]: sshd@1-10.0.0.135:22-10.0.0.1:59140.service: Deactivated successfully. May 15 09:43:27.114135 systemd[1]: session-2.scope: Deactivated successfully. May 15 09:43:27.115547 systemd-logind[1451]: Session 2 logged out. Waiting for processes to exit. 
May 15 09:43:27.130947 systemd[1]: Started sshd@2-10.0.0.135:22-10.0.0.1:59152.service - OpenSSH per-connection server daemon (10.0.0.1:59152). May 15 09:43:27.131836 systemd-logind[1451]: Removed session 2. May 15 09:43:27.172905 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 59152 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:43:27.174096 sshd-session[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:43:27.178073 systemd-logind[1451]: New session 3 of user core. May 15 09:43:27.183716 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 09:43:27.231934 sshd[1593]: Connection closed by 10.0.0.1 port 59152 May 15 09:43:27.232408 sshd-session[1591]: pam_unix(sshd:session): session closed for user core May 15 09:43:27.245057 systemd[1]: sshd@2-10.0.0.135:22-10.0.0.1:59152.service: Deactivated successfully. May 15 09:43:27.246642 systemd[1]: session-3.scope: Deactivated successfully. May 15 09:43:27.248268 systemd-logind[1451]: Session 3 logged out. Waiting for processes to exit. May 15 09:43:27.249465 systemd[1]: Started sshd@3-10.0.0.135:22-10.0.0.1:59162.service - OpenSSH per-connection server daemon (10.0.0.1:59162). May 15 09:43:27.250806 systemd-logind[1451]: Removed session 3. May 15 09:43:27.296372 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 59162 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:43:27.297713 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:43:27.301639 systemd-logind[1451]: New session 4 of user core. May 15 09:43:27.310741 systemd[1]: Started session-4.scope - Session 4 of User core. 
May 15 09:43:27.362113 sshd[1600]: Connection closed by 10.0.0.1 port 59162
May 15 09:43:27.362469 sshd-session[1598]: pam_unix(sshd:session): session closed for user core
May 15 09:43:27.371899 systemd[1]: sshd@3-10.0.0.135:22-10.0.0.1:59162.service: Deactivated successfully.
May 15 09:43:27.373831 systemd[1]: session-4.scope: Deactivated successfully.
May 15 09:43:27.374994 systemd-logind[1451]: Session 4 logged out. Waiting for processes to exit.
May 15 09:43:27.385827 systemd[1]: Started sshd@4-10.0.0.135:22-10.0.0.1:59172.service - OpenSSH per-connection server daemon (10.0.0.1:59172).
May 15 09:43:27.386653 systemd-logind[1451]: Removed session 4.
May 15 09:43:27.427727 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 59172 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:43:27.429352 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:43:27.433413 systemd-logind[1451]: New session 5 of user core.
May 15 09:43:27.454742 systemd[1]: Started session-5.scope - Session 5 of User core.
May 15 09:43:27.515358 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 15 09:43:27.515660 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 09:43:27.532532 sudo[1608]: pam_unix(sudo:session): session closed for user root
May 15 09:43:27.534096 sshd[1607]: Connection closed by 10.0.0.1 port 59172
May 15 09:43:27.534611 sshd-session[1605]: pam_unix(sshd:session): session closed for user core
May 15 09:43:27.546172 systemd[1]: sshd@4-10.0.0.135:22-10.0.0.1:59172.service: Deactivated successfully.
May 15 09:43:27.547546 systemd[1]: session-5.scope: Deactivated successfully.
May 15 09:43:27.549868 systemd-logind[1451]: Session 5 logged out. Waiting for processes to exit.
May 15 09:43:27.551318 systemd-logind[1451]: Removed session 5.
May 15 09:43:27.557114 systemd[1]: Started sshd@5-10.0.0.135:22-10.0.0.1:59178.service - OpenSSH per-connection server daemon (10.0.0.1:59178).
May 15 09:43:27.597421 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 59178 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:43:27.598749 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:43:27.603815 systemd-logind[1451]: New session 6 of user core.
May 15 09:43:27.613769 systemd[1]: Started session-6.scope - Session 6 of User core.
May 15 09:43:27.665911 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 15 09:43:27.666476 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 09:43:27.669757 sudo[1617]: pam_unix(sudo:session): session closed for user root
May 15 09:43:27.675206 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 15 09:43:27.675480 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 09:43:27.694952 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 09:43:27.716933 augenrules[1639]: No rules
May 15 09:43:27.718200 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 09:43:27.718375 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 15 09:43:27.721070 sudo[1616]: pam_unix(sudo:session): session closed for user root
May 15 09:43:27.722222 sshd[1615]: Connection closed by 10.0.0.1 port 59178
May 15 09:43:27.722627 sshd-session[1613]: pam_unix(sshd:session): session closed for user core
May 15 09:43:27.732983 systemd[1]: sshd@5-10.0.0.135:22-10.0.0.1:59178.service: Deactivated successfully.
May 15 09:43:27.736052 systemd[1]: session-6.scope: Deactivated successfully.
May 15 09:43:27.737279 systemd-logind[1451]: Session 6 logged out. Waiting for processes to exit.
May 15 09:43:27.744907 systemd[1]: Started sshd@6-10.0.0.135:22-10.0.0.1:59180.service - OpenSSH per-connection server daemon (10.0.0.1:59180).
May 15 09:43:27.745867 systemd-logind[1451]: Removed session 6.
May 15 09:43:27.785883 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 59180 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:43:27.787063 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:43:27.790636 systemd-logind[1451]: New session 7 of user core.
May 15 09:43:27.801814 systemd[1]: Started session-7.scope - Session 7 of User core.
May 15 09:43:27.852614 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 15 09:43:27.853183 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 09:43:28.176796 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 15 09:43:28.177037 (dockerd)[1671]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 15 09:43:28.425580 dockerd[1671]: time="2025-05-15T09:43:28.425485721Z" level=info msg="Starting up"
May 15 09:43:28.570659 dockerd[1671]: time="2025-05-15T09:43:28.570518308Z" level=info msg="Loading containers: start."
May 15 09:43:28.727942 kernel: Initializing XFRM netlink socket
May 15 09:43:28.802384 systemd-networkd[1387]: docker0: Link UP
May 15 09:43:28.836010 dockerd[1671]: time="2025-05-15T09:43:28.835886552Z" level=info msg="Loading containers: done."
May 15 09:43:28.847902 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2821460549-merged.mount: Deactivated successfully.
May 15 09:43:28.849773 dockerd[1671]: time="2025-05-15T09:43:28.849733222Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 15 09:43:28.849844 dockerd[1671]: time="2025-05-15T09:43:28.849826414Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
May 15 09:43:28.849939 dockerd[1671]: time="2025-05-15T09:43:28.849920929Z" level=info msg="Daemon has completed initialization"
May 15 09:43:28.875493 dockerd[1671]: time="2025-05-15T09:43:28.875391382Z" level=info msg="API listen on /run/docker.sock"
May 15 09:43:28.875598 systemd[1]: Started docker.service - Docker Application Container Engine.
May 15 09:43:29.506819 containerd[1470]: time="2025-05-15T09:43:29.506764958Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
May 15 09:43:30.176726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount113675875.mount: Deactivated successfully.
May 15 09:43:31.527136 containerd[1470]: time="2025-05-15T09:43:31.527077274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 09:43:31.527615 containerd[1470]: time="2025-05-15T09:43:31.527587325Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554610"
May 15 09:43:31.528256 containerd[1470]: time="2025-05-15T09:43:31.528228674Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 09:43:31.531264 containerd[1470]: time="2025-05-15T09:43:31.531207440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 09:43:31.532416 containerd[1470]: time="2025-05-15T09:43:31.532386686Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 2.025579481s"
May 15 09:43:31.532471 containerd[1470]: time="2025-05-15T09:43:31.532420342Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\""
May 15 09:43:31.533217 containerd[1470]: time="2025-05-15T09:43:31.533150760Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
May 15 09:43:32.862927 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 15 09:43:32.874783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 09:43:32.965383 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 09:43:32.968785 (kubelet)[1934]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 09:43:32.996766 containerd[1470]: time="2025-05-15T09:43:32.996689375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 09:43:32.997860 containerd[1470]: time="2025-05-15T09:43:32.997601305Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458980"
May 15 09:43:32.998589 containerd[1470]: time="2025-05-15T09:43:32.998520566Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 09:43:33.002540 containerd[1470]: time="2025-05-15T09:43:33.002507752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 09:43:33.004367 containerd[1470]: time="2025-05-15T09:43:33.004338890Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.471160207s"
May 15 09:43:33.004497 containerd[1470]: time="2025-05-15T09:43:33.004448270Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\""
May 15 09:43:33.005049 containerd[1470]: time="2025-05-15T09:43:33.005026969Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
May 15 09:43:33.010404 kubelet[1934]: E0515 09:43:33.010370 1934 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 09:43:33.013653 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 09:43:33.013789 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 09:43:34.399333 containerd[1470]: time="2025-05-15T09:43:34.398583472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 09:43:34.399333 containerd[1470]: time="2025-05-15T09:43:34.399318694Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125815"
May 15 09:43:34.399914 containerd[1470]: time="2025-05-15T09:43:34.399884766Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 09:43:34.403278 containerd[1470]: time="2025-05-15T09:43:34.403243720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 09:43:34.404493 containerd[1470]: time="2025-05-15T09:43:34.404332937Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.399276292s"
May 15 09:43:34.404493 containerd[1470]: time="2025-05-15T09:43:34.404364773Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\""
May 15 09:43:34.404964 containerd[1470]: time="2025-05-15T09:43:34.404897969Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
May 15 09:43:35.427700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2079084703.mount: Deactivated successfully.
May 15 09:43:35.658542 containerd[1470]: time="2025-05-15T09:43:35.658493356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 09:43:35.659351 containerd[1470]: time="2025-05-15T09:43:35.658970622Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871919"
May 15 09:43:35.659817 containerd[1470]: time="2025-05-15T09:43:35.659787701Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 09:43:35.661897 containerd[1470]: time="2025-05-15T09:43:35.661853401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 09:43:35.662654 containerd[1470]: time="2025-05-15T09:43:35.662634205Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.257704001s"
May 15 09:43:35.662836 containerd[1470]: time="2025-05-15T09:43:35.662729338Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\""
May 15 09:43:35.663315 containerd[1470]: time="2025-05-15T09:43:35.663270387Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 15 09:43:36.222483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1066521102.mount: Deactivated successfully.
May 15 09:43:37.085936 containerd[1470]: time="2025-05-15T09:43:37.085885446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 09:43:37.087962 containerd[1470]: time="2025-05-15T09:43:37.087846515Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
May 15 09:43:37.089150 containerd[1470]: time="2025-05-15T09:43:37.089051057Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 09:43:37.092152 containerd[1470]: time="2025-05-15T09:43:37.092102782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 09:43:37.093384 containerd[1470]: time="2025-05-15T09:43:37.093353719Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.430045057s"
May 15 09:43:37.093460 containerd[1470]: time="2025-05-15T09:43:37.093387264Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 15 09:43:37.093970 containerd[1470]: time="2025-05-15T09:43:37.093811342Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 15 09:43:37.539265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount796524800.mount: Deactivated successfully.
May 15 09:43:37.542915 containerd[1470]: time="2025-05-15T09:43:37.542869799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 09:43:37.543531 containerd[1470]: time="2025-05-15T09:43:37.543484660Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
May 15 09:43:37.544221 containerd[1470]: time="2025-05-15T09:43:37.544185785Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 09:43:37.546651 containerd[1470]: time="2025-05-15T09:43:37.546620889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 09:43:37.547445 containerd[1470]: time="2025-05-15T09:43:37.547412562Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 453.571958ms"
May 15 09:43:37.547445 containerd[1470]: time="2025-05-15T09:43:37.547438621Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 15 09:43:37.548044 containerd[1470]: time="2025-05-15T09:43:37.547871785Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 15 09:43:38.070113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1833059033.mount: Deactivated successfully.
May 15 09:43:40.775220 containerd[1470]: time="2025-05-15T09:43:40.775145496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 09:43:40.775746 containerd[1470]: time="2025-05-15T09:43:40.775698013Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467"
May 15 09:43:40.776654 containerd[1470]: time="2025-05-15T09:43:40.776614913Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 09:43:40.780495 containerd[1470]: time="2025-05-15T09:43:40.780441994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 09:43:40.781342 containerd[1470]: time="2025-05-15T09:43:40.781285217Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.23338105s"
May 15 09:43:40.781342 containerd[1470]: time="2025-05-15T09:43:40.781319554Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
May 15 09:43:43.112920 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 15 09:43:43.120855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 09:43:43.243767 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 09:43:43.248443 (kubelet)[2087]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 09:43:43.283131 kubelet[2087]: E0515 09:43:43.283081 2087 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 09:43:43.285720 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 09:43:43.285880 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 09:43:45.902940 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 09:43:45.914928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 09:43:45.942540 systemd[1]: Reloading requested from client PID 2104 ('systemctl') (unit session-7.scope)...
May 15 09:43:45.942562 systemd[1]: Reloading...
May 15 09:43:46.008655 zram_generator::config[2143]: No configuration found.
May 15 09:43:46.240250 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 09:43:46.293519 systemd[1]: Reloading finished in 350 ms.
May 15 09:43:46.333289 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 15 09:43:46.333383 systemd[1]: kubelet.service: Failed with result 'signal'.
May 15 09:43:46.334685 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 09:43:46.336441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 09:43:46.433961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 09:43:46.438515 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 15 09:43:46.479747 kubelet[2189]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 09:43:46.479747 kubelet[2189]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 15 09:43:46.479747 kubelet[2189]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 09:43:46.480084 kubelet[2189]: I0515 09:43:46.479878 2189 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 09:43:47.252155 kubelet[2189]: I0515 09:43:47.252101 2189 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 15 09:43:47.252155 kubelet[2189]: I0515 09:43:47.252138 2189 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 09:43:47.252447 kubelet[2189]: I0515 09:43:47.252418 2189 server.go:929] "Client rotation is on, will bootstrap in background"
May 15 09:43:47.282309 kubelet[2189]: E0515 09:43:47.282268 2189 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.135:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError"
May 15 09:43:47.284486 kubelet[2189]: I0515 09:43:47.284299 2189 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 09:43:47.296214 kubelet[2189]: E0515 09:43:47.296153 2189 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 15 09:43:47.296214 kubelet[2189]: I0515 09:43:47.296192 2189 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 15 09:43:47.299575 kubelet[2189]: I0515 09:43:47.299543 2189 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 09:43:47.300412 kubelet[2189]: I0515 09:43:47.300379 2189 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 15 09:43:47.300586 kubelet[2189]: I0515 09:43:47.300532 2189 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 09:43:47.300774 kubelet[2189]: I0515 09:43:47.300563 2189 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 15 09:43:47.300913 kubelet[2189]: I0515 09:43:47.300899 2189 topology_manager.go:138] "Creating topology manager with none policy"
May 15 09:43:47.300913 kubelet[2189]: I0515 09:43:47.300912 2189 container_manager_linux.go:300] "Creating device plugin manager"
May 15 09:43:47.301126 kubelet[2189]: I0515 09:43:47.301098 2189 state_mem.go:36] "Initialized new in-memory state store"
May 15 09:43:47.302823 kubelet[2189]: I0515 09:43:47.302786 2189 kubelet.go:408] "Attempting to sync node with API server"
May 15 09:43:47.302856 kubelet[2189]: I0515 09:43:47.302828 2189 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 09:43:47.302980 kubelet[2189]: I0515 09:43:47.302960 2189 kubelet.go:314] "Adding apiserver pod source"
May 15 09:43:47.302980 kubelet[2189]: I0515 09:43:47.302976 2189 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 09:43:47.305189 kubelet[2189]: I0515 09:43:47.305157 2189 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 15 09:43:47.305592 kubelet[2189]: W0515 09:43:47.305304 2189 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
May 15 09:43:47.305592 kubelet[2189]: E0515 09:43:47.305373 2189 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError"
May 15 09:43:47.305592 kubelet[2189]: W0515 09:43:47.305508 2189 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
May 15 09:43:47.305592 kubelet[2189]: E0515 09:43:47.305555 2189 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError"
May 15 09:43:47.307284 kubelet[2189]: I0515 09:43:47.307254 2189 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 15 09:43:47.308115 kubelet[2189]: W0515 09:43:47.308092 2189 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 15 09:43:47.309008 kubelet[2189]: I0515 09:43:47.308985 2189 server.go:1269] "Started kubelet"
May 15 09:43:47.309281 kubelet[2189]: I0515 09:43:47.309222 2189 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 15 09:43:47.311600 kubelet[2189]: I0515 09:43:47.310935 2189 server.go:460] "Adding debug handlers to kubelet server"
May 15 09:43:47.313226 kubelet[2189]: I0515 09:43:47.313092 2189 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 09:43:47.315841 kubelet[2189]: E0515 09:43:47.315806 2189 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 09:43:47.316256 kubelet[2189]: I0515 09:43:47.316218 2189 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 15 09:43:47.316362 kubelet[2189]: I0515 09:43:47.316239 2189 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 09:43:47.316616 kubelet[2189]: I0515 09:43:47.316599 2189 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 09:43:47.317926 kubelet[2189]: E0515 09:43:47.317886 2189 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 09:43:47.318435 kubelet[2189]: I0515 09:43:47.318417 2189 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 15 09:43:47.318545 kubelet[2189]: E0515 09:43:47.318490 2189 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="200ms"
May 15 09:43:47.318715 kubelet[2189]: I0515 09:43:47.318700 2189 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 15 09:43:47.318858 kubelet[2189]: I0515 09:43:47.318844 2189 reconciler.go:26] "Reconciler: start to sync state"
May 15 09:43:47.319244 kubelet[2189]: W0515 09:43:47.319201 2189 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
May 15 09:43:47.319354 kubelet[2189]: E0515 09:43:47.319336 2189 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError"
May 15 09:43:47.323161 kubelet[2189]: I0515 09:43:47.323135 2189 factory.go:221] Registration of the systemd container factory successfully
May 15 09:43:47.323258 kubelet[2189]: I0515 09:43:47.323235 2189 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 09:43:47.324590 kubelet[2189]: E0515 09:43:47.323486 2189 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.135:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.135:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183faa1e3dffdd47 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 09:43:47.308961095 +0000 UTC m=+0.867408167,LastTimestamp:2025-05-15 09:43:47.308961095 +0000 UTC m=+0.867408167,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 15 09:43:47.325508 kubelet[2189]: I0515 09:43:47.325483 2189 factory.go:221] Registration of the containerd container factory successfully
May 15 09:43:47.330773 kubelet[2189]: I0515 09:43:47.330726 2189 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 09:43:47.332441 kubelet[2189]: I0515 09:43:47.331760 2189 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 09:43:47.332441 kubelet[2189]: I0515 09:43:47.331789 2189 status_manager.go:217] "Starting to sync pod status with apiserver"
May 15 09:43:47.332441 kubelet[2189]: I0515 09:43:47.331806 2189 kubelet.go:2321] "Starting kubelet main sync loop"
May 15 09:43:47.332441 kubelet[2189]: E0515 09:43:47.331875 2189 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 09:43:47.332441 kubelet[2189]: W0515 09:43:47.332350 2189 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
May 15 09:43:47.332441 kubelet[2189]: E0515 09:43:47.332382 2189 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError"
May 15 09:43:47.337547 kubelet[2189]: I0515 09:43:47.337522 2189 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 15 09:43:47.337547 kubelet[2189]: I0515 09:43:47.337542 2189 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 15 09:43:47.337679 kubelet[2189]: I0515 09:43:47.337582 2189 state_mem.go:36] "Initialized new in-memory state store"
May 15 09:43:47.400065 kubelet[2189]: I0515 09:43:47.400039 2189 policy_none.go:49] "None policy: Start"
May 15 09:43:47.401251 kubelet[2189]: I0515 09:43:47.400905 2189 memory_manager.go:170] "Starting memorymanager" policy="None"
May 15 09:43:47.401251 kubelet[2189]: I0515 09:43:47.400937 2189 state_mem.go:35] "Initializing new in-memory state store"
May 15 09:43:47.406044 systemd[1]: Created slice kubepods.slice - libcontainer 
container kubepods.slice. May 15 09:43:47.418016 kubelet[2189]: E0515 09:43:47.417980 2189 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 09:43:47.418453 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 15 09:43:47.421767 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 15 09:43:47.432806 kubelet[2189]: E0515 09:43:47.432756 2189 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 09:43:47.436517 kubelet[2189]: I0515 09:43:47.436479 2189 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 09:43:47.436743 kubelet[2189]: I0515 09:43:47.436719 2189 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 09:43:47.436787 kubelet[2189]: I0515 09:43:47.436739 2189 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 09:43:47.437230 kubelet[2189]: I0515 09:43:47.437010 2189 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 09:43:47.438245 kubelet[2189]: E0515 09:43:47.438230 2189 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 15 09:43:47.519903 kubelet[2189]: E0515 09:43:47.519754 2189 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="400ms" May 15 09:43:47.539037 kubelet[2189]: I0515 09:43:47.538999 2189 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 09:43:47.539532 kubelet[2189]: E0515 09:43:47.539490 2189 kubelet_node_status.go:95] "Unable 
to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" May 15 09:43:47.642803 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. May 15 09:43:47.655968 systemd[1]: Created slice kubepods-burstable-podaac74b7d3fe482cd60ce47ba1651cdb3.slice - libcontainer container kubepods-burstable-podaac74b7d3fe482cd60ce47ba1651cdb3.slice. May 15 09:43:47.660129 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 15 09:43:47.720081 kubelet[2189]: I0515 09:43:47.720045 2189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aac74b7d3fe482cd60ce47ba1651cdb3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aac74b7d3fe482cd60ce47ba1651cdb3\") " pod="kube-system/kube-apiserver-localhost" May 15 09:43:47.720081 kubelet[2189]: I0515 09:43:47.720084 2189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aac74b7d3fe482cd60ce47ba1651cdb3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aac74b7d3fe482cd60ce47ba1651cdb3\") " pod="kube-system/kube-apiserver-localhost" May 15 09:43:47.720214 kubelet[2189]: I0515 09:43:47.720106 2189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:43:47.720214 kubelet[2189]: I0515 
09:43:47.720133 2189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:43:47.720214 kubelet[2189]: I0515 09:43:47.720150 2189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:43:47.720214 kubelet[2189]: I0515 09:43:47.720165 2189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 15 09:43:47.720214 kubelet[2189]: I0515 09:43:47.720179 2189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aac74b7d3fe482cd60ce47ba1651cdb3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aac74b7d3fe482cd60ce47ba1651cdb3\") " pod="kube-system/kube-apiserver-localhost" May 15 09:43:47.720313 kubelet[2189]: I0515 09:43:47.720192 2189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:43:47.720313 kubelet[2189]: I0515 
09:43:47.720211 2189 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:43:47.741338 kubelet[2189]: I0515 09:43:47.741297 2189 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 09:43:47.741659 kubelet[2189]: E0515 09:43:47.741633 2189 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" May 15 09:43:47.921184 kubelet[2189]: E0515 09:43:47.921135 2189 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="800ms" May 15 09:43:47.954796 kubelet[2189]: E0515 09:43:47.954711 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:47.955468 containerd[1470]: time="2025-05-15T09:43:47.955421649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 15 09:43:47.958640 kubelet[2189]: E0515 09:43:47.958598 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:47.959324 containerd[1470]: time="2025-05-15T09:43:47.959188191Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aac74b7d3fe482cd60ce47ba1651cdb3,Namespace:kube-system,Attempt:0,}" May 15 09:43:47.962771 kubelet[2189]: E0515 09:43:47.962736 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:47.964437 containerd[1470]: time="2025-05-15T09:43:47.964393177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 15 09:43:48.131614 kubelet[2189]: W0515 09:43:48.131527 2189 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused May 15 09:43:48.131744 kubelet[2189]: E0515 09:43:48.131623 2189 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" May 15 09:43:48.142970 kubelet[2189]: I0515 09:43:48.142931 2189 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 09:43:48.143343 kubelet[2189]: E0515 09:43:48.143292 2189 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" May 15 09:43:48.240285 kubelet[2189]: W0515 09:43:48.240123 2189 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 
10.0.0.135:6443: connect: connection refused May 15 09:43:48.240285 kubelet[2189]: E0515 09:43:48.240192 2189 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" May 15 09:43:48.440012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount448157456.mount: Deactivated successfully. May 15 09:43:48.445301 containerd[1470]: time="2025-05-15T09:43:48.445247355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 09:43:48.448082 containerd[1470]: time="2025-05-15T09:43:48.448022034Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 15 09:43:48.448598 containerd[1470]: time="2025-05-15T09:43:48.448556646Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 09:43:48.449385 containerd[1470]: time="2025-05-15T09:43:48.449364105Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 09:43:48.450023 containerd[1470]: time="2025-05-15T09:43:48.449841748Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 09:43:48.450565 containerd[1470]: time="2025-05-15T09:43:48.450517144Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 09:43:48.451619 containerd[1470]: time="2025-05-15T09:43:48.451517357Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 09:43:48.453605 containerd[1470]: time="2025-05-15T09:43:48.453276180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 09:43:48.457033 containerd[1470]: time="2025-05-15T09:43:48.456979339Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 492.409647ms" May 15 09:43:48.457604 containerd[1470]: time="2025-05-15T09:43:48.457356884Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 501.85214ms" May 15 09:43:48.463296 containerd[1470]: time="2025-05-15T09:43:48.463253421Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 503.993337ms" May 15 09:43:48.638973 containerd[1470]: time="2025-05-15T09:43:48.638640953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 09:43:48.638973 containerd[1470]: time="2025-05-15T09:43:48.638711486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 09:43:48.638973 containerd[1470]: time="2025-05-15T09:43:48.638724448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:43:48.638973 containerd[1470]: time="2025-05-15T09:43:48.638804742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:43:48.640017 containerd[1470]: time="2025-05-15T09:43:48.639908452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 09:43:48.640017 containerd[1470]: time="2025-05-15T09:43:48.639982425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 09:43:48.640017 containerd[1470]: time="2025-05-15T09:43:48.640002548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:43:48.640205 containerd[1470]: time="2025-05-15T09:43:48.640093564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:43:48.641321 containerd[1470]: time="2025-05-15T09:43:48.641199995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 09:43:48.641321 containerd[1470]: time="2025-05-15T09:43:48.641258005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 09:43:48.641321 containerd[1470]: time="2025-05-15T09:43:48.641276768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:43:48.641496 containerd[1470]: time="2025-05-15T09:43:48.641346020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:43:48.664851 systemd[1]: Started cri-containerd-4e561e12ac3629c188318492c5b34d807676e6eb72ec8526602906fe13ba311f.scope - libcontainer container 4e561e12ac3629c188318492c5b34d807676e6eb72ec8526602906fe13ba311f. May 15 09:43:48.666271 systemd[1]: Started cri-containerd-db3002095be1e733d11fe25e8cbe316f93e1f0d464b51bfedcd0b25dd293f672.scope - libcontainer container db3002095be1e733d11fe25e8cbe316f93e1f0d464b51bfedcd0b25dd293f672. May 15 09:43:48.670053 systemd[1]: Started cri-containerd-737e3fae7826acc32d030dd8e0e9b94c33a2583928cfa248a1b027a7c90cb91d.scope - libcontainer container 737e3fae7826acc32d030dd8e0e9b94c33a2583928cfa248a1b027a7c90cb91d. 
May 15 09:43:48.686591 kubelet[2189]: W0515 09:43:48.686517 2189 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused May 15 09:43:48.686938 kubelet[2189]: E0515 09:43:48.686617 2189 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" May 15 09:43:48.697198 containerd[1470]: time="2025-05-15T09:43:48.696996579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e561e12ac3629c188318492c5b34d807676e6eb72ec8526602906fe13ba311f\"" May 15 09:43:48.698856 kubelet[2189]: E0515 09:43:48.698831 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:48.703171 containerd[1470]: time="2025-05-15T09:43:48.703058585Z" level=info msg="CreateContainer within sandbox \"4e561e12ac3629c188318492c5b34d807676e6eb72ec8526602906fe13ba311f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 09:43:48.706379 containerd[1470]: time="2025-05-15T09:43:48.706145597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aac74b7d3fe482cd60ce47ba1651cdb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"737e3fae7826acc32d030dd8e0e9b94c33a2583928cfa248a1b027a7c90cb91d\"" May 15 09:43:48.706734 containerd[1470]: time="2025-05-15T09:43:48.706706494Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"db3002095be1e733d11fe25e8cbe316f93e1f0d464b51bfedcd0b25dd293f672\"" May 15 09:43:48.707090 kubelet[2189]: E0515 09:43:48.706922 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:48.708016 kubelet[2189]: E0515 09:43:48.707954 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:48.708814 containerd[1470]: time="2025-05-15T09:43:48.708785253Z" level=info msg="CreateContainer within sandbox \"737e3fae7826acc32d030dd8e0e9b94c33a2583928cfa248a1b027a7c90cb91d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 09:43:48.710160 containerd[1470]: time="2025-05-15T09:43:48.710129724Z" level=info msg="CreateContainer within sandbox \"db3002095be1e733d11fe25e8cbe316f93e1f0d464b51bfedcd0b25dd293f672\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 09:43:48.718057 containerd[1470]: time="2025-05-15T09:43:48.718003163Z" level=info msg="CreateContainer within sandbox \"4e561e12ac3629c188318492c5b34d807676e6eb72ec8526602906fe13ba311f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"84d7a9fc9cc264446e6a4d53382dbb8abbce148a9d2005532a1094e21b0df601\"" May 15 09:43:48.719001 containerd[1470]: time="2025-05-15T09:43:48.718689721Z" level=info msg="StartContainer for \"84d7a9fc9cc264446e6a4d53382dbb8abbce148a9d2005532a1094e21b0df601\"" May 15 09:43:48.721927 kubelet[2189]: E0515 09:43:48.721880 2189 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": 
dial tcp 10.0.0.135:6443: connect: connection refused" interval="1.6s" May 15 09:43:48.725407 containerd[1470]: time="2025-05-15T09:43:48.725349550Z" level=info msg="CreateContainer within sandbox \"db3002095be1e733d11fe25e8cbe316f93e1f0d464b51bfedcd0b25dd293f672\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"34787fda61c168f9670b8d0c8216ff8cf521c2b5129dc5adfdacc4232b8e6183\"" May 15 09:43:48.725988 containerd[1470]: time="2025-05-15T09:43:48.725961575Z" level=info msg="StartContainer for \"34787fda61c168f9670b8d0c8216ff8cf521c2b5129dc5adfdacc4232b8e6183\"" May 15 09:43:48.729297 containerd[1470]: time="2025-05-15T09:43:48.729258584Z" level=info msg="CreateContainer within sandbox \"737e3fae7826acc32d030dd8e0e9b94c33a2583928cfa248a1b027a7c90cb91d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"95726a7b0f9c9d7c1283a4ac6b1b1960f77519f329e1d0bb3ce301dd737fbc1b\"" May 15 09:43:48.729784 containerd[1470]: time="2025-05-15T09:43:48.729757350Z" level=info msg="StartContainer for \"95726a7b0f9c9d7c1283a4ac6b1b1960f77519f329e1d0bb3ce301dd737fbc1b\"" May 15 09:43:48.742776 systemd[1]: Started cri-containerd-84d7a9fc9cc264446e6a4d53382dbb8abbce148a9d2005532a1094e21b0df601.scope - libcontainer container 84d7a9fc9cc264446e6a4d53382dbb8abbce148a9d2005532a1094e21b0df601. May 15 09:43:48.747396 systemd[1]: Started cri-containerd-34787fda61c168f9670b8d0c8216ff8cf521c2b5129dc5adfdacc4232b8e6183.scope - libcontainer container 34787fda61c168f9670b8d0c8216ff8cf521c2b5129dc5adfdacc4232b8e6183. May 15 09:43:48.754741 systemd[1]: Started cri-containerd-95726a7b0f9c9d7c1283a4ac6b1b1960f77519f329e1d0bb3ce301dd737fbc1b.scope - libcontainer container 95726a7b0f9c9d7c1283a4ac6b1b1960f77519f329e1d0bb3ce301dd737fbc1b. 
May 15 09:43:48.787629 containerd[1470]: time="2025-05-15T09:43:48.787586765Z" level=info msg="StartContainer for \"84d7a9fc9cc264446e6a4d53382dbb8abbce148a9d2005532a1094e21b0df601\" returns successfully" May 15 09:43:48.787916 containerd[1470]: time="2025-05-15T09:43:48.787696704Z" level=info msg="StartContainer for \"34787fda61c168f9670b8d0c8216ff8cf521c2b5129dc5adfdacc4232b8e6183\" returns successfully" May 15 09:43:48.806156 containerd[1470]: time="2025-05-15T09:43:48.806029986Z" level=info msg="StartContainer for \"95726a7b0f9c9d7c1283a4ac6b1b1960f77519f329e1d0bb3ce301dd737fbc1b\" returns successfully" May 15 09:43:48.886817 kubelet[2189]: W0515 09:43:48.885990 2189 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused May 15 09:43:48.886817 kubelet[2189]: E0515 09:43:48.886047 2189 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" May 15 09:43:48.946041 kubelet[2189]: I0515 09:43:48.944886 2189 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 09:43:48.946041 kubelet[2189]: E0515 09:43:48.945217 2189 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" May 15 09:43:49.344055 kubelet[2189]: E0515 09:43:49.343957 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:49.346820 kubelet[2189]: E0515 
09:43:49.346256 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:49.349044 kubelet[2189]: E0515 09:43:49.348903 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:50.326894 kubelet[2189]: E0515 09:43:50.326853 2189 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 15 09:43:50.351729 kubelet[2189]: E0515 09:43:50.351701 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:50.539088 kubelet[2189]: E0515 09:43:50.539054 2189 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 15 09:43:50.547612 kubelet[2189]: I0515 09:43:50.547304 2189 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 09:43:50.557292 kubelet[2189]: I0515 09:43:50.557262 2189 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 15 09:43:50.557724 kubelet[2189]: E0515 09:43:50.557454 2189 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 15 09:43:50.569729 kubelet[2189]: E0515 09:43:50.569699 2189 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 09:43:50.670733 kubelet[2189]: E0515 09:43:50.670686 2189 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 09:43:50.771170 kubelet[2189]: E0515 09:43:50.771121 2189 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 09:43:50.872206 kubelet[2189]: E0515 09:43:50.872163 2189 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 09:43:51.305775 kubelet[2189]: I0515 09:43:51.305731 2189 apiserver.go:52] "Watching apiserver" May 15 09:43:51.319362 kubelet[2189]: I0515 09:43:51.319331 2189 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 15 09:43:52.418752 systemd[1]: Reloading requested from client PID 2462 ('systemctl') (unit session-7.scope)... May 15 09:43:52.418771 systemd[1]: Reloading... May 15 09:43:52.473722 zram_generator::config[2501]: No configuration found. May 15 09:43:52.568344 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 09:43:52.632828 systemd[1]: Reloading finished in 213 ms. May 15 09:43:52.671275 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 09:43:52.690547 systemd[1]: kubelet.service: Deactivated successfully. May 15 09:43:52.690831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 09:43:52.690888 systemd[1]: kubelet.service: Consumed 1.230s CPU time, 121.5M memory peak, 0B memory swap peak. May 15 09:43:52.702835 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 09:43:52.798799 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 15 09:43:52.801821 (kubelet)[2543]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 09:43:52.837611 kubelet[2543]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 09:43:52.837611 kubelet[2543]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 09:43:52.837611 kubelet[2543]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 09:43:52.837946 kubelet[2543]: I0515 09:43:52.837676 2543 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 09:43:52.845684 kubelet[2543]: I0515 09:43:52.845326 2543 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 15 09:43:52.845684 kubelet[2543]: I0515 09:43:52.845363 2543 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 09:43:52.845858 kubelet[2543]: I0515 09:43:52.845827 2543 server.go:929] "Client rotation is on, will bootstrap in background" May 15 09:43:52.847953 kubelet[2543]: I0515 09:43:52.847697 2543 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 15 09:43:52.849778 kubelet[2543]: I0515 09:43:52.849625 2543 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 09:43:52.852500 kubelet[2543]: E0515 09:43:52.852464 2543 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 09:43:52.853142 kubelet[2543]: I0515 09:43:52.852648 2543 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 15 09:43:52.855225 kubelet[2543]: I0515 09:43:52.855186 2543 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 09:43:52.855354 kubelet[2543]: I0515 09:43:52.855338 2543 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 09:43:52.855485 kubelet[2543]: I0515 09:43:52.855446 2543 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 09:43:52.855688 kubelet[2543]: I0515 09:43:52.855477 2543 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 09:43:52.855765 kubelet[2543]: I0515 09:43:52.855693 2543 topology_manager.go:138] "Creating topology manager with none policy" May 15 09:43:52.855765 kubelet[2543]: I0515 09:43:52.855704 2543 container_manager_linux.go:300] "Creating device plugin manager" May 15 09:43:52.855765 kubelet[2543]: I0515 09:43:52.855735 2543 state_mem.go:36] "Initialized new in-memory state store" May 15 09:43:52.855867 kubelet[2543]: I0515 09:43:52.855856 2543 kubelet.go:408] "Attempting 
to sync node with API server" May 15 09:43:52.855898 kubelet[2543]: I0515 09:43:52.855880 2543 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 09:43:52.855934 kubelet[2543]: I0515 09:43:52.855903 2543 kubelet.go:314] "Adding apiserver pod source" May 15 09:43:52.855934 kubelet[2543]: I0515 09:43:52.855915 2543 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 09:43:52.859439 kubelet[2543]: I0515 09:43:52.856688 2543 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 15 09:43:52.859439 kubelet[2543]: I0515 09:43:52.858611 2543 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 09:43:52.862203 kubelet[2543]: I0515 09:43:52.860436 2543 server.go:1269] "Started kubelet" May 15 09:43:52.862755 kubelet[2543]: I0515 09:43:52.862478 2543 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 09:43:52.862846 kubelet[2543]: I0515 09:43:52.862799 2543 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 09:43:52.862870 kubelet[2543]: I0515 09:43:52.862856 2543 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 09:43:52.863437 kubelet[2543]: I0515 09:43:52.863421 2543 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 09:43:52.864021 kubelet[2543]: I0515 09:43:52.863986 2543 server.go:460] "Adding debug handlers to kubelet server" May 15 09:43:52.864940 kubelet[2543]: I0515 09:43:52.864913 2543 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 09:43:52.868580 kubelet[2543]: I0515 09:43:52.866607 2543 volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 09:43:52.868580 kubelet[2543]: I0515 09:43:52.866744 2543 
desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 15 09:43:52.868580 kubelet[2543]: I0515 09:43:52.866909 2543 reconciler.go:26] "Reconciler: start to sync state" May 15 09:43:52.868580 kubelet[2543]: E0515 09:43:52.866745 2543 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 09:43:52.871759 kubelet[2543]: I0515 09:43:52.871734 2543 factory.go:221] Registration of the systemd container factory successfully May 15 09:43:52.872370 kubelet[2543]: I0515 09:43:52.872050 2543 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 09:43:52.873115 kubelet[2543]: E0515 09:43:52.873080 2543 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 09:43:52.874387 kubelet[2543]: I0515 09:43:52.874358 2543 factory.go:221] Registration of the containerd container factory successfully May 15 09:43:52.883405 kubelet[2543]: I0515 09:43:52.883358 2543 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 09:43:52.886557 kubelet[2543]: I0515 09:43:52.886517 2543 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 09:43:52.886557 kubelet[2543]: I0515 09:43:52.886556 2543 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 09:43:52.886694 kubelet[2543]: I0515 09:43:52.886597 2543 kubelet.go:2321] "Starting kubelet main sync loop" May 15 09:43:52.886694 kubelet[2543]: E0515 09:43:52.886657 2543 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 09:43:52.913615 kubelet[2543]: I0515 09:43:52.913582 2543 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 09:43:52.913615 kubelet[2543]: I0515 09:43:52.913609 2543 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 09:43:52.913770 kubelet[2543]: I0515 09:43:52.913632 2543 state_mem.go:36] "Initialized new in-memory state store" May 15 09:43:52.913795 kubelet[2543]: I0515 09:43:52.913787 2543 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 09:43:52.913818 kubelet[2543]: I0515 09:43:52.913798 2543 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 09:43:52.913867 kubelet[2543]: I0515 09:43:52.913818 2543 policy_none.go:49] "None policy: Start" May 15 09:43:52.914568 kubelet[2543]: I0515 09:43:52.914549 2543 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 09:43:52.914568 kubelet[2543]: I0515 09:43:52.914583 2543 state_mem.go:35] "Initializing new in-memory state store" May 15 09:43:52.914753 kubelet[2543]: I0515 09:43:52.914737 2543 state_mem.go:75] "Updated machine memory state" May 15 09:43:52.918415 kubelet[2543]: I0515 09:43:52.918389 2543 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 09:43:52.918598 kubelet[2543]: I0515 09:43:52.918553 2543 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 09:43:52.918717 kubelet[2543]: I0515 09:43:52.918673 2543 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 09:43:52.918904 kubelet[2543]: I0515 09:43:52.918873 2543 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 09:43:53.022731 kubelet[2543]: I0515 09:43:53.022459 2543 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 09:43:53.029125 kubelet[2543]: I0515 09:43:53.029077 2543 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 15 09:43:53.029262 kubelet[2543]: I0515 09:43:53.029185 2543 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 15 09:43:53.067893 kubelet[2543]: I0515 09:43:53.067861 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:43:53.067893 kubelet[2543]: I0515 09:43:53.067898 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:43:53.068066 kubelet[2543]: I0515 09:43:53.067922 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 15 09:43:53.068066 kubelet[2543]: I0515 09:43:53.067952 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aac74b7d3fe482cd60ce47ba1651cdb3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aac74b7d3fe482cd60ce47ba1651cdb3\") " pod="kube-system/kube-apiserver-localhost" May 15 09:43:53.068066 kubelet[2543]: I0515 09:43:53.067974 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:43:53.068066 kubelet[2543]: I0515 09:43:53.067989 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:43:53.068066 kubelet[2543]: I0515 09:43:53.068002 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:43:53.068172 kubelet[2543]: I0515 09:43:53.068020 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aac74b7d3fe482cd60ce47ba1651cdb3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aac74b7d3fe482cd60ce47ba1651cdb3\") " pod="kube-system/kube-apiserver-localhost" May 15 09:43:53.068172 kubelet[2543]: I0515 09:43:53.068034 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/aac74b7d3fe482cd60ce47ba1651cdb3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aac74b7d3fe482cd60ce47ba1651cdb3\") " pod="kube-system/kube-apiserver-localhost" May 15 09:43:53.296032 kubelet[2543]: E0515 09:43:53.295903 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:53.296174 kubelet[2543]: E0515 09:43:53.296145 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:53.296662 kubelet[2543]: E0515 09:43:53.296590 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:53.402833 sudo[2582]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 15 09:43:53.403116 sudo[2582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 15 09:43:53.826309 sudo[2582]: pam_unix(sudo:session): session closed for user root May 15 09:43:53.856592 kubelet[2543]: I0515 09:43:53.856510 2543 apiserver.go:52] "Watching apiserver" May 15 09:43:53.867123 kubelet[2543]: I0515 09:43:53.867058 2543 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 15 09:43:53.902488 kubelet[2543]: E0515 09:43:53.902435 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:53.903267 kubelet[2543]: E0515 09:43:53.902805 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
May 15 09:43:53.912077 kubelet[2543]: E0515 09:43:53.912004 2543 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 09:43:53.912363 kubelet[2543]: E0515 09:43:53.912346 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:53.934118 kubelet[2543]: I0515 09:43:53.933713 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.9336951519999999 podStartE2EDuration="1.933695152s" podCreationTimestamp="2025-05-15 09:43:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 09:43:53.922445836 +0000 UTC m=+1.117450022" watchObservedRunningTime="2025-05-15 09:43:53.933695152 +0000 UTC m=+1.128699338" May 15 09:43:53.954353 kubelet[2543]: I0515 09:43:53.954082 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.9540636820000001 podStartE2EDuration="1.954063682s" podCreationTimestamp="2025-05-15 09:43:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 09:43:53.953319752 +0000 UTC m=+1.148323938" watchObservedRunningTime="2025-05-15 09:43:53.954063682 +0000 UTC m=+1.149067868" May 15 09:43:53.954353 kubelet[2543]: I0515 09:43:53.954229 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.954224612 podStartE2EDuration="1.954224612s" podCreationTimestamp="2025-05-15 09:43:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-05-15 09:43:53.936690113 +0000 UTC m=+1.131694379" watchObservedRunningTime="2025-05-15 09:43:53.954224612 +0000 UTC m=+1.149228798" May 15 09:43:54.903685 kubelet[2543]: E0515 09:43:54.903652 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:55.506844 sudo[1650]: pam_unix(sudo:session): session closed for user root May 15 09:43:55.507939 sshd[1649]: Connection closed by 10.0.0.1 port 59180 May 15 09:43:55.508355 sshd-session[1647]: pam_unix(sshd:session): session closed for user core May 15 09:43:55.511141 systemd[1]: sshd@6-10.0.0.135:22-10.0.0.1:59180.service: Deactivated successfully. May 15 09:43:55.512993 systemd[1]: session-7.scope: Deactivated successfully. May 15 09:43:55.513153 systemd[1]: session-7.scope: Consumed 7.391s CPU time, 156.1M memory peak, 0B memory swap peak. May 15 09:43:55.514418 systemd-logind[1451]: Session 7 logged out. Waiting for processes to exit. May 15 09:43:55.515638 systemd-logind[1451]: Removed session 7. May 15 09:43:57.854549 kubelet[2543]: I0515 09:43:57.854513 2543 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 09:43:57.855373 containerd[1470]: time="2025-05-15T09:43:57.855335749Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 09:43:57.855882 kubelet[2543]: I0515 09:43:57.855527 2543 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 09:43:58.624332 systemd[1]: Created slice kubepods-besteffort-pod8033aa8c_c2be_456f_ba8c_291c7b64ebde.slice - libcontainer container kubepods-besteffort-pod8033aa8c_c2be_456f_ba8c_291c7b64ebde.slice. 
May 15 09:43:58.640915 systemd[1]: Created slice kubepods-burstable-podc41ced47_fd5a_4e63_a558_e0c2cd3beb89.slice - libcontainer container kubepods-burstable-podc41ced47_fd5a_4e63_a558_e0c2cd3beb89.slice. May 15 09:43:58.671847 kubelet[2543]: E0515 09:43:58.671817 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:58.703833 kubelet[2543]: I0515 09:43:58.703790 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-hostproc\") pod \"cilium-b8sj9\" (UID: \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") " pod="kube-system/cilium-b8sj9" May 15 09:43:58.703833 kubelet[2543]: I0515 09:43:58.703832 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-cni-path\") pod \"cilium-b8sj9\" (UID: \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") " pod="kube-system/cilium-b8sj9" May 15 09:43:58.704115 kubelet[2543]: I0515 09:43:58.703848 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-hubble-tls\") pod \"cilium-b8sj9\" (UID: \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") " pod="kube-system/cilium-b8sj9" May 15 09:43:58.704115 kubelet[2543]: I0515 09:43:58.703867 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqblg\" (UniqueName: \"kubernetes.io/projected/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-kube-api-access-lqblg\") pod \"cilium-b8sj9\" (UID: \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") " pod="kube-system/cilium-b8sj9" May 15 09:43:58.704115 kubelet[2543]: I0515 09:43:58.703885 2543 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-bpf-maps\") pod \"cilium-b8sj9\" (UID: \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") " pod="kube-system/cilium-b8sj9" May 15 09:43:58.704115 kubelet[2543]: I0515 09:43:58.703900 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-cilium-cgroup\") pod \"cilium-b8sj9\" (UID: \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") " pod="kube-system/cilium-b8sj9" May 15 09:43:58.704115 kubelet[2543]: I0515 09:43:58.703916 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8033aa8c-c2be-456f-ba8c-291c7b64ebde-xtables-lock\") pod \"kube-proxy-cz5nw\" (UID: \"8033aa8c-c2be-456f-ba8c-291c7b64ebde\") " pod="kube-system/kube-proxy-cz5nw" May 15 09:43:58.704115 kubelet[2543]: I0515 09:43:58.703941 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-host-proc-sys-kernel\") pod \"cilium-b8sj9\" (UID: \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") " pod="kube-system/cilium-b8sj9" May 15 09:43:58.704251 kubelet[2543]: I0515 09:43:58.703960 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-etc-cni-netd\") pod \"cilium-b8sj9\" (UID: \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") " pod="kube-system/cilium-b8sj9" May 15 09:43:58.704251 kubelet[2543]: I0515 09:43:58.703986 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-xtables-lock\") pod \"cilium-b8sj9\" (UID: \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") " pod="kube-system/cilium-b8sj9" May 15 09:43:58.704251 kubelet[2543]: I0515 09:43:58.704012 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8033aa8c-c2be-456f-ba8c-291c7b64ebde-kube-proxy\") pod \"kube-proxy-cz5nw\" (UID: \"8033aa8c-c2be-456f-ba8c-291c7b64ebde\") " pod="kube-system/kube-proxy-cz5nw" May 15 09:43:58.704251 kubelet[2543]: I0515 09:43:58.704026 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-cilium-run\") pod \"cilium-b8sj9\" (UID: \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") " pod="kube-system/cilium-b8sj9" May 15 09:43:58.704251 kubelet[2543]: I0515 09:43:58.704047 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-lib-modules\") pod \"cilium-b8sj9\" (UID: \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") " pod="kube-system/cilium-b8sj9" May 15 09:43:58.704251 kubelet[2543]: I0515 09:43:58.704080 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-clustermesh-secrets\") pod \"cilium-b8sj9\" (UID: \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") " pod="kube-system/cilium-b8sj9" May 15 09:43:58.704361 kubelet[2543]: I0515 09:43:58.704121 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-cilium-config-path\") pod \"cilium-b8sj9\" (UID: 
\"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") " pod="kube-system/cilium-b8sj9" May 15 09:43:58.704361 kubelet[2543]: I0515 09:43:58.704159 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-host-proc-sys-net\") pod \"cilium-b8sj9\" (UID: \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") " pod="kube-system/cilium-b8sj9" May 15 09:43:58.704361 kubelet[2543]: I0515 09:43:58.704208 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8033aa8c-c2be-456f-ba8c-291c7b64ebde-lib-modules\") pod \"kube-proxy-cz5nw\" (UID: \"8033aa8c-c2be-456f-ba8c-291c7b64ebde\") " pod="kube-system/kube-proxy-cz5nw" May 15 09:43:58.704361 kubelet[2543]: I0515 09:43:58.704227 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cwzm\" (UniqueName: \"kubernetes.io/projected/8033aa8c-c2be-456f-ba8c-291c7b64ebde-kube-api-access-6cwzm\") pod \"kube-proxy-cz5nw\" (UID: \"8033aa8c-c2be-456f-ba8c-291c7b64ebde\") " pod="kube-system/kube-proxy-cz5nw" May 15 09:43:58.910313 kubelet[2543]: E0515 09:43:58.910148 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:58.940915 kubelet[2543]: E0515 09:43:58.940303 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:58.941360 containerd[1470]: time="2025-05-15T09:43:58.941329005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cz5nw,Uid:8033aa8c-c2be-456f-ba8c-291c7b64ebde,Namespace:kube-system,Attempt:0,}" May 15 09:43:58.945261 kubelet[2543]: E0515 09:43:58.945236 2543 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:58.946457 containerd[1470]: time="2025-05-15T09:43:58.945617063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b8sj9,Uid:c41ced47-fd5a-4e63-a558-e0c2cd3beb89,Namespace:kube-system,Attempt:0,}" May 15 09:43:58.963257 kubelet[2543]: E0515 09:43:58.963092 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:58.969614 systemd[1]: Created slice kubepods-besteffort-pod02528fde_a7fa_4469_a65d_322d4dc5bffd.slice - libcontainer container kubepods-besteffort-pod02528fde_a7fa_4469_a65d_322d4dc5bffd.slice. May 15 09:43:58.977322 containerd[1470]: time="2025-05-15T09:43:58.977235227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 09:43:58.979244 containerd[1470]: time="2025-05-15T09:43:58.978673140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 09:43:58.979244 containerd[1470]: time="2025-05-15T09:43:58.978798147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:43:58.979244 containerd[1470]: time="2025-05-15T09:43:58.979096122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:43:59.002255 containerd[1470]: time="2025-05-15T09:43:58.996790460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 09:43:59.002255 containerd[1470]: time="2025-05-15T09:43:58.996856423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 09:43:59.002255 containerd[1470]: time="2025-05-15T09:43:58.996872704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:43:59.002255 containerd[1470]: time="2025-05-15T09:43:58.996956308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:43:59.006407 kubelet[2543]: I0515 09:43:59.006374 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02528fde-a7fa-4469-a65d-322d4dc5bffd-cilium-config-path\") pod \"cilium-operator-5d85765b45-cvlbc\" (UID: \"02528fde-a7fa-4469-a65d-322d4dc5bffd\") " pod="kube-system/cilium-operator-5d85765b45-cvlbc" May 15 09:43:59.006407 kubelet[2543]: I0515 09:43:59.006418 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjvdp\" (UniqueName: \"kubernetes.io/projected/02528fde-a7fa-4469-a65d-322d4dc5bffd-kube-api-access-sjvdp\") pod \"cilium-operator-5d85765b45-cvlbc\" (UID: \"02528fde-a7fa-4469-a65d-322d4dc5bffd\") " pod="kube-system/cilium-operator-5d85765b45-cvlbc" May 15 09:43:59.019744 systemd[1]: Started cri-containerd-8037f84f29f519ac93986dd98e2466995ab58d5d63aaa8c30e5d84735278817c.scope - libcontainer container 8037f84f29f519ac93986dd98e2466995ab58d5d63aaa8c30e5d84735278817c. May 15 09:43:59.022507 systemd[1]: Started cri-containerd-76b9b00ba34c6120a10f087d51dec24e6f999f90a978bf4126ee33804fe35190.scope - libcontainer container 76b9b00ba34c6120a10f087d51dec24e6f999f90a978bf4126ee33804fe35190. 
May 15 09:43:59.044084 containerd[1470]: time="2025-05-15T09:43:59.044038260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cz5nw,Uid:8033aa8c-c2be-456f-ba8c-291c7b64ebde,Namespace:kube-system,Attempt:0,} returns sandbox id \"8037f84f29f519ac93986dd98e2466995ab58d5d63aaa8c30e5d84735278817c\"" May 15 09:43:59.044967 kubelet[2543]: E0515 09:43:59.044692 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:59.047481 containerd[1470]: time="2025-05-15T09:43:59.047452824Z" level=info msg="CreateContainer within sandbox \"8037f84f29f519ac93986dd98e2466995ab58d5d63aaa8c30e5d84735278817c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 09:43:59.048040 containerd[1470]: time="2025-05-15T09:43:59.047995250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b8sj9,Uid:c41ced47-fd5a-4e63-a558-e0c2cd3beb89,Namespace:kube-system,Attempt:0,} returns sandbox id \"76b9b00ba34c6120a10f087d51dec24e6f999f90a978bf4126ee33804fe35190\"" May 15 09:43:59.049150 kubelet[2543]: E0515 09:43:59.049106 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:59.050394 containerd[1470]: time="2025-05-15T09:43:59.050116152Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 15 09:43:59.075193 containerd[1470]: time="2025-05-15T09:43:59.075152595Z" level=info msg="CreateContainer within sandbox \"8037f84f29f519ac93986dd98e2466995ab58d5d63aaa8c30e5d84735278817c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"943aaec9fe70a2f4e52f6fb6b40d7748daeaa1a7a2037be567c97a9529c10026\"" May 15 09:43:59.075857 containerd[1470]: time="2025-05-15T09:43:59.075830788Z" 
level=info msg="StartContainer for \"943aaec9fe70a2f4e52f6fb6b40d7748daeaa1a7a2037be567c97a9529c10026\"" May 15 09:43:59.100727 systemd[1]: Started cri-containerd-943aaec9fe70a2f4e52f6fb6b40d7748daeaa1a7a2037be567c97a9529c10026.scope - libcontainer container 943aaec9fe70a2f4e52f6fb6b40d7748daeaa1a7a2037be567c97a9529c10026. May 15 09:43:59.129867 containerd[1470]: time="2025-05-15T09:43:59.129824902Z" level=info msg="StartContainer for \"943aaec9fe70a2f4e52f6fb6b40d7748daeaa1a7a2037be567c97a9529c10026\" returns successfully" May 15 09:43:59.275128 kubelet[2543]: E0515 09:43:59.275031 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:59.275804 containerd[1470]: time="2025-05-15T09:43:59.275766114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-cvlbc,Uid:02528fde-a7fa-4469-a65d-322d4dc5bffd,Namespace:kube-system,Attempt:0,}" May 15 09:43:59.307783 containerd[1470]: time="2025-05-15T09:43:59.307186464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 09:43:59.307783 containerd[1470]: time="2025-05-15T09:43:59.307246747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 09:43:59.307783 containerd[1470]: time="2025-05-15T09:43:59.307261948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:43:59.307783 containerd[1470]: time="2025-05-15T09:43:59.307328551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:43:59.326525 systemd[1]: Started cri-containerd-9a7bfc02b532897c80d3c345fc20de5c6d3e5a8ec878d1bf9aa3441799d0ef4e.scope - libcontainer container 9a7bfc02b532897c80d3c345fc20de5c6d3e5a8ec878d1bf9aa3441799d0ef4e. May 15 09:43:59.353676 containerd[1470]: time="2025-05-15T09:43:59.353444607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-cvlbc,Uid:02528fde-a7fa-4469-a65d-322d4dc5bffd,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a7bfc02b532897c80d3c345fc20de5c6d3e5a8ec878d1bf9aa3441799d0ef4e\"" May 15 09:43:59.354107 kubelet[2543]: E0515 09:43:59.353977 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:59.663273 kubelet[2543]: E0515 09:43:59.663245 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:59.914375 kubelet[2543]: E0515 09:43:59.914207 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:59.916626 kubelet[2543]: E0515 09:43:59.915945 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:59.917269 kubelet[2543]: E0515 09:43:59.917243 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:43:59.924794 kubelet[2543]: I0515 09:43:59.923279 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cz5nw" podStartSLOduration=1.923267705 
podStartE2EDuration="1.923267705s" podCreationTimestamp="2025-05-15 09:43:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 09:43:59.922986452 +0000 UTC m=+7.117990638" watchObservedRunningTime="2025-05-15 09:43:59.923267705 +0000 UTC m=+7.118271891" May 15 09:44:03.582956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2050398470.mount: Deactivated successfully. May 15 09:44:05.140880 update_engine[1457]: I20250515 09:44:05.139086 1457 update_attempter.cc:509] Updating boot flags... May 15 09:44:05.175736 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2934) May 15 09:44:05.217515 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2937) May 15 09:44:05.234612 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2937) May 15 09:44:06.278003 containerd[1470]: time="2025-05-15T09:44:06.277949024Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:44:06.278458 containerd[1470]: time="2025-05-15T09:44:06.278407720Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 15 09:44:06.279182 containerd[1470]: time="2025-05-15T09:44:06.279141824Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:44:06.280622 containerd[1470]: time="2025-05-15T09:44:06.280590232Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.230427278s" May 15 09:44:06.280702 containerd[1470]: time="2025-05-15T09:44:06.280624593Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 15 09:44:06.286326 containerd[1470]: time="2025-05-15T09:44:06.286288942Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 15 09:44:06.287813 containerd[1470]: time="2025-05-15T09:44:06.287783832Z" level=info msg="CreateContainer within sandbox \"76b9b00ba34c6120a10f087d51dec24e6f999f90a978bf4126ee33804fe35190\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 09:44:06.326836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4097567539.mount: Deactivated successfully. May 15 09:44:06.328349 containerd[1470]: time="2025-05-15T09:44:06.328312061Z" level=info msg="CreateContainer within sandbox \"76b9b00ba34c6120a10f087d51dec24e6f999f90a978bf4126ee33804fe35190\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0ee9fbf3e78eab71898ebada863fd79439c70eb206abf9b91656d4d39c18645c\"" May 15 09:44:06.328779 containerd[1470]: time="2025-05-15T09:44:06.328714835Z" level=info msg="StartContainer for \"0ee9fbf3e78eab71898ebada863fd79439c70eb206abf9b91656d4d39c18645c\"" May 15 09:44:06.359728 systemd[1]: Started cri-containerd-0ee9fbf3e78eab71898ebada863fd79439c70eb206abf9b91656d4d39c18645c.scope - libcontainer container 0ee9fbf3e78eab71898ebada863fd79439c70eb206abf9b91656d4d39c18645c. 
May 15 09:44:06.383824 containerd[1470]: time="2025-05-15T09:44:06.383789069Z" level=info msg="StartContainer for \"0ee9fbf3e78eab71898ebada863fd79439c70eb206abf9b91656d4d39c18645c\" returns successfully" May 15 09:44:06.427792 systemd[1]: cri-containerd-0ee9fbf3e78eab71898ebada863fd79439c70eb206abf9b91656d4d39c18645c.scope: Deactivated successfully. May 15 09:44:06.587112 containerd[1470]: time="2025-05-15T09:44:06.581908626Z" level=info msg="shim disconnected" id=0ee9fbf3e78eab71898ebada863fd79439c70eb206abf9b91656d4d39c18645c namespace=k8s.io May 15 09:44:06.587112 containerd[1470]: time="2025-05-15T09:44:06.587048157Z" level=warning msg="cleaning up after shim disconnected" id=0ee9fbf3e78eab71898ebada863fd79439c70eb206abf9b91656d4d39c18645c namespace=k8s.io May 15 09:44:06.587112 containerd[1470]: time="2025-05-15T09:44:06.587061397Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 09:44:06.938890 kubelet[2543]: E0515 09:44:06.938857 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:44:06.941103 containerd[1470]: time="2025-05-15T09:44:06.941057345Z" level=info msg="CreateContainer within sandbox \"76b9b00ba34c6120a10f087d51dec24e6f999f90a978bf4126ee33804fe35190\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 09:44:06.958495 containerd[1470]: time="2025-05-15T09:44:06.958444884Z" level=info msg="CreateContainer within sandbox \"76b9b00ba34c6120a10f087d51dec24e6f999f90a978bf4126ee33804fe35190\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"720ebb1aac810dda53988d7c24f4d29a258d7e7170414776d2bfa84e8e396c75\"" May 15 09:44:06.959585 containerd[1470]: time="2025-05-15T09:44:06.959538160Z" level=info msg="StartContainer for \"720ebb1aac810dda53988d7c24f4d29a258d7e7170414776d2bfa84e8e396c75\"" May 15 09:44:06.985719 systemd[1]: Started 
cri-containerd-720ebb1aac810dda53988d7c24f4d29a258d7e7170414776d2bfa84e8e396c75.scope - libcontainer container 720ebb1aac810dda53988d7c24f4d29a258d7e7170414776d2bfa84e8e396c75. May 15 09:44:07.005650 containerd[1470]: time="2025-05-15T09:44:07.005606408Z" level=info msg="StartContainer for \"720ebb1aac810dda53988d7c24f4d29a258d7e7170414776d2bfa84e8e396c75\" returns successfully" May 15 09:44:07.021687 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 09:44:07.022003 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 09:44:07.022069 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 15 09:44:07.029981 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 09:44:07.030220 systemd[1]: cri-containerd-720ebb1aac810dda53988d7c24f4d29a258d7e7170414776d2bfa84e8e396c75.scope: Deactivated successfully. May 15 09:44:07.042219 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 09:44:07.049666 containerd[1470]: time="2025-05-15T09:44:07.049428116Z" level=info msg="shim disconnected" id=720ebb1aac810dda53988d7c24f4d29a258d7e7170414776d2bfa84e8e396c75 namespace=k8s.io May 15 09:44:07.049666 containerd[1470]: time="2025-05-15T09:44:07.049508279Z" level=warning msg="cleaning up after shim disconnected" id=720ebb1aac810dda53988d7c24f4d29a258d7e7170414776d2bfa84e8e396c75 namespace=k8s.io May 15 09:44:07.049666 containerd[1470]: time="2025-05-15T09:44:07.049517239Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 09:44:07.315129 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ee9fbf3e78eab71898ebada863fd79439c70eb206abf9b91656d4d39c18645c-rootfs.mount: Deactivated successfully. 
May 15 09:44:07.806838 containerd[1470]: time="2025-05-15T09:44:07.806044129Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:44:07.808084 containerd[1470]: time="2025-05-15T09:44:07.808026752Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 15 09:44:07.809062 containerd[1470]: time="2025-05-15T09:44:07.809022423Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:44:07.810361 containerd[1470]: time="2025-05-15T09:44:07.810236342Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.523909679s" May 15 09:44:07.811058 containerd[1470]: time="2025-05-15T09:44:07.811034967Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 15 09:44:07.814276 containerd[1470]: time="2025-05-15T09:44:07.814249109Z" level=info msg="CreateContainer within sandbox \"9a7bfc02b532897c80d3c345fc20de5c6d3e5a8ec878d1bf9aa3441799d0ef4e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 15 09:44:07.824074 containerd[1470]: time="2025-05-15T09:44:07.824038779Z" level=info msg="CreateContainer within sandbox 
\"9a7bfc02b532897c80d3c345fc20de5c6d3e5a8ec878d1bf9aa3441799d0ef4e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ba695ec79a184c83f12613c1b5e2264dc3255c6e933a87e6a98b0e562428b72a\"" May 15 09:44:07.825625 containerd[1470]: time="2025-05-15T09:44:07.824871726Z" level=info msg="StartContainer for \"ba695ec79a184c83f12613c1b5e2264dc3255c6e933a87e6a98b0e562428b72a\"" May 15 09:44:07.851747 systemd[1]: Started cri-containerd-ba695ec79a184c83f12613c1b5e2264dc3255c6e933a87e6a98b0e562428b72a.scope - libcontainer container ba695ec79a184c83f12613c1b5e2264dc3255c6e933a87e6a98b0e562428b72a. May 15 09:44:07.875102 containerd[1470]: time="2025-05-15T09:44:07.874900911Z" level=info msg="StartContainer for \"ba695ec79a184c83f12613c1b5e2264dc3255c6e933a87e6a98b0e562428b72a\" returns successfully" May 15 09:44:07.941785 kubelet[2543]: E0515 09:44:07.941209 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:44:07.945394 kubelet[2543]: E0515 09:44:07.945370 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:44:07.950857 containerd[1470]: time="2025-05-15T09:44:07.950764954Z" level=info msg="CreateContainer within sandbox \"76b9b00ba34c6120a10f087d51dec24e6f999f90a978bf4126ee33804fe35190\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 09:44:07.956740 kubelet[2543]: I0515 09:44:07.956529 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-cvlbc" podStartSLOduration=1.499153243 podStartE2EDuration="9.956512897s" podCreationTimestamp="2025-05-15 09:43:58 +0000 UTC" firstStartedPulling="2025-05-15 09:43:59.354599542 +0000 UTC m=+6.549603728" lastFinishedPulling="2025-05-15 09:44:07.811959196 +0000 UTC 
m=+15.006963382" observedRunningTime="2025-05-15 09:44:07.955547466 +0000 UTC m=+15.150551652" watchObservedRunningTime="2025-05-15 09:44:07.956512897 +0000 UTC m=+15.151517083" May 15 09:44:08.038901 containerd[1470]: time="2025-05-15T09:44:08.038779447Z" level=info msg="CreateContainer within sandbox \"76b9b00ba34c6120a10f087d51dec24e6f999f90a978bf4126ee33804fe35190\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0377cbb2baf9f6a51e707a61ab111e4eba5706d99247017621f8807413e51300\"" May 15 09:44:08.039893 containerd[1470]: time="2025-05-15T09:44:08.039740596Z" level=info msg="StartContainer for \"0377cbb2baf9f6a51e707a61ab111e4eba5706d99247017621f8807413e51300\"" May 15 09:44:08.093778 systemd[1]: Started cri-containerd-0377cbb2baf9f6a51e707a61ab111e4eba5706d99247017621f8807413e51300.scope - libcontainer container 0377cbb2baf9f6a51e707a61ab111e4eba5706d99247017621f8807413e51300. May 15 09:44:08.134849 containerd[1470]: time="2025-05-15T09:44:08.134797704Z" level=info msg="StartContainer for \"0377cbb2baf9f6a51e707a61ab111e4eba5706d99247017621f8807413e51300\" returns successfully" May 15 09:44:08.150319 systemd[1]: cri-containerd-0377cbb2baf9f6a51e707a61ab111e4eba5706d99247017621f8807413e51300.scope: Deactivated successfully. 
May 15 09:44:08.175125 containerd[1470]: time="2025-05-15T09:44:08.175062519Z" level=info msg="shim disconnected" id=0377cbb2baf9f6a51e707a61ab111e4eba5706d99247017621f8807413e51300 namespace=k8s.io May 15 09:44:08.175125 containerd[1470]: time="2025-05-15T09:44:08.175126080Z" level=warning msg="cleaning up after shim disconnected" id=0377cbb2baf9f6a51e707a61ab111e4eba5706d99247017621f8807413e51300 namespace=k8s.io May 15 09:44:08.175358 containerd[1470]: time="2025-05-15T09:44:08.175137201Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 09:44:08.948995 kubelet[2543]: E0515 09:44:08.948616 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:44:08.948995 kubelet[2543]: E0515 09:44:08.948732 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:44:08.952452 containerd[1470]: time="2025-05-15T09:44:08.952404652Z" level=info msg="CreateContainer within sandbox \"76b9b00ba34c6120a10f087d51dec24e6f999f90a978bf4126ee33804fe35190\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 09:44:08.966555 containerd[1470]: time="2025-05-15T09:44:08.966516077Z" level=info msg="CreateContainer within sandbox \"76b9b00ba34c6120a10f087d51dec24e6f999f90a978bf4126ee33804fe35190\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fefcd6b677732d9b1e09707a5c90eef5ef1b6506a6f16b4aa544c1ac4f9ece7c\"" May 15 09:44:08.967900 containerd[1470]: time="2025-05-15T09:44:08.967134416Z" level=info msg="StartContainer for \"fefcd6b677732d9b1e09707a5c90eef5ef1b6506a6f16b4aa544c1ac4f9ece7c\"" May 15 09:44:09.001270 systemd[1]: Started cri-containerd-fefcd6b677732d9b1e09707a5c90eef5ef1b6506a6f16b4aa544c1ac4f9ece7c.scope - libcontainer container 
fefcd6b677732d9b1e09707a5c90eef5ef1b6506a6f16b4aa544c1ac4f9ece7c. May 15 09:44:09.022935 systemd[1]: cri-containerd-fefcd6b677732d9b1e09707a5c90eef5ef1b6506a6f16b4aa544c1ac4f9ece7c.scope: Deactivated successfully. May 15 09:44:09.024034 containerd[1470]: time="2025-05-15T09:44:09.024006980Z" level=info msg="StartContainer for \"fefcd6b677732d9b1e09707a5c90eef5ef1b6506a6f16b4aa544c1ac4f9ece7c\" returns successfully" May 15 09:44:09.042810 containerd[1470]: time="2025-05-15T09:44:09.042642276Z" level=info msg="shim disconnected" id=fefcd6b677732d9b1e09707a5c90eef5ef1b6506a6f16b4aa544c1ac4f9ece7c namespace=k8s.io May 15 09:44:09.042810 containerd[1470]: time="2025-05-15T09:44:09.042723758Z" level=warning msg="cleaning up after shim disconnected" id=fefcd6b677732d9b1e09707a5c90eef5ef1b6506a6f16b4aa544c1ac4f9ece7c namespace=k8s.io May 15 09:44:09.042810 containerd[1470]: time="2025-05-15T09:44:09.042732359Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 09:44:09.315670 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fefcd6b677732d9b1e09707a5c90eef5ef1b6506a6f16b4aa544c1ac4f9ece7c-rootfs.mount: Deactivated successfully. 
May 15 09:44:09.955493 kubelet[2543]: E0515 09:44:09.955225 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:44:09.960176 containerd[1470]: time="2025-05-15T09:44:09.957353376Z" level=info msg="CreateContainer within sandbox \"76b9b00ba34c6120a10f087d51dec24e6f999f90a978bf4126ee33804fe35190\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 09:44:09.980022 containerd[1470]: time="2025-05-15T09:44:09.979919945Z" level=info msg="CreateContainer within sandbox \"76b9b00ba34c6120a10f087d51dec24e6f999f90a978bf4126ee33804fe35190\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"47a084ff3a5525e44fd9eec3b7fd880d5059bd230996b396382b09d382ccc6d8\"" May 15 09:44:09.982362 containerd[1470]: time="2025-05-15T09:44:09.981769678Z" level=info msg="StartContainer for \"47a084ff3a5525e44fd9eec3b7fd880d5059bd230996b396382b09d382ccc6d8\"" May 15 09:44:10.009724 systemd[1]: Started cri-containerd-47a084ff3a5525e44fd9eec3b7fd880d5059bd230996b396382b09d382ccc6d8.scope - libcontainer container 47a084ff3a5525e44fd9eec3b7fd880d5059bd230996b396382b09d382ccc6d8. May 15 09:44:10.036598 containerd[1470]: time="2025-05-15T09:44:10.036220518Z" level=info msg="StartContainer for \"47a084ff3a5525e44fd9eec3b7fd880d5059bd230996b396382b09d382ccc6d8\" returns successfully" May 15 09:44:10.172908 kubelet[2543]: I0515 09:44:10.172142 2543 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 15 09:44:10.210954 systemd[1]: Created slice kubepods-burstable-poda81dae4a_07da_4431_84f2_04fdefb4d1fd.slice - libcontainer container kubepods-burstable-poda81dae4a_07da_4431_84f2_04fdefb4d1fd.slice. May 15 09:44:10.227955 systemd[1]: Created slice kubepods-burstable-pod04d2cb3f_b1c9_43b4_9ed2_ccac6f759d62.slice - libcontainer container kubepods-burstable-pod04d2cb3f_b1c9_43b4_9ed2_ccac6f759d62.slice. 
May 15 09:44:10.309153 kubelet[2543]: I0515 09:44:10.309112 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a81dae4a-07da-4431-84f2-04fdefb4d1fd-config-volume\") pod \"coredns-6f6b679f8f-h4jh7\" (UID: \"a81dae4a-07da-4431-84f2-04fdefb4d1fd\") " pod="kube-system/coredns-6f6b679f8f-h4jh7" May 15 09:44:10.309153 kubelet[2543]: I0515 09:44:10.309157 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mmbj\" (UniqueName: \"kubernetes.io/projected/04d2cb3f-b1c9-43b4-9ed2-ccac6f759d62-kube-api-access-7mmbj\") pod \"coredns-6f6b679f8f-vj8cp\" (UID: \"04d2cb3f-b1c9-43b4-9ed2-ccac6f759d62\") " pod="kube-system/coredns-6f6b679f8f-vj8cp" May 15 09:44:10.309303 kubelet[2543]: I0515 09:44:10.309186 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pw92\" (UniqueName: \"kubernetes.io/projected/a81dae4a-07da-4431-84f2-04fdefb4d1fd-kube-api-access-9pw92\") pod \"coredns-6f6b679f8f-h4jh7\" (UID: \"a81dae4a-07da-4431-84f2-04fdefb4d1fd\") " pod="kube-system/coredns-6f6b679f8f-h4jh7" May 15 09:44:10.309303 kubelet[2543]: I0515 09:44:10.309208 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04d2cb3f-b1c9-43b4-9ed2-ccac6f759d62-config-volume\") pod \"coredns-6f6b679f8f-vj8cp\" (UID: \"04d2cb3f-b1c9-43b4-9ed2-ccac6f759d62\") " pod="kube-system/coredns-6f6b679f8f-vj8cp" May 15 09:44:10.524124 kubelet[2543]: E0515 09:44:10.523797 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:44:10.525012 containerd[1470]: time="2025-05-15T09:44:10.524976760Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-h4jh7,Uid:a81dae4a-07da-4431-84f2-04fdefb4d1fd,Namespace:kube-system,Attempt:0,}" May 15 09:44:10.530661 kubelet[2543]: E0515 09:44:10.530624 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:44:10.531248 containerd[1470]: time="2025-05-15T09:44:10.531024806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vj8cp,Uid:04d2cb3f-b1c9-43b4-9ed2-ccac6f759d62,Namespace:kube-system,Attempt:0,}" May 15 09:44:10.959078 kubelet[2543]: E0515 09:44:10.959032 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:44:11.961012 kubelet[2543]: E0515 09:44:11.960976 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:44:12.250722 systemd-networkd[1387]: cilium_host: Link UP May 15 09:44:12.250849 systemd-networkd[1387]: cilium_net: Link UP May 15 09:44:12.250851 systemd-networkd[1387]: cilium_net: Gained carrier May 15 09:44:12.251006 systemd-networkd[1387]: cilium_host: Gained carrier May 15 09:44:12.325667 systemd-networkd[1387]: cilium_vxlan: Link UP May 15 09:44:12.325673 systemd-networkd[1387]: cilium_vxlan: Gained carrier May 15 09:44:12.582684 systemd-networkd[1387]: cilium_host: Gained IPv6LL May 15 09:44:12.615605 kernel: NET: Registered PF_ALG protocol family May 15 09:44:12.961648 kubelet[2543]: E0515 09:44:12.961616 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:44:13.134676 systemd-networkd[1387]: cilium_net: Gained IPv6LL May 15 09:44:13.165460 
systemd-networkd[1387]: lxc_health: Link UP May 15 09:44:13.166926 systemd-networkd[1387]: lxc_health: Gained carrier May 15 09:44:13.676585 systemd-networkd[1387]: lxc9b7eb41a24fa: Link UP May 15 09:44:13.686658 kernel: eth0: renamed from tmp14ded May 15 09:44:13.694033 systemd-networkd[1387]: lxc4b15f33ab1b8: Link UP May 15 09:44:13.702539 systemd-networkd[1387]: lxc9b7eb41a24fa: Gained carrier May 15 09:44:13.707590 kernel: eth0: renamed from tmp43a32 May 15 09:44:13.713080 systemd-networkd[1387]: lxc4b15f33ab1b8: Gained carrier May 15 09:44:14.094818 systemd-networkd[1387]: cilium_vxlan: Gained IPv6LL May 15 09:44:14.926830 systemd-networkd[1387]: lxc_health: Gained IPv6LL May 15 09:44:14.956171 kubelet[2543]: E0515 09:44:14.956138 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:44:14.968517 kubelet[2543]: E0515 09:44:14.967535 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:44:14.977987 kubelet[2543]: I0515 09:44:14.977781 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-b8sj9" podStartSLOduration=9.742952469 podStartE2EDuration="16.977766059s" podCreationTimestamp="2025-05-15 09:43:58 +0000 UTC" firstStartedPulling="2025-05-15 09:43:59.049753695 +0000 UTC m=+6.244757881" lastFinishedPulling="2025-05-15 09:44:06.284567325 +0000 UTC m=+13.479571471" observedRunningTime="2025-05-15 09:44:10.976009329 +0000 UTC m=+18.171013475" watchObservedRunningTime="2025-05-15 09:44:14.977766059 +0000 UTC m=+22.172770245" May 15 09:44:15.054763 systemd-networkd[1387]: lxc9b7eb41a24fa: Gained IPv6LL May 15 09:44:15.438701 systemd-networkd[1387]: lxc4b15f33ab1b8: Gained IPv6LL May 15 09:44:15.967956 kubelet[2543]: E0515 09:44:15.967928 2543 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:44:17.134143 containerd[1470]: time="2025-05-15T09:44:17.134021611Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 09:44:17.135161 containerd[1470]: time="2025-05-15T09:44:17.135008151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 09:44:17.135161 containerd[1470]: time="2025-05-15T09:44:17.135080632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 09:44:17.135161 containerd[1470]: time="2025-05-15T09:44:17.135097073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:44:17.135683 containerd[1470]: time="2025-05-15T09:44:17.134393939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 09:44:17.135683 containerd[1470]: time="2025-05-15T09:44:17.135429360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:44:17.135683 containerd[1470]: time="2025-05-15T09:44:17.135514601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:44:17.139714 containerd[1470]: time="2025-05-15T09:44:17.136211415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:44:17.160756 systemd[1]: Started cri-containerd-14ded81114056717317bfef4bf50b913564630d63289e3d4f54f6cba4ceaacaa.scope - libcontainer container 14ded81114056717317bfef4bf50b913564630d63289e3d4f54f6cba4ceaacaa. May 15 09:44:17.162393 systemd[1]: Started cri-containerd-43a32e089e3f001c77c7e6ad93a4b23e1c6990c7c3f21ce604437c1df37d92a6.scope - libcontainer container 43a32e089e3f001c77c7e6ad93a4b23e1c6990c7c3f21ce604437c1df37d92a6. May 15 09:44:17.173506 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 09:44:17.175051 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 09:44:17.190832 containerd[1470]: time="2025-05-15T09:44:17.190392068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-h4jh7,Uid:a81dae4a-07da-4431-84f2-04fdefb4d1fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"43a32e089e3f001c77c7e6ad93a4b23e1c6990c7c3f21ce604437c1df37d92a6\"" May 15 09:44:17.191376 kubelet[2543]: E0515 09:44:17.191135 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:44:17.192952 containerd[1470]: time="2025-05-15T09:44:17.192865038Z" level=info msg="CreateContainer within sandbox \"43a32e089e3f001c77c7e6ad93a4b23e1c6990c7c3f21ce604437c1df37d92a6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 09:44:17.199331 containerd[1470]: time="2025-05-15T09:44:17.199113844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vj8cp,Uid:04d2cb3f-b1c9-43b4-9ed2-ccac6f759d62,Namespace:kube-system,Attempt:0,} returns sandbox id \"14ded81114056717317bfef4bf50b913564630d63289e3d4f54f6cba4ceaacaa\"" May 15 09:44:17.200216 kubelet[2543]: E0515 09:44:17.200132 2543 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:44:17.204263 containerd[1470]: time="2025-05-15T09:44:17.204208987Z" level=info msg="CreateContainer within sandbox \"14ded81114056717317bfef4bf50b913564630d63289e3d4f54f6cba4ceaacaa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 09:44:17.207165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3952945349.mount: Deactivated successfully. May 15 09:44:17.210171 containerd[1470]: time="2025-05-15T09:44:17.210132626Z" level=info msg="CreateContainer within sandbox \"43a32e089e3f001c77c7e6ad93a4b23e1c6990c7c3f21ce604437c1df37d92a6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cff118ef9c66e7d37751fe90bfb6c3fc7487e037dbb56666b0cf77004a6852cf\"" May 15 09:44:17.211880 containerd[1470]: time="2025-05-15T09:44:17.211791300Z" level=info msg="StartContainer for \"cff118ef9c66e7d37751fe90bfb6c3fc7487e037dbb56666b0cf77004a6852cf\"" May 15 09:44:17.222960 containerd[1470]: time="2025-05-15T09:44:17.222902284Z" level=info msg="CreateContainer within sandbox \"14ded81114056717317bfef4bf50b913564630d63289e3d4f54f6cba4ceaacaa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5ab40c178dc7478964fd4ef97148b485aaec8bba935eb2935029f0673ad18081\"" May 15 09:44:17.224763 containerd[1470]: time="2025-05-15T09:44:17.224728681Z" level=info msg="StartContainer for \"5ab40c178dc7478964fd4ef97148b485aaec8bba935eb2935029f0673ad18081\"" May 15 09:44:17.238796 systemd[1]: Started cri-containerd-cff118ef9c66e7d37751fe90bfb6c3fc7487e037dbb56666b0cf77004a6852cf.scope - libcontainer container cff118ef9c66e7d37751fe90bfb6c3fc7487e037dbb56666b0cf77004a6852cf. 
May 15 09:44:17.245398 systemd[1]: Started cri-containerd-5ab40c178dc7478964fd4ef97148b485aaec8bba935eb2935029f0673ad18081.scope - libcontainer container 5ab40c178dc7478964fd4ef97148b485aaec8bba935eb2935029f0673ad18081.
May 15 09:44:17.269961 containerd[1470]: time="2025-05-15T09:44:17.267623786Z" level=info msg="StartContainer for \"cff118ef9c66e7d37751fe90bfb6c3fc7487e037dbb56666b0cf77004a6852cf\" returns successfully"
May 15 09:44:17.272966 containerd[1470]: time="2025-05-15T09:44:17.272916693Z" level=info msg="StartContainer for \"5ab40c178dc7478964fd4ef97148b485aaec8bba935eb2935029f0673ad18081\" returns successfully"
May 15 09:44:17.972687 kubelet[2543]: E0515 09:44:17.972552 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:44:17.977202 kubelet[2543]: E0515 09:44:17.976918 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:44:17.999434 kubelet[2543]: I0515 09:44:17.998346 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-h4jh7" podStartSLOduration=19.998331724 podStartE2EDuration="19.998331724s" podCreationTimestamp="2025-05-15 09:43:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 09:44:17.983927154 +0000 UTC m=+25.178931340" watchObservedRunningTime="2025-05-15 09:44:17.998331724 +0000 UTC m=+25.193335910"
May 15 09:44:18.022528 kubelet[2543]: I0515 09:44:18.022480 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-vj8cp" podStartSLOduration=20.022456754 podStartE2EDuration="20.022456754s" podCreationTimestamp="2025-05-15 09:43:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 09:44:18.010568204 +0000 UTC m=+25.205572390" watchObservedRunningTime="2025-05-15 09:44:18.022456754 +0000 UTC m=+25.217460940"
May 15 09:44:18.978226 kubelet[2543]: E0515 09:44:18.977808 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:44:18.979022 kubelet[2543]: E0515 09:44:18.978955 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:44:19.461272 systemd[1]: Started sshd@7-10.0.0.135:22-10.0.0.1:56430.service - OpenSSH per-connection server daemon (10.0.0.1:56430).
May 15 09:44:19.508456 sshd[3957]: Accepted publickey for core from 10.0.0.1 port 56430 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:44:19.509848 sshd-session[3957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:44:19.513614 systemd-logind[1451]: New session 8 of user core.
May 15 09:44:19.519724 systemd[1]: Started session-8.scope - Session 8 of User core.
May 15 09:44:19.644543 sshd[3959]: Connection closed by 10.0.0.1 port 56430
May 15 09:44:19.645117 sshd-session[3957]: pam_unix(sshd:session): session closed for user core
May 15 09:44:19.648265 systemd[1]: sshd@7-10.0.0.135:22-10.0.0.1:56430.service: Deactivated successfully.
May 15 09:44:19.649945 systemd[1]: session-8.scope: Deactivated successfully.
May 15 09:44:19.650523 systemd-logind[1451]: Session 8 logged out. Waiting for processes to exit.
May 15 09:44:19.651380 systemd-logind[1451]: Removed session 8.
May 15 09:44:19.980202 kubelet[2543]: E0515 09:44:19.979843 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:44:24.654092 systemd[1]: Started sshd@8-10.0.0.135:22-10.0.0.1:37980.service - OpenSSH per-connection server daemon (10.0.0.1:37980).
May 15 09:44:24.696857 sshd[3972]: Accepted publickey for core from 10.0.0.1 port 37980 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:44:24.698057 sshd-session[3972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:44:24.701923 systemd-logind[1451]: New session 9 of user core.
May 15 09:44:24.709721 systemd[1]: Started session-9.scope - Session 9 of User core.
May 15 09:44:24.821605 sshd[3974]: Connection closed by 10.0.0.1 port 37980
May 15 09:44:24.822232 sshd-session[3972]: pam_unix(sshd:session): session closed for user core
May 15 09:44:24.825386 systemd[1]: sshd@8-10.0.0.135:22-10.0.0.1:37980.service: Deactivated successfully.
May 15 09:44:24.827013 systemd[1]: session-9.scope: Deactivated successfully.
May 15 09:44:24.827616 systemd-logind[1451]: Session 9 logged out. Waiting for processes to exit.
May 15 09:44:24.828517 systemd-logind[1451]: Removed session 9.
May 15 09:44:29.838118 systemd[1]: Started sshd@9-10.0.0.135:22-10.0.0.1:37982.service - OpenSSH per-connection server daemon (10.0.0.1:37982).
May 15 09:44:29.880807 sshd[3990]: Accepted publickey for core from 10.0.0.1 port 37982 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:44:29.881980 sshd-session[3990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:44:29.885810 systemd-logind[1451]: New session 10 of user core.
May 15 09:44:29.898747 systemd[1]: Started session-10.scope - Session 10 of User core.
May 15 09:44:30.008914 sshd[3992]: Connection closed by 10.0.0.1 port 37982
May 15 09:44:30.010303 sshd-session[3990]: pam_unix(sshd:session): session closed for user core
May 15 09:44:30.026059 systemd[1]: sshd@9-10.0.0.135:22-10.0.0.1:37982.service: Deactivated successfully.
May 15 09:44:30.027403 systemd[1]: session-10.scope: Deactivated successfully.
May 15 09:44:30.028634 systemd-logind[1451]: Session 10 logged out. Waiting for processes to exit.
May 15 09:44:30.034850 systemd[1]: Started sshd@10-10.0.0.135:22-10.0.0.1:37990.service - OpenSSH per-connection server daemon (10.0.0.1:37990).
May 15 09:44:30.035885 systemd-logind[1451]: Removed session 10.
May 15 09:44:30.074346 sshd[4006]: Accepted publickey for core from 10.0.0.1 port 37990 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:44:30.075651 sshd-session[4006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:44:30.082300 systemd-logind[1451]: New session 11 of user core.
May 15 09:44:30.092800 systemd[1]: Started session-11.scope - Session 11 of User core.
May 15 09:44:30.237174 sshd[4008]: Connection closed by 10.0.0.1 port 37990
May 15 09:44:30.237692 sshd-session[4006]: pam_unix(sshd:session): session closed for user core
May 15 09:44:30.254122 systemd[1]: sshd@10-10.0.0.135:22-10.0.0.1:37990.service: Deactivated successfully.
May 15 09:44:30.256630 systemd[1]: session-11.scope: Deactivated successfully.
May 15 09:44:30.259705 systemd-logind[1451]: Session 11 logged out. Waiting for processes to exit.
May 15 09:44:30.266847 systemd[1]: Started sshd@11-10.0.0.135:22-10.0.0.1:37992.service - OpenSSH per-connection server daemon (10.0.0.1:37992).
May 15 09:44:30.268818 systemd-logind[1451]: Removed session 11.
May 15 09:44:30.309904 sshd[4019]: Accepted publickey for core from 10.0.0.1 port 37992 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:44:30.311142 sshd-session[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:44:30.315098 systemd-logind[1451]: New session 12 of user core.
May 15 09:44:30.323728 systemd[1]: Started session-12.scope - Session 12 of User core.
May 15 09:44:30.434839 sshd[4021]: Connection closed by 10.0.0.1 port 37992
May 15 09:44:30.435962 sshd-session[4019]: pam_unix(sshd:session): session closed for user core
May 15 09:44:30.440473 systemd[1]: sshd@11-10.0.0.135:22-10.0.0.1:37992.service: Deactivated successfully.
May 15 09:44:30.442210 systemd[1]: session-12.scope: Deactivated successfully.
May 15 09:44:30.443317 systemd-logind[1451]: Session 12 logged out. Waiting for processes to exit.
May 15 09:44:30.444399 systemd-logind[1451]: Removed session 12.
May 15 09:44:35.461808 systemd[1]: Started sshd@12-10.0.0.135:22-10.0.0.1:54096.service - OpenSSH per-connection server daemon (10.0.0.1:54096).
May 15 09:44:35.500304 sshd[4034]: Accepted publickey for core from 10.0.0.1 port 54096 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:44:35.501544 sshd-session[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:44:35.505541 systemd-logind[1451]: New session 13 of user core.
May 15 09:44:35.511734 systemd[1]: Started session-13.scope - Session 13 of User core.
May 15 09:44:35.622139 sshd[4036]: Connection closed by 10.0.0.1 port 54096
May 15 09:44:35.622482 sshd-session[4034]: pam_unix(sshd:session): session closed for user core
May 15 09:44:35.626251 systemd[1]: sshd@12-10.0.0.135:22-10.0.0.1:54096.service: Deactivated successfully.
May 15 09:44:35.629094 systemd[1]: session-13.scope: Deactivated successfully.
May 15 09:44:35.629762 systemd-logind[1451]: Session 13 logged out. Waiting for processes to exit.
May 15 09:44:35.630491 systemd-logind[1451]: Removed session 13.
May 15 09:44:40.635253 systemd[1]: Started sshd@13-10.0.0.135:22-10.0.0.1:54106.service - OpenSSH per-connection server daemon (10.0.0.1:54106).
May 15 09:44:40.679578 sshd[4048]: Accepted publickey for core from 10.0.0.1 port 54106 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:44:40.680684 sshd-session[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:44:40.683816 systemd-logind[1451]: New session 14 of user core.
May 15 09:44:40.692719 systemd[1]: Started session-14.scope - Session 14 of User core.
May 15 09:44:40.799666 sshd[4050]: Connection closed by 10.0.0.1 port 54106
May 15 09:44:40.800237 sshd-session[4048]: pam_unix(sshd:session): session closed for user core
May 15 09:44:40.812651 systemd[1]: sshd@13-10.0.0.135:22-10.0.0.1:54106.service: Deactivated successfully.
May 15 09:44:40.814383 systemd[1]: session-14.scope: Deactivated successfully.
May 15 09:44:40.815554 systemd-logind[1451]: Session 14 logged out. Waiting for processes to exit.
May 15 09:44:40.825871 systemd[1]: Started sshd@14-10.0.0.135:22-10.0.0.1:54118.service - OpenSSH per-connection server daemon (10.0.0.1:54118).
May 15 09:44:40.829928 systemd-logind[1451]: Removed session 14.
May 15 09:44:40.865736 sshd[4062]: Accepted publickey for core from 10.0.0.1 port 54118 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:44:40.866831 sshd-session[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:44:40.870291 systemd-logind[1451]: New session 15 of user core.
May 15 09:44:40.880711 systemd[1]: Started session-15.scope - Session 15 of User core.
May 15 09:44:41.074583 sshd[4064]: Connection closed by 10.0.0.1 port 54118
May 15 09:44:41.076389 sshd-session[4062]: pam_unix(sshd:session): session closed for user core
May 15 09:44:41.085007 systemd[1]: sshd@14-10.0.0.135:22-10.0.0.1:54118.service: Deactivated successfully.
May 15 09:44:41.086460 systemd[1]: session-15.scope: Deactivated successfully.
May 15 09:44:41.087723 systemd-logind[1451]: Session 15 logged out. Waiting for processes to exit.
May 15 09:44:41.088885 systemd[1]: Started sshd@15-10.0.0.135:22-10.0.0.1:54120.service - OpenSSH per-connection server daemon (10.0.0.1:54120).
May 15 09:44:41.089502 systemd-logind[1451]: Removed session 15.
May 15 09:44:41.135092 sshd[4074]: Accepted publickey for core from 10.0.0.1 port 54120 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:44:41.136207 sshd-session[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:44:41.139718 systemd-logind[1451]: New session 16 of user core.
May 15 09:44:41.148769 systemd[1]: Started session-16.scope - Session 16 of User core.
May 15 09:44:42.400665 sshd[4076]: Connection closed by 10.0.0.1 port 54120
May 15 09:44:42.401070 sshd-session[4074]: pam_unix(sshd:session): session closed for user core
May 15 09:44:42.411043 systemd[1]: sshd@15-10.0.0.135:22-10.0.0.1:54120.service: Deactivated successfully.
May 15 09:44:42.414717 systemd[1]: session-16.scope: Deactivated successfully.
May 15 09:44:42.419501 systemd-logind[1451]: Session 16 logged out. Waiting for processes to exit.
May 15 09:44:42.428971 systemd[1]: Started sshd@16-10.0.0.135:22-10.0.0.1:54132.service - OpenSSH per-connection server daemon (10.0.0.1:54132).
May 15 09:44:42.430083 systemd-logind[1451]: Removed session 16.
May 15 09:44:42.472137 sshd[4096]: Accepted publickey for core from 10.0.0.1 port 54132 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:44:42.473554 sshd-session[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:44:42.476977 systemd-logind[1451]: New session 17 of user core.
May 15 09:44:42.482709 systemd[1]: Started session-17.scope - Session 17 of User core.
May 15 09:44:42.696508 sshd[4098]: Connection closed by 10.0.0.1 port 54132
May 15 09:44:42.697148 sshd-session[4096]: pam_unix(sshd:session): session closed for user core
May 15 09:44:42.706285 systemd[1]: sshd@16-10.0.0.135:22-10.0.0.1:54132.service: Deactivated successfully.
May 15 09:44:42.709112 systemd[1]: session-17.scope: Deactivated successfully.
May 15 09:44:42.712030 systemd-logind[1451]: Session 17 logged out. Waiting for processes to exit.
May 15 09:44:42.730996 systemd[1]: Started sshd@17-10.0.0.135:22-10.0.0.1:58460.service - OpenSSH per-connection server daemon (10.0.0.1:58460).
May 15 09:44:42.732985 systemd-logind[1451]: Removed session 17.
May 15 09:44:42.770887 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 58460 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:44:42.772295 sshd-session[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:44:42.776051 systemd-logind[1451]: New session 18 of user core.
May 15 09:44:42.781746 systemd[1]: Started session-18.scope - Session 18 of User core.
May 15 09:44:42.892828 sshd[4111]: Connection closed by 10.0.0.1 port 58460
May 15 09:44:42.893886 sshd-session[4109]: pam_unix(sshd:session): session closed for user core
May 15 09:44:42.898181 systemd-logind[1451]: Session 18 logged out. Waiting for processes to exit.
May 15 09:44:42.898496 systemd[1]: sshd@17-10.0.0.135:22-10.0.0.1:58460.service: Deactivated successfully.
May 15 09:44:42.900159 systemd[1]: session-18.scope: Deactivated successfully.
May 15 09:44:42.900833 systemd-logind[1451]: Removed session 18.
May 15 09:44:47.908031 systemd[1]: Started sshd@18-10.0.0.135:22-10.0.0.1:58472.service - OpenSSH per-connection server daemon (10.0.0.1:58472).
May 15 09:44:47.950250 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 58472 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:44:47.951350 sshd-session[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:44:47.954496 systemd-logind[1451]: New session 19 of user core.
May 15 09:44:47.966700 systemd[1]: Started session-19.scope - Session 19 of User core.
May 15 09:44:48.074568 sshd[4131]: Connection closed by 10.0.0.1 port 58472
May 15 09:44:48.074882 sshd-session[4129]: pam_unix(sshd:session): session closed for user core
May 15 09:44:48.077875 systemd[1]: sshd@18-10.0.0.135:22-10.0.0.1:58472.service: Deactivated successfully.
May 15 09:44:48.079503 systemd[1]: session-19.scope: Deactivated successfully.
May 15 09:44:48.082194 systemd-logind[1451]: Session 19 logged out. Waiting for processes to exit.
May 15 09:44:48.083000 systemd-logind[1451]: Removed session 19.
May 15 09:44:53.085015 systemd[1]: Started sshd@19-10.0.0.135:22-10.0.0.1:37200.service - OpenSSH per-connection server daemon (10.0.0.1:37200).
May 15 09:44:53.128175 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 37200 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:44:53.129350 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:44:53.132956 systemd-logind[1451]: New session 20 of user core.
May 15 09:44:53.142778 systemd[1]: Started session-20.scope - Session 20 of User core.
May 15 09:44:53.247313 sshd[4148]: Connection closed by 10.0.0.1 port 37200
May 15 09:44:53.247933 sshd-session[4146]: pam_unix(sshd:session): session closed for user core
May 15 09:44:53.250892 systemd[1]: sshd@19-10.0.0.135:22-10.0.0.1:37200.service: Deactivated successfully.
May 15 09:44:53.253119 systemd[1]: session-20.scope: Deactivated successfully.
May 15 09:44:53.255221 systemd-logind[1451]: Session 20 logged out. Waiting for processes to exit.
May 15 09:44:53.256039 systemd-logind[1451]: Removed session 20.
May 15 09:44:58.258104 systemd[1]: Started sshd@20-10.0.0.135:22-10.0.0.1:37210.service - OpenSSH per-connection server daemon (10.0.0.1:37210).
May 15 09:44:58.300392 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 37210 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:44:58.301542 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:44:58.304973 systemd-logind[1451]: New session 21 of user core.
May 15 09:44:58.318721 systemd[1]: Started session-21.scope - Session 21 of User core.
May 15 09:44:58.424622 sshd[4162]: Connection closed by 10.0.0.1 port 37210
May 15 09:44:58.424942 sshd-session[4160]: pam_unix(sshd:session): session closed for user core
May 15 09:44:58.437027 systemd[1]: sshd@20-10.0.0.135:22-10.0.0.1:37210.service: Deactivated successfully.
May 15 09:44:58.438437 systemd[1]: session-21.scope: Deactivated successfully.
May 15 09:44:58.439632 systemd-logind[1451]: Session 21 logged out. Waiting for processes to exit.
May 15 09:44:58.447843 systemd[1]: Started sshd@21-10.0.0.135:22-10.0.0.1:37220.service - OpenSSH per-connection server daemon (10.0.0.1:37220).
May 15 09:44:58.449238 systemd-logind[1451]: Removed session 21.
May 15 09:44:58.487717 sshd[4174]: Accepted publickey for core from 10.0.0.1 port 37220 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:44:58.488834 sshd-session[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:44:58.492240 systemd-logind[1451]: New session 22 of user core.
May 15 09:44:58.504697 systemd[1]: Started session-22.scope - Session 22 of User core.
May 15 09:45:00.625687 containerd[1470]: time="2025-05-15T09:45:00.625480412Z" level=info msg="StopContainer for \"ba695ec79a184c83f12613c1b5e2264dc3255c6e933a87e6a98b0e562428b72a\" with timeout 30 (s)"
May 15 09:45:00.627408 containerd[1470]: time="2025-05-15T09:45:00.626214356Z" level=info msg="Stop container \"ba695ec79a184c83f12613c1b5e2264dc3255c6e933a87e6a98b0e562428b72a\" with signal terminated"
May 15 09:45:00.641243 systemd[1]: cri-containerd-ba695ec79a184c83f12613c1b5e2264dc3255c6e933a87e6a98b0e562428b72a.scope: Deactivated successfully.
May 15 09:45:00.665455 containerd[1470]: time="2025-05-15T09:45:00.665401749Z" level=info msg="StopContainer for \"47a084ff3a5525e44fd9eec3b7fd880d5059bd230996b396382b09d382ccc6d8\" with timeout 2 (s)"
May 15 09:45:00.665928 containerd[1470]: time="2025-05-15T09:45:00.665903898Z" level=info msg="Stop container \"47a084ff3a5525e44fd9eec3b7fd880d5059bd230996b396382b09d382ccc6d8\" with signal terminated"
May 15 09:45:00.666732 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba695ec79a184c83f12613c1b5e2264dc3255c6e933a87e6a98b0e562428b72a-rootfs.mount: Deactivated successfully.
May 15 09:45:00.672055 containerd[1470]: time="2025-05-15T09:45:00.667626381Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 15 09:45:00.672794 systemd-networkd[1387]: lxc_health: Link DOWN
May 15 09:45:00.672799 systemd-networkd[1387]: lxc_health: Lost carrier
May 15 09:45:00.677116 containerd[1470]: time="2025-05-15T09:45:00.676913820Z" level=info msg="shim disconnected" id=ba695ec79a184c83f12613c1b5e2264dc3255c6e933a87e6a98b0e562428b72a namespace=k8s.io
May 15 09:45:00.677116 containerd[1470]: time="2025-05-15T09:45:00.676965099Z" level=warning msg="cleaning up after shim disconnected" id=ba695ec79a184c83f12613c1b5e2264dc3255c6e933a87e6a98b0e562428b72a namespace=k8s.io
May 15 09:45:00.677116 containerd[1470]: time="2025-05-15T09:45:00.676973219Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 09:45:00.694943 systemd[1]: cri-containerd-47a084ff3a5525e44fd9eec3b7fd880d5059bd230996b396382b09d382ccc6d8.scope: Deactivated successfully.
May 15 09:45:00.695200 systemd[1]: cri-containerd-47a084ff3a5525e44fd9eec3b7fd880d5059bd230996b396382b09d382ccc6d8.scope: Consumed 6.304s CPU time.
May 15 09:45:00.734023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47a084ff3a5525e44fd9eec3b7fd880d5059bd230996b396382b09d382ccc6d8-rootfs.mount: Deactivated successfully.
May 15 09:45:00.740247 containerd[1470]: time="2025-05-15T09:45:00.740189172Z" level=info msg="shim disconnected" id=47a084ff3a5525e44fd9eec3b7fd880d5059bd230996b396382b09d382ccc6d8 namespace=k8s.io
May 15 09:45:00.740247 containerd[1470]: time="2025-05-15T09:45:00.740244370Z" level=warning msg="cleaning up after shim disconnected" id=47a084ff3a5525e44fd9eec3b7fd880d5059bd230996b396382b09d382ccc6d8 namespace=k8s.io
May 15 09:45:00.740247 containerd[1470]: time="2025-05-15T09:45:00.740252810Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 09:45:00.741146 containerd[1470]: time="2025-05-15T09:45:00.741119511Z" level=info msg="StopContainer for \"ba695ec79a184c83f12613c1b5e2264dc3255c6e933a87e6a98b0e562428b72a\" returns successfully"
May 15 09:45:00.743416 containerd[1470]: time="2025-05-15T09:45:00.743381983Z" level=info msg="StopPodSandbox for \"9a7bfc02b532897c80d3c345fc20de5c6d3e5a8ec878d1bf9aa3441799d0ef4e\""
May 15 09:45:00.745559 containerd[1470]: time="2025-05-15T09:45:00.745524736Z" level=info msg="Container to stop \"ba695ec79a184c83f12613c1b5e2264dc3255c6e933a87e6a98b0e562428b72a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 09:45:00.747105 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a7bfc02b532897c80d3c345fc20de5c6d3e5a8ec878d1bf9aa3441799d0ef4e-shm.mount: Deactivated successfully.
May 15 09:45:00.753316 systemd[1]: cri-containerd-9a7bfc02b532897c80d3c345fc20de5c6d3e5a8ec878d1bf9aa3441799d0ef4e.scope: Deactivated successfully.
May 15 09:45:00.754945 containerd[1470]: time="2025-05-15T09:45:00.754910693Z" level=info msg="StopContainer for \"47a084ff3a5525e44fd9eec3b7fd880d5059bd230996b396382b09d382ccc6d8\" returns successfully"
May 15 09:45:00.755988 containerd[1470]: time="2025-05-15T09:45:00.755958551Z" level=info msg="StopPodSandbox for \"76b9b00ba34c6120a10f087d51dec24e6f999f90a978bf4126ee33804fe35190\""
May 15 09:45:00.756206 containerd[1470]: time="2025-05-15T09:45:00.756107067Z" level=info msg="Container to stop \"0ee9fbf3e78eab71898ebada863fd79439c70eb206abf9b91656d4d39c18645c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 09:45:00.756206 containerd[1470]: time="2025-05-15T09:45:00.756131627Z" level=info msg="Container to stop \"720ebb1aac810dda53988d7c24f4d29a258d7e7170414776d2bfa84e8e396c75\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 09:45:00.756499 containerd[1470]: time="2025-05-15T09:45:00.756462580Z" level=info msg="Container to stop \"0377cbb2baf9f6a51e707a61ab111e4eba5706d99247017621f8807413e51300\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 09:45:00.756499 containerd[1470]: time="2025-05-15T09:45:00.756486259Z" level=info msg="Container to stop \"fefcd6b677732d9b1e09707a5c90eef5ef1b6506a6f16b4aa544c1ac4f9ece7c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 09:45:00.756499 containerd[1470]: time="2025-05-15T09:45:00.756495779Z" level=info msg="Container to stop \"47a084ff3a5525e44fd9eec3b7fd880d5059bd230996b396382b09d382ccc6d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 09:45:00.758783 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-76b9b00ba34c6120a10f087d51dec24e6f999f90a978bf4126ee33804fe35190-shm.mount: Deactivated successfully.
May 15 09:45:00.762742 systemd[1]: cri-containerd-76b9b00ba34c6120a10f087d51dec24e6f999f90a978bf4126ee33804fe35190.scope: Deactivated successfully.
May 15 09:45:00.777246 containerd[1470]: time="2025-05-15T09:45:00.777191571Z" level=info msg="shim disconnected" id=9a7bfc02b532897c80d3c345fc20de5c6d3e5a8ec878d1bf9aa3441799d0ef4e namespace=k8s.io
May 15 09:45:00.777641 containerd[1470]: time="2025-05-15T09:45:00.777617162Z" level=warning msg="cleaning up after shim disconnected" id=9a7bfc02b532897c80d3c345fc20de5c6d3e5a8ec878d1bf9aa3441799d0ef4e namespace=k8s.io
May 15 09:45:00.777722 containerd[1470]: time="2025-05-15T09:45:00.777707960Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 09:45:00.790777 containerd[1470]: time="2025-05-15T09:45:00.790337727Z" level=info msg="shim disconnected" id=76b9b00ba34c6120a10f087d51dec24e6f999f90a978bf4126ee33804fe35190 namespace=k8s.io
May 15 09:45:00.790777 containerd[1470]: time="2025-05-15T09:45:00.790386406Z" level=warning msg="cleaning up after shim disconnected" id=76b9b00ba34c6120a10f087d51dec24e6f999f90a978bf4126ee33804fe35190 namespace=k8s.io
May 15 09:45:00.790777 containerd[1470]: time="2025-05-15T09:45:00.790393846Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 09:45:00.796898 containerd[1470]: time="2025-05-15T09:45:00.796754268Z" level=info msg="TearDown network for sandbox \"9a7bfc02b532897c80d3c345fc20de5c6d3e5a8ec878d1bf9aa3441799d0ef4e\" successfully"
May 15 09:45:00.796898 containerd[1470]: time="2025-05-15T09:45:00.796786828Z" level=info msg="StopPodSandbox for \"9a7bfc02b532897c80d3c345fc20de5c6d3e5a8ec878d1bf9aa3441799d0ef4e\" returns successfully"
May 15 09:45:00.813045 containerd[1470]: time="2025-05-15T09:45:00.812995517Z" level=info msg="TearDown network for sandbox \"76b9b00ba34c6120a10f087d51dec24e6f999f90a978bf4126ee33804fe35190\" successfully"
May 15 09:45:00.813045 containerd[1470]: time="2025-05-15T09:45:00.813037036Z" level=info msg="StopPodSandbox for \"76b9b00ba34c6120a10f087d51dec24e6f999f90a978bf4126ee33804fe35190\" returns successfully"
May 15 09:45:00.914093 kubelet[2543]: I0515 09:45:00.913966 2543 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-cni-path\") pod \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\" (UID: \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") "
May 15 09:45:00.914093 kubelet[2543]: I0515 09:45:00.914011 2543 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02528fde-a7fa-4469-a65d-322d4dc5bffd-cilium-config-path\") pod \"02528fde-a7fa-4469-a65d-322d4dc5bffd\" (UID: \"02528fde-a7fa-4469-a65d-322d4dc5bffd\") "
May 15 09:45:00.914093 kubelet[2543]: I0515 09:45:00.914041 2543 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-hubble-tls\") pod \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\" (UID: \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") "
May 15 09:45:00.914093 kubelet[2543]: I0515 09:45:00.914058 2543 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqblg\" (UniqueName: \"kubernetes.io/projected/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-kube-api-access-lqblg\") pod \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\" (UID: \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") "
May 15 09:45:00.914093 kubelet[2543]: I0515 09:45:00.914074 2543 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-cilium-cgroup\") pod \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\" (UID: \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") "
May 15 09:45:00.914093 kubelet[2543]: I0515 09:45:00.914090 2543 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-xtables-lock\") pod \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\" (UID: \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") "
May 15 09:45:00.914527 kubelet[2543]: I0515 09:45:00.914106 2543 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-cilium-config-path\") pod \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\" (UID: \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") "
May 15 09:45:00.914527 kubelet[2543]: I0515 09:45:00.914122 2543 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjvdp\" (UniqueName: \"kubernetes.io/projected/02528fde-a7fa-4469-a65d-322d4dc5bffd-kube-api-access-sjvdp\") pod \"02528fde-a7fa-4469-a65d-322d4dc5bffd\" (UID: \"02528fde-a7fa-4469-a65d-322d4dc5bffd\") "
May 15 09:45:00.914527 kubelet[2543]: I0515 09:45:00.914136 2543 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-hostproc\") pod \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\" (UID: \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") "
May 15 09:45:00.914527 kubelet[2543]: I0515 09:45:00.914150 2543 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-lib-modules\") pod \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\" (UID: \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") "
May 15 09:45:00.914527 kubelet[2543]: I0515 09:45:00.914165 2543 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-bpf-maps\") pod \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\" (UID: \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") "
May 15 09:45:00.914527 kubelet[2543]: I0515 09:45:00.914179 2543 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-etc-cni-netd\") pod \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\" (UID: \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") "
May 15 09:45:00.914674 kubelet[2543]: I0515 09:45:00.914194 2543 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-cilium-run\") pod \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\" (UID: \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") "
May 15 09:45:00.914674 kubelet[2543]: I0515 09:45:00.914209 2543 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-host-proc-sys-kernel\") pod \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\" (UID: \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") "
May 15 09:45:00.914674 kubelet[2543]: I0515 09:45:00.914226 2543 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-clustermesh-secrets\") pod \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\" (UID: \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") "
May 15 09:45:00.914674 kubelet[2543]: I0515 09:45:00.914241 2543 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-host-proc-sys-net\") pod \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\" (UID: \"c41ced47-fd5a-4e63-a558-e0c2cd3beb89\") "
May 15 09:45:00.918234 kubelet[2543]: I0515 09:45:00.918203 2543 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c41ced47-fd5a-4e63-a558-e0c2cd3beb89" (UID: "c41ced47-fd5a-4e63-a558-e0c2cd3beb89"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 09:45:00.918311 kubelet[2543]: I0515 09:45:00.918203 2543 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-cni-path" (OuterVolumeSpecName: "cni-path") pod "c41ced47-fd5a-4e63-a558-e0c2cd3beb89" (UID: "c41ced47-fd5a-4e63-a558-e0c2cd3beb89"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 09:45:00.918311 kubelet[2543]: I0515 09:45:00.918272 2543 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c41ced47-fd5a-4e63-a558-e0c2cd3beb89" (UID: "c41ced47-fd5a-4e63-a558-e0c2cd3beb89"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 09:45:00.918311 kubelet[2543]: I0515 09:45:00.918287 2543 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c41ced47-fd5a-4e63-a558-e0c2cd3beb89" (UID: "c41ced47-fd5a-4e63-a558-e0c2cd3beb89"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 09:45:00.918797 kubelet[2543]: I0515 09:45:00.918768 2543 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c41ced47-fd5a-4e63-a558-e0c2cd3beb89" (UID: "c41ced47-fd5a-4e63-a558-e0c2cd3beb89"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 09:45:00.919644 kubelet[2543]: I0515 09:45:00.918875 2543 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-hostproc" (OuterVolumeSpecName: "hostproc") pod "c41ced47-fd5a-4e63-a558-e0c2cd3beb89" (UID: "c41ced47-fd5a-4e63-a558-e0c2cd3beb89"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 09:45:00.919644 kubelet[2543]: I0515 09:45:00.918897 2543 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c41ced47-fd5a-4e63-a558-e0c2cd3beb89" (UID: "c41ced47-fd5a-4e63-a558-e0c2cd3beb89"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 09:45:00.919644 kubelet[2543]: I0515 09:45:00.918911 2543 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c41ced47-fd5a-4e63-a558-e0c2cd3beb89" (UID: "c41ced47-fd5a-4e63-a558-e0c2cd3beb89"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 09:45:00.920284 kubelet[2543]: I0515 09:45:00.920250 2543 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c41ced47-fd5a-4e63-a558-e0c2cd3beb89" (UID: "c41ced47-fd5a-4e63-a558-e0c2cd3beb89"). InnerVolumeSpecName "cilium-cgroup".
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 09:45:00.920415 kubelet[2543]: I0515 09:45:00.920389 2543 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c41ced47-fd5a-4e63-a558-e0c2cd3beb89" (UID: "c41ced47-fd5a-4e63-a558-e0c2cd3beb89"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 09:45:00.920495 kubelet[2543]: I0515 09:45:00.920471 2543 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02528fde-a7fa-4469-a65d-322d4dc5bffd-kube-api-access-sjvdp" (OuterVolumeSpecName: "kube-api-access-sjvdp") pod "02528fde-a7fa-4469-a65d-322d4dc5bffd" (UID: "02528fde-a7fa-4469-a65d-322d4dc5bffd"). InnerVolumeSpecName "kube-api-access-sjvdp". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 09:45:00.921508 kubelet[2543]: I0515 09:45:00.921467 2543 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02528fde-a7fa-4469-a65d-322d4dc5bffd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "02528fde-a7fa-4469-a65d-322d4dc5bffd" (UID: "02528fde-a7fa-4469-a65d-322d4dc5bffd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 09:45:00.922226 kubelet[2543]: I0515 09:45:00.922176 2543 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c41ced47-fd5a-4e63-a558-e0c2cd3beb89" (UID: "c41ced47-fd5a-4e63-a558-e0c2cd3beb89"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 09:45:00.923472 kubelet[2543]: I0515 09:45:00.923438 2543 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-kube-api-access-lqblg" (OuterVolumeSpecName: "kube-api-access-lqblg") pod "c41ced47-fd5a-4e63-a558-e0c2cd3beb89" (UID: "c41ced47-fd5a-4e63-a558-e0c2cd3beb89"). InnerVolumeSpecName "kube-api-access-lqblg". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 09:45:00.923744 kubelet[2543]: I0515 09:45:00.923715 2543 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c41ced47-fd5a-4e63-a558-e0c2cd3beb89" (UID: "c41ced47-fd5a-4e63-a558-e0c2cd3beb89"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 09:45:00.924674 kubelet[2543]: I0515 09:45:00.922566 2543 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c41ced47-fd5a-4e63-a558-e0c2cd3beb89" (UID: "c41ced47-fd5a-4e63-a558-e0c2cd3beb89"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 09:45:01.014784 kubelet[2543]: I0515 09:45:01.014733 2543 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-sjvdp\" (UniqueName: \"kubernetes.io/projected/02528fde-a7fa-4469-a65d-322d4dc5bffd-kube-api-access-sjvdp\") on node \"localhost\" DevicePath \"\"" May 15 09:45:01.014784 kubelet[2543]: I0515 09:45:01.014775 2543 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-hostproc\") on node \"localhost\" DevicePath \"\"" May 15 09:45:01.014784 kubelet[2543]: I0515 09:45:01.014785 2543 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-lib-modules\") on node \"localhost\" DevicePath \"\"" May 15 09:45:01.014784 kubelet[2543]: I0515 09:45:01.014795 2543 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 09:45:01.014991 kubelet[2543]: I0515 09:45:01.014803 2543 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 15 09:45:01.014991 kubelet[2543]: I0515 09:45:01.014810 2543 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 15 09:45:01.014991 kubelet[2543]: I0515 09:45:01.014818 2543 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-cilium-run\") on node \"localhost\" DevicePath \"\"" May 15 09:45:01.014991 kubelet[2543]: I0515 09:45:01.014825 2543 
reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 15 09:45:01.014991 kubelet[2543]: I0515 09:45:01.014833 2543 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 15 09:45:01.014991 kubelet[2543]: I0515 09:45:01.014840 2543 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 15 09:45:01.014991 kubelet[2543]: I0515 09:45:01.014847 2543 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-cni-path\") on node \"localhost\" DevicePath \"\"" May 15 09:45:01.014991 kubelet[2543]: I0515 09:45:01.014855 2543 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02528fde-a7fa-4469-a65d-322d4dc5bffd-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 09:45:01.015149 kubelet[2543]: I0515 09:45:01.014863 2543 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 15 09:45:01.015149 kubelet[2543]: I0515 09:45:01.014871 2543 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-lqblg\" (UniqueName: \"kubernetes.io/projected/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-kube-api-access-lqblg\") on node \"localhost\" DevicePath \"\"" May 15 09:45:01.015149 kubelet[2543]: I0515 09:45:01.014878 2543 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 15 09:45:01.015149 kubelet[2543]: I0515 09:45:01.014886 2543 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c41ced47-fd5a-4e63-a558-e0c2cd3beb89-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 15 09:45:01.078831 kubelet[2543]: I0515 09:45:01.078774 2543 scope.go:117] "RemoveContainer" containerID="ba695ec79a184c83f12613c1b5e2264dc3255c6e933a87e6a98b0e562428b72a" May 15 09:45:01.080035 containerd[1470]: time="2025-05-15T09:45:01.079768971Z" level=info msg="RemoveContainer for \"ba695ec79a184c83f12613c1b5e2264dc3255c6e933a87e6a98b0e562428b72a\"" May 15 09:45:01.083025 systemd[1]: Removed slice kubepods-besteffort-pod02528fde_a7fa_4469_a65d_322d4dc5bffd.slice - libcontainer container kubepods-besteffort-pod02528fde_a7fa_4469_a65d_322d4dc5bffd.slice. May 15 09:45:01.086518 containerd[1470]: time="2025-05-15T09:45:01.085326736Z" level=info msg="RemoveContainer for \"ba695ec79a184c83f12613c1b5e2264dc3255c6e933a87e6a98b0e562428b72a\" returns successfully" May 15 09:45:01.086518 containerd[1470]: time="2025-05-15T09:45:01.085690968Z" level=error msg="ContainerStatus for \"ba695ec79a184c83f12613c1b5e2264dc3255c6e933a87e6a98b0e562428b72a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba695ec79a184c83f12613c1b5e2264dc3255c6e933a87e6a98b0e562428b72a\": not found" May 15 09:45:01.085663 systemd[1]: Removed slice kubepods-burstable-podc41ced47_fd5a_4e63_a558_e0c2cd3beb89.slice - libcontainer container kubepods-burstable-podc41ced47_fd5a_4e63_a558_e0c2cd3beb89.slice. 
May 15 09:45:01.086667 kubelet[2543]: I0515 09:45:01.085510 2543 scope.go:117] "RemoveContainer" containerID="ba695ec79a184c83f12613c1b5e2264dc3255c6e933a87e6a98b0e562428b72a" May 15 09:45:01.085739 systemd[1]: kubepods-burstable-podc41ced47_fd5a_4e63_a558_e0c2cd3beb89.slice: Consumed 6.437s CPU time. May 15 09:45:01.098052 kubelet[2543]: E0515 09:45:01.097833 2543 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba695ec79a184c83f12613c1b5e2264dc3255c6e933a87e6a98b0e562428b72a\": not found" containerID="ba695ec79a184c83f12613c1b5e2264dc3255c6e933a87e6a98b0e562428b72a" May 15 09:45:01.098052 kubelet[2543]: I0515 09:45:01.097872 2543 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba695ec79a184c83f12613c1b5e2264dc3255c6e933a87e6a98b0e562428b72a"} err="failed to get container status \"ba695ec79a184c83f12613c1b5e2264dc3255c6e933a87e6a98b0e562428b72a\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba695ec79a184c83f12613c1b5e2264dc3255c6e933a87e6a98b0e562428b72a\": not found" May 15 09:45:01.098052 kubelet[2543]: I0515 09:45:01.097951 2543 scope.go:117] "RemoveContainer" containerID="47a084ff3a5525e44fd9eec3b7fd880d5059bd230996b396382b09d382ccc6d8" May 15 09:45:01.099067 containerd[1470]: time="2025-05-15T09:45:01.099041050Z" level=info msg="RemoveContainer for \"47a084ff3a5525e44fd9eec3b7fd880d5059bd230996b396382b09d382ccc6d8\"" May 15 09:45:01.102382 containerd[1470]: time="2025-05-15T09:45:01.101854992Z" level=info msg="RemoveContainer for \"47a084ff3a5525e44fd9eec3b7fd880d5059bd230996b396382b09d382ccc6d8\" returns successfully" May 15 09:45:01.102997 kubelet[2543]: I0515 09:45:01.102695 2543 scope.go:117] "RemoveContainer" containerID="fefcd6b677732d9b1e09707a5c90eef5ef1b6506a6f16b4aa544c1ac4f9ece7c" May 15 09:45:01.103566 containerd[1470]: time="2025-05-15T09:45:01.103500997Z" 
level=info msg="RemoveContainer for \"fefcd6b677732d9b1e09707a5c90eef5ef1b6506a6f16b4aa544c1ac4f9ece7c\"" May 15 09:45:01.105591 containerd[1470]: time="2025-05-15T09:45:01.105549835Z" level=info msg="RemoveContainer for \"fefcd6b677732d9b1e09707a5c90eef5ef1b6506a6f16b4aa544c1ac4f9ece7c\" returns successfully" May 15 09:45:01.105840 kubelet[2543]: I0515 09:45:01.105752 2543 scope.go:117] "RemoveContainer" containerID="0377cbb2baf9f6a51e707a61ab111e4eba5706d99247017621f8807413e51300" May 15 09:45:01.107026 containerd[1470]: time="2025-05-15T09:45:01.106804408Z" level=info msg="RemoveContainer for \"0377cbb2baf9f6a51e707a61ab111e4eba5706d99247017621f8807413e51300\"" May 15 09:45:01.109044 containerd[1470]: time="2025-05-15T09:45:01.108967243Z" level=info msg="RemoveContainer for \"0377cbb2baf9f6a51e707a61ab111e4eba5706d99247017621f8807413e51300\" returns successfully" May 15 09:45:01.109172 kubelet[2543]: I0515 09:45:01.109099 2543 scope.go:117] "RemoveContainer" containerID="720ebb1aac810dda53988d7c24f4d29a258d7e7170414776d2bfa84e8e396c75" May 15 09:45:01.110252 containerd[1470]: time="2025-05-15T09:45:01.110036541Z" level=info msg="RemoveContainer for \"720ebb1aac810dda53988d7c24f4d29a258d7e7170414776d2bfa84e8e396c75\"" May 15 09:45:01.112073 containerd[1470]: time="2025-05-15T09:45:01.112045579Z" level=info msg="RemoveContainer for \"720ebb1aac810dda53988d7c24f4d29a258d7e7170414776d2bfa84e8e396c75\" returns successfully" May 15 09:45:01.112296 kubelet[2543]: I0515 09:45:01.112277 2543 scope.go:117] "RemoveContainer" containerID="0ee9fbf3e78eab71898ebada863fd79439c70eb206abf9b91656d4d39c18645c" May 15 09:45:01.113333 containerd[1470]: time="2025-05-15T09:45:01.113130517Z" level=info msg="RemoveContainer for \"0ee9fbf3e78eab71898ebada863fd79439c70eb206abf9b91656d4d39c18645c\"" May 15 09:45:01.118430 containerd[1470]: time="2025-05-15T09:45:01.118340968Z" level=info msg="RemoveContainer for \"0ee9fbf3e78eab71898ebada863fd79439c70eb206abf9b91656d4d39c18645c\" returns 
successfully" May 15 09:45:01.118657 kubelet[2543]: I0515 09:45:01.118508 2543 scope.go:117] "RemoveContainer" containerID="47a084ff3a5525e44fd9eec3b7fd880d5059bd230996b396382b09d382ccc6d8" May 15 09:45:01.118747 containerd[1470]: time="2025-05-15T09:45:01.118706001Z" level=error msg="ContainerStatus for \"47a084ff3a5525e44fd9eec3b7fd880d5059bd230996b396382b09d382ccc6d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47a084ff3a5525e44fd9eec3b7fd880d5059bd230996b396382b09d382ccc6d8\": not found" May 15 09:45:01.118861 kubelet[2543]: E0515 09:45:01.118838 2543 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"47a084ff3a5525e44fd9eec3b7fd880d5059bd230996b396382b09d382ccc6d8\": not found" containerID="47a084ff3a5525e44fd9eec3b7fd880d5059bd230996b396382b09d382ccc6d8" May 15 09:45:01.118925 kubelet[2543]: I0515 09:45:01.118869 2543 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"47a084ff3a5525e44fd9eec3b7fd880d5059bd230996b396382b09d382ccc6d8"} err="failed to get container status \"47a084ff3a5525e44fd9eec3b7fd880d5059bd230996b396382b09d382ccc6d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"47a084ff3a5525e44fd9eec3b7fd880d5059bd230996b396382b09d382ccc6d8\": not found" May 15 09:45:01.118925 kubelet[2543]: I0515 09:45:01.118891 2543 scope.go:117] "RemoveContainer" containerID="fefcd6b677732d9b1e09707a5c90eef5ef1b6506a6f16b4aa544c1ac4f9ece7c" May 15 09:45:01.119083 containerd[1470]: time="2025-05-15T09:45:01.119051793Z" level=error msg="ContainerStatus for \"fefcd6b677732d9b1e09707a5c90eef5ef1b6506a6f16b4aa544c1ac4f9ece7c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fefcd6b677732d9b1e09707a5c90eef5ef1b6506a6f16b4aa544c1ac4f9ece7c\": not found" May 15 09:45:01.119475 kubelet[2543]: E0515 
09:45:01.119189 2543 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fefcd6b677732d9b1e09707a5c90eef5ef1b6506a6f16b4aa544c1ac4f9ece7c\": not found" containerID="fefcd6b677732d9b1e09707a5c90eef5ef1b6506a6f16b4aa544c1ac4f9ece7c" May 15 09:45:01.119475 kubelet[2543]: I0515 09:45:01.119217 2543 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fefcd6b677732d9b1e09707a5c90eef5ef1b6506a6f16b4aa544c1ac4f9ece7c"} err="failed to get container status \"fefcd6b677732d9b1e09707a5c90eef5ef1b6506a6f16b4aa544c1ac4f9ece7c\": rpc error: code = NotFound desc = an error occurred when try to find container \"fefcd6b677732d9b1e09707a5c90eef5ef1b6506a6f16b4aa544c1ac4f9ece7c\": not found" May 15 09:45:01.119475 kubelet[2543]: I0515 09:45:01.119234 2543 scope.go:117] "RemoveContainer" containerID="0377cbb2baf9f6a51e707a61ab111e4eba5706d99247017621f8807413e51300" May 15 09:45:01.119616 containerd[1470]: time="2025-05-15T09:45:01.119408466Z" level=error msg="ContainerStatus for \"0377cbb2baf9f6a51e707a61ab111e4eba5706d99247017621f8807413e51300\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0377cbb2baf9f6a51e707a61ab111e4eba5706d99247017621f8807413e51300\": not found" May 15 09:45:01.119701 kubelet[2543]: E0515 09:45:01.119522 2543 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0377cbb2baf9f6a51e707a61ab111e4eba5706d99247017621f8807413e51300\": not found" containerID="0377cbb2baf9f6a51e707a61ab111e4eba5706d99247017621f8807413e51300" May 15 09:45:01.119701 kubelet[2543]: I0515 09:45:01.119543 2543 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0377cbb2baf9f6a51e707a61ab111e4eba5706d99247017621f8807413e51300"} err="failed to get container status 
\"0377cbb2baf9f6a51e707a61ab111e4eba5706d99247017621f8807413e51300\": rpc error: code = NotFound desc = an error occurred when try to find container \"0377cbb2baf9f6a51e707a61ab111e4eba5706d99247017621f8807413e51300\": not found" May 15 09:45:01.119701 kubelet[2543]: I0515 09:45:01.119560 2543 scope.go:117] "RemoveContainer" containerID="720ebb1aac810dda53988d7c24f4d29a258d7e7170414776d2bfa84e8e396c75" May 15 09:45:01.119763 containerd[1470]: time="2025-05-15T09:45:01.119724099Z" level=error msg="ContainerStatus for \"720ebb1aac810dda53988d7c24f4d29a258d7e7170414776d2bfa84e8e396c75\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"720ebb1aac810dda53988d7c24f4d29a258d7e7170414776d2bfa84e8e396c75\": not found" May 15 09:45:01.119887 kubelet[2543]: E0515 09:45:01.119825 2543 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"720ebb1aac810dda53988d7c24f4d29a258d7e7170414776d2bfa84e8e396c75\": not found" containerID="720ebb1aac810dda53988d7c24f4d29a258d7e7170414776d2bfa84e8e396c75" May 15 09:45:01.119887 kubelet[2543]: I0515 09:45:01.119855 2543 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"720ebb1aac810dda53988d7c24f4d29a258d7e7170414776d2bfa84e8e396c75"} err="failed to get container status \"720ebb1aac810dda53988d7c24f4d29a258d7e7170414776d2bfa84e8e396c75\": rpc error: code = NotFound desc = an error occurred when try to find container \"720ebb1aac810dda53988d7c24f4d29a258d7e7170414776d2bfa84e8e396c75\": not found" May 15 09:45:01.119887 kubelet[2543]: I0515 09:45:01.119871 2543 scope.go:117] "RemoveContainer" containerID="0ee9fbf3e78eab71898ebada863fd79439c70eb206abf9b91656d4d39c18645c" May 15 09:45:01.120076 containerd[1470]: time="2025-05-15T09:45:01.120029573Z" level=error msg="ContainerStatus for \"0ee9fbf3e78eab71898ebada863fd79439c70eb206abf9b91656d4d39c18645c\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ee9fbf3e78eab71898ebada863fd79439c70eb206abf9b91656d4d39c18645c\": not found" May 15 09:45:01.120141 kubelet[2543]: E0515 09:45:01.120121 2543 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ee9fbf3e78eab71898ebada863fd79439c70eb206abf9b91656d4d39c18645c\": not found" containerID="0ee9fbf3e78eab71898ebada863fd79439c70eb206abf9b91656d4d39c18645c" May 15 09:45:01.120170 kubelet[2543]: I0515 09:45:01.120147 2543 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ee9fbf3e78eab71898ebada863fd79439c70eb206abf9b91656d4d39c18645c"} err="failed to get container status \"0ee9fbf3e78eab71898ebada863fd79439c70eb206abf9b91656d4d39c18645c\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ee9fbf3e78eab71898ebada863fd79439c70eb206abf9b91656d4d39c18645c\": not found" May 15 09:45:01.639127 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a7bfc02b532897c80d3c345fc20de5c6d3e5a8ec878d1bf9aa3441799d0ef4e-rootfs.mount: Deactivated successfully. May 15 09:45:01.639229 systemd[1]: var-lib-kubelet-pods-02528fde\x2da7fa\x2d4469\x2da65d\x2d322d4dc5bffd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsjvdp.mount: Deactivated successfully. May 15 09:45:01.639290 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76b9b00ba34c6120a10f087d51dec24e6f999f90a978bf4126ee33804fe35190-rootfs.mount: Deactivated successfully. May 15 09:45:01.639354 systemd[1]: var-lib-kubelet-pods-c41ced47\x2dfd5a\x2d4e63\x2da558\x2de0c2cd3beb89-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlqblg.mount: Deactivated successfully. 
May 15 09:45:01.639411 systemd[1]: var-lib-kubelet-pods-c41ced47\x2dfd5a\x2d4e63\x2da558\x2de0c2cd3beb89-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 15 09:45:01.639458 systemd[1]: var-lib-kubelet-pods-c41ced47\x2dfd5a\x2d4e63\x2da558\x2de0c2cd3beb89-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 15 09:45:01.887401 kubelet[2543]: E0515 09:45:01.887359 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:45:02.591994 sshd[4176]: Connection closed by 10.0.0.1 port 37220 May 15 09:45:02.592492 sshd-session[4174]: pam_unix(sshd:session): session closed for user core May 15 09:45:02.603160 systemd[1]: sshd@21-10.0.0.135:22-10.0.0.1:37220.service: Deactivated successfully. May 15 09:45:02.604622 systemd[1]: session-22.scope: Deactivated successfully. May 15 09:45:02.604788 systemd[1]: session-22.scope: Consumed 1.459s CPU time. May 15 09:45:02.606142 systemd-logind[1451]: Session 22 logged out. Waiting for processes to exit. May 15 09:45:02.625871 systemd[1]: Started sshd@22-10.0.0.135:22-10.0.0.1:60804.service - OpenSSH per-connection server daemon (10.0.0.1:60804). May 15 09:45:02.626803 systemd-logind[1451]: Removed session 22. May 15 09:45:02.676935 sshd[4337]: Accepted publickey for core from 10.0.0.1 port 60804 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:45:02.678088 sshd-session[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:45:02.682155 systemd-logind[1451]: New session 23 of user core. May 15 09:45:02.688793 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 15 09:45:02.889583 kubelet[2543]: I0515 09:45:02.889525 2543 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02528fde-a7fa-4469-a65d-322d4dc5bffd" path="/var/lib/kubelet/pods/02528fde-a7fa-4469-a65d-322d4dc5bffd/volumes" May 15 09:45:02.889966 kubelet[2543]: I0515 09:45:02.889943 2543 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c41ced47-fd5a-4e63-a558-e0c2cd3beb89" path="/var/lib/kubelet/pods/c41ced47-fd5a-4e63-a558-e0c2cd3beb89/volumes" May 15 09:45:02.937615 kubelet[2543]: E0515 09:45:02.937412 2543 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 09:45:03.747476 sshd[4339]: Connection closed by 10.0.0.1 port 60804 May 15 09:45:03.745387 sshd-session[4337]: pam_unix(sshd:session): session closed for user core May 15 09:45:03.755544 systemd[1]: sshd@22-10.0.0.135:22-10.0.0.1:60804.service: Deactivated successfully. May 15 09:45:03.760424 systemd[1]: session-23.scope: Deactivated successfully. May 15 09:45:03.762807 systemd-logind[1451]: Session 23 logged out. Waiting for processes to exit. 
May 15 09:45:03.764950 kubelet[2543]: E0515 09:45:03.764917 2543 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c41ced47-fd5a-4e63-a558-e0c2cd3beb89" containerName="mount-cgroup" May 15 09:45:03.764950 kubelet[2543]: E0515 09:45:03.764944 2543 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c41ced47-fd5a-4e63-a558-e0c2cd3beb89" containerName="mount-bpf-fs" May 15 09:45:03.764950 kubelet[2543]: E0515 09:45:03.764951 2543 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c41ced47-fd5a-4e63-a558-e0c2cd3beb89" containerName="apply-sysctl-overwrites" May 15 09:45:03.765073 kubelet[2543]: E0515 09:45:03.764959 2543 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="02528fde-a7fa-4469-a65d-322d4dc5bffd" containerName="cilium-operator" May 15 09:45:03.765073 kubelet[2543]: E0515 09:45:03.764965 2543 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c41ced47-fd5a-4e63-a558-e0c2cd3beb89" containerName="clean-cilium-state" May 15 09:45:03.765073 kubelet[2543]: E0515 09:45:03.764971 2543 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c41ced47-fd5a-4e63-a558-e0c2cd3beb89" containerName="cilium-agent" May 15 09:45:03.765073 kubelet[2543]: I0515 09:45:03.764996 2543 memory_manager.go:354] "RemoveStaleState removing state" podUID="02528fde-a7fa-4469-a65d-322d4dc5bffd" containerName="cilium-operator" May 15 09:45:03.765073 kubelet[2543]: I0515 09:45:03.765002 2543 memory_manager.go:354] "RemoveStaleState removing state" podUID="c41ced47-fd5a-4e63-a558-e0c2cd3beb89" containerName="cilium-agent" May 15 09:45:03.772981 systemd[1]: Started sshd@23-10.0.0.135:22-10.0.0.1:60814.service - OpenSSH per-connection server daemon (10.0.0.1:60814). May 15 09:45:03.780639 systemd-logind[1451]: Removed session 23. 
May 15 09:45:03.787595 systemd[1]: Created slice kubepods-burstable-poda7eb887d_9c1c_4a40_a460_086bb77deb40.slice - libcontainer container kubepods-burstable-poda7eb887d_9c1c_4a40_a460_086bb77deb40.slice.
May 15 09:45:03.825481 sshd[4350]: Accepted publickey for core from 10.0.0.1 port 60814 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:45:03.826672 sshd-session[4350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:45:03.830201 systemd-logind[1451]: New session 24 of user core.
May 15 09:45:03.832412 kubelet[2543]: I0515 09:45:03.832375 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a7eb887d-9c1c-4a40-a460-086bb77deb40-host-proc-sys-net\") pod \"cilium-z5qkb\" (UID: \"a7eb887d-9c1c-4a40-a460-086bb77deb40\") " pod="kube-system/cilium-z5qkb"
May 15 09:45:03.832467 kubelet[2543]: I0515 09:45:03.832416 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a7eb887d-9c1c-4a40-a460-086bb77deb40-cilium-cgroup\") pod \"cilium-z5qkb\" (UID: \"a7eb887d-9c1c-4a40-a460-086bb77deb40\") " pod="kube-system/cilium-z5qkb"
May 15 09:45:03.832467 kubelet[2543]: I0515 09:45:03.832437 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7eb887d-9c1c-4a40-a460-086bb77deb40-xtables-lock\") pod \"cilium-z5qkb\" (UID: \"a7eb887d-9c1c-4a40-a460-086bb77deb40\") " pod="kube-system/cilium-z5qkb"
May 15 09:45:03.832467 kubelet[2543]: I0515 09:45:03.832453 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a7eb887d-9c1c-4a40-a460-086bb77deb40-clustermesh-secrets\") pod \"cilium-z5qkb\" (UID: \"a7eb887d-9c1c-4a40-a460-086bb77deb40\") " pod="kube-system/cilium-z5qkb"
May 15 09:45:03.832618 kubelet[2543]: I0515 09:45:03.832470 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a7eb887d-9c1c-4a40-a460-086bb77deb40-cilium-ipsec-secrets\") pod \"cilium-z5qkb\" (UID: \"a7eb887d-9c1c-4a40-a460-086bb77deb40\") " pod="kube-system/cilium-z5qkb"
May 15 09:45:03.832618 kubelet[2543]: I0515 09:45:03.832486 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a7eb887d-9c1c-4a40-a460-086bb77deb40-hubble-tls\") pod \"cilium-z5qkb\" (UID: \"a7eb887d-9c1c-4a40-a460-086bb77deb40\") " pod="kube-system/cilium-z5qkb"
May 15 09:45:03.832618 kubelet[2543]: I0515 09:45:03.832503 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a7eb887d-9c1c-4a40-a460-086bb77deb40-bpf-maps\") pod \"cilium-z5qkb\" (UID: \"a7eb887d-9c1c-4a40-a460-086bb77deb40\") " pod="kube-system/cilium-z5qkb"
May 15 09:45:03.832618 kubelet[2543]: I0515 09:45:03.832519 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79hvb\" (UniqueName: \"kubernetes.io/projected/a7eb887d-9c1c-4a40-a460-086bb77deb40-kube-api-access-79hvb\") pod \"cilium-z5qkb\" (UID: \"a7eb887d-9c1c-4a40-a460-086bb77deb40\") " pod="kube-system/cilium-z5qkb"
May 15 09:45:03.832618 kubelet[2543]: I0515 09:45:03.832535 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7eb887d-9c1c-4a40-a460-086bb77deb40-lib-modules\") pod \"cilium-z5qkb\" (UID: \"a7eb887d-9c1c-4a40-a460-086bb77deb40\") " pod="kube-system/cilium-z5qkb"
May 15 09:45:03.832618 kubelet[2543]: I0515 09:45:03.832549 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a7eb887d-9c1c-4a40-a460-086bb77deb40-cilium-run\") pod \"cilium-z5qkb\" (UID: \"a7eb887d-9c1c-4a40-a460-086bb77deb40\") " pod="kube-system/cilium-z5qkb"
May 15 09:45:03.832787 kubelet[2543]: I0515 09:45:03.832566 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a7eb887d-9c1c-4a40-a460-086bb77deb40-hostproc\") pod \"cilium-z5qkb\" (UID: \"a7eb887d-9c1c-4a40-a460-086bb77deb40\") " pod="kube-system/cilium-z5qkb"
May 15 09:45:03.832787 kubelet[2543]: I0515 09:45:03.832628 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a7eb887d-9c1c-4a40-a460-086bb77deb40-cni-path\") pod \"cilium-z5qkb\" (UID: \"a7eb887d-9c1c-4a40-a460-086bb77deb40\") " pod="kube-system/cilium-z5qkb"
May 15 09:45:03.832787 kubelet[2543]: I0515 09:45:03.832645 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a7eb887d-9c1c-4a40-a460-086bb77deb40-etc-cni-netd\") pod \"cilium-z5qkb\" (UID: \"a7eb887d-9c1c-4a40-a460-086bb77deb40\") " pod="kube-system/cilium-z5qkb"
May 15 09:45:03.832787 kubelet[2543]: I0515 09:45:03.832663 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a7eb887d-9c1c-4a40-a460-086bb77deb40-cilium-config-path\") pod \"cilium-z5qkb\" (UID: \"a7eb887d-9c1c-4a40-a460-086bb77deb40\") " pod="kube-system/cilium-z5qkb"
May 15 09:45:03.832787 kubelet[2543]: I0515 09:45:03.832678 2543 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a7eb887d-9c1c-4a40-a460-086bb77deb40-host-proc-sys-kernel\") pod \"cilium-z5qkb\" (UID: \"a7eb887d-9c1c-4a40-a460-086bb77deb40\") " pod="kube-system/cilium-z5qkb"
May 15 09:45:03.839710 systemd[1]: Started session-24.scope - Session 24 of User core.
May 15 09:45:03.888882 sshd[4352]: Connection closed by 10.0.0.1 port 60814
May 15 09:45:03.889181 sshd-session[4350]: pam_unix(sshd:session): session closed for user core
May 15 09:45:03.901133 systemd[1]: sshd@23-10.0.0.135:22-10.0.0.1:60814.service: Deactivated successfully.
May 15 09:45:03.902826 systemd[1]: session-24.scope: Deactivated successfully.
May 15 09:45:03.904164 systemd-logind[1451]: Session 24 logged out. Waiting for processes to exit.
May 15 09:45:03.905351 systemd[1]: Started sshd@24-10.0.0.135:22-10.0.0.1:60818.service - OpenSSH per-connection server daemon (10.0.0.1:60818).
May 15 09:45:03.906061 systemd-logind[1451]: Removed session 24.
May 15 09:45:03.948618 sshd[4358]: Accepted publickey for core from 10.0.0.1 port 60818 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:45:03.949657 sshd-session[4358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:45:03.952928 systemd-logind[1451]: New session 25 of user core.
May 15 09:45:03.962708 systemd[1]: Started session-25.scope - Session 25 of User core.
May 15 09:45:04.083215 kubelet[2543]: I0515 09:45:04.083073 2543 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T09:45:04Z","lastTransitionTime":"2025-05-15T09:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 15 09:45:04.093165 kubelet[2543]: E0515 09:45:04.093122 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:45:04.097717 containerd[1470]: time="2025-05-15T09:45:04.097660404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z5qkb,Uid:a7eb887d-9c1c-4a40-a460-086bb77deb40,Namespace:kube-system,Attempt:0,}"
May 15 09:45:04.114987 containerd[1470]: time="2025-05-15T09:45:04.114293455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 09:45:04.114987 containerd[1470]: time="2025-05-15T09:45:04.114971322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 09:45:04.114987 containerd[1470]: time="2025-05-15T09:45:04.114984962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 09:45:04.115251 containerd[1470]: time="2025-05-15T09:45:04.115065400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 09:45:04.132761 systemd[1]: Started cri-containerd-6d32f4a3b6a186d3de638fe3f46c5ff79311820d73802c4c41250780eb8c1bfb.scope - libcontainer container 6d32f4a3b6a186d3de638fe3f46c5ff79311820d73802c4c41250780eb8c1bfb.
May 15 09:45:04.156975 containerd[1470]: time="2025-05-15T09:45:04.156924463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z5qkb,Uid:a7eb887d-9c1c-4a40-a460-086bb77deb40,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d32f4a3b6a186d3de638fe3f46c5ff79311820d73802c4c41250780eb8c1bfb\""
May 15 09:45:04.158253 kubelet[2543]: E0515 09:45:04.157728 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:45:04.161241 containerd[1470]: time="2025-05-15T09:45:04.161202984Z" level=info msg="CreateContainer within sandbox \"6d32f4a3b6a186d3de638fe3f46c5ff79311820d73802c4c41250780eb8c1bfb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 15 09:45:04.173832 containerd[1470]: time="2025-05-15T09:45:04.173713991Z" level=info msg="CreateContainer within sandbox \"6d32f4a3b6a186d3de638fe3f46c5ff79311820d73802c4c41250780eb8c1bfb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c1a4a5efbe048343c96985c610ebc1331762ce8e8957fc34eb8e8c0f4079561a\""
May 15 09:45:04.174585 containerd[1470]: time="2025-05-15T09:45:04.174360859Z" level=info msg="StartContainer for \"c1a4a5efbe048343c96985c610ebc1331762ce8e8957fc34eb8e8c0f4079561a\""
May 15 09:45:04.198736 systemd[1]: Started cri-containerd-c1a4a5efbe048343c96985c610ebc1331762ce8e8957fc34eb8e8c0f4079561a.scope - libcontainer container c1a4a5efbe048343c96985c610ebc1331762ce8e8957fc34eb8e8c0f4079561a.
May 15 09:45:04.217845 containerd[1470]: time="2025-05-15T09:45:04.217789013Z" level=info msg="StartContainer for \"c1a4a5efbe048343c96985c610ebc1331762ce8e8957fc34eb8e8c0f4079561a\" returns successfully"
May 15 09:45:04.235225 systemd[1]: cri-containerd-c1a4a5efbe048343c96985c610ebc1331762ce8e8957fc34eb8e8c0f4079561a.scope: Deactivated successfully.
May 15 09:45:04.263252 containerd[1470]: time="2025-05-15T09:45:04.263172490Z" level=info msg="shim disconnected" id=c1a4a5efbe048343c96985c610ebc1331762ce8e8957fc34eb8e8c0f4079561a namespace=k8s.io
May 15 09:45:04.263252 containerd[1470]: time="2025-05-15T09:45:04.263234929Z" level=warning msg="cleaning up after shim disconnected" id=c1a4a5efbe048343c96985c610ebc1331762ce8e8957fc34eb8e8c0f4079561a namespace=k8s.io
May 15 09:45:04.263252 containerd[1470]: time="2025-05-15T09:45:04.263243328Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 09:45:05.088255 kubelet[2543]: E0515 09:45:05.088213 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:45:05.090604 containerd[1470]: time="2025-05-15T09:45:05.090425029Z" level=info msg="CreateContainer within sandbox \"6d32f4a3b6a186d3de638fe3f46c5ff79311820d73802c4c41250780eb8c1bfb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 15 09:45:05.101785 containerd[1470]: time="2025-05-15T09:45:05.101746866Z" level=info msg="CreateContainer within sandbox \"6d32f4a3b6a186d3de638fe3f46c5ff79311820d73802c4c41250780eb8c1bfb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ab9e87834bcd89b34b13d61f257935846055478a63be0d7ea1ebe7538cce5ad8\""
May 15 09:45:05.103179 containerd[1470]: time="2025-05-15T09:45:05.102203618Z" level=info msg="StartContainer for \"ab9e87834bcd89b34b13d61f257935846055478a63be0d7ea1ebe7538cce5ad8\""
May 15 09:45:05.104723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4131013539.mount: Deactivated successfully.
May 15 09:45:05.127809 systemd[1]: Started cri-containerd-ab9e87834bcd89b34b13d61f257935846055478a63be0d7ea1ebe7538cce5ad8.scope - libcontainer container ab9e87834bcd89b34b13d61f257935846055478a63be0d7ea1ebe7538cce5ad8.
May 15 09:45:05.145763 containerd[1470]: time="2025-05-15T09:45:05.145725120Z" level=info msg="StartContainer for \"ab9e87834bcd89b34b13d61f257935846055478a63be0d7ea1ebe7538cce5ad8\" returns successfully"
May 15 09:45:05.152242 systemd[1]: cri-containerd-ab9e87834bcd89b34b13d61f257935846055478a63be0d7ea1ebe7538cce5ad8.scope: Deactivated successfully.
May 15 09:45:05.169386 containerd[1470]: time="2025-05-15T09:45:05.169336858Z" level=info msg="shim disconnected" id=ab9e87834bcd89b34b13d61f257935846055478a63be0d7ea1ebe7538cce5ad8 namespace=k8s.io
May 15 09:45:05.169556 containerd[1470]: time="2025-05-15T09:45:05.169390937Z" level=warning msg="cleaning up after shim disconnected" id=ab9e87834bcd89b34b13d61f257935846055478a63be0d7ea1ebe7538cce5ad8 namespace=k8s.io
May 15 09:45:05.169556 containerd[1470]: time="2025-05-15T09:45:05.169399537Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 09:45:05.937377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab9e87834bcd89b34b13d61f257935846055478a63be0d7ea1ebe7538cce5ad8-rootfs.mount: Deactivated successfully.
May 15 09:45:06.092183 kubelet[2543]: E0515 09:45:06.092154 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:45:06.094906 containerd[1470]: time="2025-05-15T09:45:06.094830905Z" level=info msg="CreateContainer within sandbox \"6d32f4a3b6a186d3de638fe3f46c5ff79311820d73802c4c41250780eb8c1bfb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 15 09:45:06.107424 containerd[1470]: time="2025-05-15T09:45:06.107365650Z" level=info msg="CreateContainer within sandbox \"6d32f4a3b6a186d3de638fe3f46c5ff79311820d73802c4c41250780eb8c1bfb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"36f269f86202a246916c7e449d49acf8fe0fb6eaf390080f340ef3cecd198ae1\""
May 15 09:45:06.107868 containerd[1470]: time="2025-05-15T09:45:06.107839641Z" level=info msg="StartContainer for \"36f269f86202a246916c7e449d49acf8fe0fb6eaf390080f340ef3cecd198ae1\""
May 15 09:45:06.127866 systemd[1]: run-containerd-runc-k8s.io-36f269f86202a246916c7e449d49acf8fe0fb6eaf390080f340ef3cecd198ae1-runc.tNQxq2.mount: Deactivated successfully.
May 15 09:45:06.136747 systemd[1]: Started cri-containerd-36f269f86202a246916c7e449d49acf8fe0fb6eaf390080f340ef3cecd198ae1.scope - libcontainer container 36f269f86202a246916c7e449d49acf8fe0fb6eaf390080f340ef3cecd198ae1.
May 15 09:45:06.160013 containerd[1470]: time="2025-05-15T09:45:06.159901427Z" level=info msg="StartContainer for \"36f269f86202a246916c7e449d49acf8fe0fb6eaf390080f340ef3cecd198ae1\" returns successfully"
May 15 09:45:06.161312 systemd[1]: cri-containerd-36f269f86202a246916c7e449d49acf8fe0fb6eaf390080f340ef3cecd198ae1.scope: Deactivated successfully.
May 15 09:45:06.182470 containerd[1470]: time="2025-05-15T09:45:06.182409680Z" level=info msg="shim disconnected" id=36f269f86202a246916c7e449d49acf8fe0fb6eaf390080f340ef3cecd198ae1 namespace=k8s.io
May 15 09:45:06.182923 containerd[1470]: time="2025-05-15T09:45:06.182666755Z" level=warning msg="cleaning up after shim disconnected" id=36f269f86202a246916c7e449d49acf8fe0fb6eaf390080f340ef3cecd198ae1 namespace=k8s.io
May 15 09:45:06.182923 containerd[1470]: time="2025-05-15T09:45:06.182682395Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 09:45:06.191918 containerd[1470]: time="2025-05-15T09:45:06.191818358Z" level=warning msg="cleanup warnings time=\"2025-05-15T09:45:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 15 09:45:06.937593 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36f269f86202a246916c7e449d49acf8fe0fb6eaf390080f340ef3cecd198ae1-rootfs.mount: Deactivated successfully.
May 15 09:45:07.096036 kubelet[2543]: E0515 09:45:07.095964 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:45:07.099527 containerd[1470]: time="2025-05-15T09:45:07.099493503Z" level=info msg="CreateContainer within sandbox \"6d32f4a3b6a186d3de638fe3f46c5ff79311820d73802c4c41250780eb8c1bfb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 15 09:45:07.112616 containerd[1470]: time="2025-05-15T09:45:07.111177069Z" level=info msg="CreateContainer within sandbox \"6d32f4a3b6a186d3de638fe3f46c5ff79311820d73802c4c41250780eb8c1bfb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0c985b599945733fb5e83a4c5807269978b9574459a7485dfab4fca18c1b2869\""
May 15 09:45:07.113763 containerd[1470]: time="2025-05-15T09:45:07.113379593Z" level=info msg="StartContainer for \"0c985b599945733fb5e83a4c5807269978b9574459a7485dfab4fca18c1b2869\""
May 15 09:45:07.141716 systemd[1]: Started cri-containerd-0c985b599945733fb5e83a4c5807269978b9574459a7485dfab4fca18c1b2869.scope - libcontainer container 0c985b599945733fb5e83a4c5807269978b9574459a7485dfab4fca18c1b2869.
May 15 09:45:07.160488 systemd[1]: cri-containerd-0c985b599945733fb5e83a4c5807269978b9574459a7485dfab4fca18c1b2869.scope: Deactivated successfully.
May 15 09:45:07.161940 containerd[1470]: time="2025-05-15T09:45:07.161888911Z" level=info msg="StartContainer for \"0c985b599945733fb5e83a4c5807269978b9574459a7485dfab4fca18c1b2869\" returns successfully"
May 15 09:45:07.180037 containerd[1470]: time="2025-05-15T09:45:07.179979252Z" level=info msg="shim disconnected" id=0c985b599945733fb5e83a4c5807269978b9574459a7485dfab4fca18c1b2869 namespace=k8s.io
May 15 09:45:07.180037 containerd[1470]: time="2025-05-15T09:45:07.180030452Z" level=warning msg="cleaning up after shim disconnected" id=0c985b599945733fb5e83a4c5807269978b9574459a7485dfab4fca18c1b2869 namespace=k8s.io
May 15 09:45:07.180037 containerd[1470]: time="2025-05-15T09:45:07.180038971Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 09:45:07.937566 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c985b599945733fb5e83a4c5807269978b9574459a7485dfab4fca18c1b2869-rootfs.mount: Deactivated successfully.
May 15 09:45:07.938638 kubelet[2543]: E0515 09:45:07.938605 2543 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 15 09:45:08.101694 kubelet[2543]: E0515 09:45:08.101656 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:45:08.104936 containerd[1470]: time="2025-05-15T09:45:08.104872714Z" level=info msg="CreateContainer within sandbox \"6d32f4a3b6a186d3de638fe3f46c5ff79311820d73802c4c41250780eb8c1bfb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 15 09:45:08.123623 containerd[1470]: time="2025-05-15T09:45:08.123241142Z" level=info msg="CreateContainer within sandbox \"6d32f4a3b6a186d3de638fe3f46c5ff79311820d73802c4c41250780eb8c1bfb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5c0c7ccb92a856ceb8414ee8b9b8189fbb18977062402e2cc3402ebe08f1ba9d\""
May 15 09:45:08.125368 containerd[1470]: time="2025-05-15T09:45:08.125310949Z" level=info msg="StartContainer for \"5c0c7ccb92a856ceb8414ee8b9b8189fbb18977062402e2cc3402ebe08f1ba9d\""
May 15 09:45:08.127364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3472561260.mount: Deactivated successfully.
May 15 09:45:08.155742 systemd[1]: Started cri-containerd-5c0c7ccb92a856ceb8414ee8b9b8189fbb18977062402e2cc3402ebe08f1ba9d.scope - libcontainer container 5c0c7ccb92a856ceb8414ee8b9b8189fbb18977062402e2cc3402ebe08f1ba9d.
May 15 09:45:08.177876 containerd[1470]: time="2025-05-15T09:45:08.177833155Z" level=info msg="StartContainer for \"5c0c7ccb92a856ceb8414ee8b9b8189fbb18977062402e2cc3402ebe08f1ba9d\" returns successfully"
May 15 09:45:08.441688 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 15 09:45:09.105823 kubelet[2543]: E0515 09:45:09.105760 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:45:09.119409 kubelet[2543]: I0515 09:45:09.119318 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-z5qkb" podStartSLOduration=6.119303592 podStartE2EDuration="6.119303592s" podCreationTimestamp="2025-05-15 09:45:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 09:45:09.119032476 +0000 UTC m=+76.314036702" watchObservedRunningTime="2025-05-15 09:45:09.119303592 +0000 UTC m=+76.314307818"
May 15 09:45:10.108014 kubelet[2543]: E0515 09:45:10.107965 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:45:11.288091 systemd-networkd[1387]: lxc_health: Link UP
May 15 09:45:11.294691 systemd-networkd[1387]: lxc_health: Gained carrier
May 15 09:45:12.095608 kubelet[2543]: E0515 09:45:12.095545 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:45:12.111946 kubelet[2543]: E0515 09:45:12.111911 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:45:12.418788 systemd[1]: run-containerd-runc-k8s.io-5c0c7ccb92a856ceb8414ee8b9b8189fbb18977062402e2cc3402ebe08f1ba9d-runc.h2GsuK.mount: Deactivated successfully.
May 15 09:45:12.526736 systemd-networkd[1387]: lxc_health: Gained IPv6LL
May 15 09:45:13.114046 kubelet[2543]: E0515 09:45:13.114007 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:45:16.761536 sshd[4364]: Connection closed by 10.0.0.1 port 60818
May 15 09:45:16.762313 sshd-session[4358]: pam_unix(sshd:session): session closed for user core
May 15 09:45:16.765460 systemd[1]: sshd@24-10.0.0.135:22-10.0.0.1:60818.service: Deactivated successfully.
May 15 09:45:16.767073 systemd[1]: session-25.scope: Deactivated successfully.
May 15 09:45:16.767733 systemd-logind[1451]: Session 25 logged out. Waiting for processes to exit.
May 15 09:45:16.768705 systemd-logind[1451]: Removed session 25.
May 15 09:45:16.887283 kubelet[2543]: E0515 09:45:16.887229 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:45:16.888120 kubelet[2543]: E0515 09:45:16.887917 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:45:17.887291 kubelet[2543]: E0515 09:45:17.887214 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"