May 8 00:25:39.904999 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 8 00:25:39.905020 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed May 7 22:57:52 -00 2025
May 8 00:25:39.905036 kernel: KASLR enabled
May 8 00:25:39.905042 kernel: efi: EFI v2.7 by EDK II
May 8 00:25:39.905048 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
May 8 00:25:39.905054 kernel: random: crng init done
May 8 00:25:39.905061 kernel: ACPI: Early table checksum verification disabled
May 8 00:25:39.905067 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
May 8 00:25:39.905073 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
May 8 00:25:39.905081 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:25:39.905087 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:25:39.905093 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:25:39.905099 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:25:39.905105 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:25:39.905113 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:25:39.905120 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:25:39.905127 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:25:39.905133 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:25:39.905139 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 8 00:25:39.905146 kernel: NUMA: Failed to initialise from firmware
May 8 00:25:39.905152 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 8 00:25:39.905158 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 8 00:25:39.905165 kernel: Zone ranges:
May 8 00:25:39.905171 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 8 00:25:39.905177 kernel: DMA32 empty
May 8 00:25:39.905184 kernel: Normal empty
May 8 00:25:39.905191 kernel: Movable zone start for each node
May 8 00:25:39.905197 kernel: Early memory node ranges
May 8 00:25:39.905203 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 8 00:25:39.905209 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 8 00:25:39.905216 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 8 00:25:39.905222 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 8 00:25:39.905228 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 8 00:25:39.905235 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 8 00:25:39.905241 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 8 00:25:39.905247 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 8 00:25:39.905254 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 8 00:25:39.905261 kernel: psci: probing for conduit method from ACPI.
May 8 00:25:39.905267 kernel: psci: PSCIv1.1 detected in firmware.
May 8 00:25:39.905274 kernel: psci: Using standard PSCI v0.2 function IDs
May 8 00:25:39.905282 kernel: psci: Trusted OS migration not required
May 8 00:25:39.905289 kernel: psci: SMC Calling Convention v1.1
May 8 00:25:39.905296 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 8 00:25:39.905303 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
May 8 00:25:39.905310 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
May 8 00:25:39.905317 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 8 00:25:39.905324 kernel: Detected PIPT I-cache on CPU0
May 8 00:25:39.905331 kernel: CPU features: detected: GIC system register CPU interface
May 8 00:25:39.905337 kernel: CPU features: detected: Hardware dirty bit management
May 8 00:25:39.905344 kernel: CPU features: detected: Spectre-v4
May 8 00:25:39.905350 kernel: CPU features: detected: Spectre-BHB
May 8 00:25:39.905357 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 8 00:25:39.905364 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 8 00:25:39.905372 kernel: CPU features: detected: ARM erratum 1418040
May 8 00:25:39.905379 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 8 00:25:39.905385 kernel: alternatives: applying boot alternatives
May 8 00:25:39.905393 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ed66668e4cab2597a697b6f83cdcbc6a64a98dbc7e2125304191704297c07daf
May 8 00:25:39.905400 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 00:25:39.905407 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 8 00:25:39.905414 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 00:25:39.905420 kernel: Fallback order for Node 0: 0
May 8 00:25:39.905427 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 8 00:25:39.905434 kernel: Policy zone: DMA
May 8 00:25:39.905440 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 00:25:39.905448 kernel: software IO TLB: area num 4.
May 8 00:25:39.905455 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 8 00:25:39.905462 kernel: Memory: 2386468K/2572288K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185820K reserved, 0K cma-reserved)
May 8 00:25:39.905469 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 8 00:25:39.905476 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 00:25:39.905483 kernel: rcu: RCU event tracing is enabled.
May 8 00:25:39.905490 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 8 00:25:39.905497 kernel: Trampoline variant of Tasks RCU enabled.
May 8 00:25:39.905504 kernel: Tracing variant of Tasks RCU enabled.
May 8 00:25:39.905511 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 00:25:39.905517 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 8 00:25:39.905524 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 8 00:25:39.905532 kernel: GICv3: 256 SPIs implemented
May 8 00:25:39.905539 kernel: GICv3: 0 Extended SPIs implemented
May 8 00:25:39.905545 kernel: Root IRQ handler: gic_handle_irq
May 8 00:25:39.905552 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 8 00:25:39.905559 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 8 00:25:39.905565 kernel: ITS [mem 0x08080000-0x0809ffff]
May 8 00:25:39.905572 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 8 00:25:39.905579 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 8 00:25:39.905587 kernel: GICv3: using LPI property table @0x00000000400f0000
May 8 00:25:39.905593 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 8 00:25:39.905600 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 8 00:25:39.905608 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:25:39.905615 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 8 00:25:39.905622 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 8 00:25:39.905629 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 8 00:25:39.905636 kernel: arm-pv: using stolen time PV
May 8 00:25:39.905643 kernel: Console: colour dummy device 80x25
May 8 00:25:39.905649 kernel: ACPI: Core revision 20230628
May 8 00:25:39.905657 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 8 00:25:39.905664 kernel: pid_max: default: 32768 minimum: 301
May 8 00:25:39.905670 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 00:25:39.905678 kernel: landlock: Up and running.
May 8 00:25:39.905685 kernel: SELinux: Initializing.
May 8 00:25:39.905692 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:25:39.905699 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:25:39.905706 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:25:39.905713 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:25:39.905720 kernel: rcu: Hierarchical SRCU implementation.
May 8 00:25:39.905727 kernel: rcu: Max phase no-delay instances is 400.
May 8 00:25:39.905734 kernel: Platform MSI: ITS@0x8080000 domain created
May 8 00:25:39.905742 kernel: PCI/MSI: ITS@0x8080000 domain created
May 8 00:25:39.905749 kernel: Remapping and enabling EFI services.
May 8 00:25:39.905756 kernel: smp: Bringing up secondary CPUs ...
May 8 00:25:39.905762 kernel: Detected PIPT I-cache on CPU1
May 8 00:25:39.905770 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 8 00:25:39.905776 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 8 00:25:39.905783 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:25:39.905790 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 8 00:25:39.905797 kernel: Detected PIPT I-cache on CPU2
May 8 00:25:39.905804 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 8 00:25:39.905812 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 8 00:25:39.905819 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:25:39.905830 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 8 00:25:39.905838 kernel: Detected PIPT I-cache on CPU3
May 8 00:25:39.905846 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 8 00:25:39.905853 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 8 00:25:39.905860 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:25:39.905867 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 8 00:25:39.905875 kernel: smp: Brought up 1 node, 4 CPUs
May 8 00:25:39.905883 kernel: SMP: Total of 4 processors activated.
May 8 00:25:39.905891 kernel: CPU features: detected: 32-bit EL0 Support
May 8 00:25:39.905898 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 8 00:25:39.905905 kernel: CPU features: detected: Common not Private translations
May 8 00:25:39.905913 kernel: CPU features: detected: CRC32 instructions
May 8 00:25:39.905920 kernel: CPU features: detected: Enhanced Virtualization Traps
May 8 00:25:39.905927 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 8 00:25:39.905934 kernel: CPU features: detected: LSE atomic instructions
May 8 00:25:39.905943 kernel: CPU features: detected: Privileged Access Never
May 8 00:25:39.905950 kernel: CPU features: detected: RAS Extension Support
May 8 00:25:39.905957 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 8 00:25:39.905965 kernel: CPU: All CPU(s) started at EL1
May 8 00:25:39.905972 kernel: alternatives: applying system-wide alternatives
May 8 00:25:39.905979 kernel: devtmpfs: initialized
May 8 00:25:39.906000 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 00:25:39.906008 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 8 00:25:39.906015 kernel: pinctrl core: initialized pinctrl subsystem
May 8 00:25:39.906028 kernel: SMBIOS 3.0.0 present.
May 8 00:25:39.906036 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
May 8 00:25:39.906044 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 00:25:39.906051 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 8 00:25:39.906058 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 8 00:25:39.906066 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 8 00:25:39.906073 kernel: audit: initializing netlink subsys (disabled)
May 8 00:25:39.906081 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
May 8 00:25:39.906088 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 00:25:39.906097 kernel: cpuidle: using governor menu
May 8 00:25:39.906104 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 8 00:25:39.906112 kernel: ASID allocator initialised with 32768 entries
May 8 00:25:39.906119 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 00:25:39.906126 kernel: Serial: AMBA PL011 UART driver
May 8 00:25:39.906134 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 8 00:25:39.906141 kernel: Modules: 0 pages in range for non-PLT usage
May 8 00:25:39.906148 kernel: Modules: 509024 pages in range for PLT usage
May 8 00:25:39.906155 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 8 00:25:39.906164 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 8 00:25:39.906171 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 8 00:25:39.906179 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 8 00:25:39.906186 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 8 00:25:39.906193 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 8 00:25:39.906200 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 8 00:25:39.906208 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 8 00:25:39.906215 kernel: ACPI: Added _OSI(Module Device)
May 8 00:25:39.906222 kernel: ACPI: Added _OSI(Processor Device)
May 8 00:25:39.906231 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 00:25:39.906238 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 00:25:39.906245 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 00:25:39.906252 kernel: ACPI: Interpreter enabled
May 8 00:25:39.906260 kernel: ACPI: Using GIC for interrupt routing
May 8 00:25:39.906267 kernel: ACPI: MCFG table detected, 1 entries
May 8 00:25:39.906274 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 8 00:25:39.906281 kernel: printk: console [ttyAMA0] enabled
May 8 00:25:39.906289 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 8 00:25:39.906413 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:25:39.906488 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 8 00:25:39.906553 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 8 00:25:39.906617 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 8 00:25:39.906681 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 8 00:25:39.906691 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 8 00:25:39.906698 kernel: PCI host bridge to bus 0000:00
May 8 00:25:39.906770 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 8 00:25:39.906829 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 8 00:25:39.906887 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 8 00:25:39.906944 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 8 00:25:39.907105 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 8 00:25:39.907194 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 8 00:25:39.907268 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 8 00:25:39.907334 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 8 00:25:39.907402 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 8 00:25:39.907468 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 8 00:25:39.907533 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 8 00:25:39.907600 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 8 00:25:39.907661 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 8 00:25:39.907722 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 8 00:25:39.907779 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 8 00:25:39.907789 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 8 00:25:39.907797 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 8 00:25:39.907805 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 8 00:25:39.907812 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 8 00:25:39.907820 kernel: iommu: Default domain type: Translated
May 8 00:25:39.907827 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 8 00:25:39.907834 kernel: efivars: Registered efivars operations
May 8 00:25:39.907843 kernel: vgaarb: loaded
May 8 00:25:39.907850 kernel: clocksource: Switched to clocksource arch_sys_counter
May 8 00:25:39.907858 kernel: VFS: Disk quotas dquot_6.6.0
May 8 00:25:39.907865 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 00:25:39.907872 kernel: pnp: PnP ACPI init
May 8 00:25:39.907942 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 8 00:25:39.907952 kernel: pnp: PnP ACPI: found 1 devices
May 8 00:25:39.907960 kernel: NET: Registered PF_INET protocol family
May 8 00:25:39.907969 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 8 00:25:39.907977 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 8 00:25:39.907995 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 00:25:39.908004 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 00:25:39.908011 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 8 00:25:39.908019 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 8 00:25:39.908032 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:25:39.908040 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:25:39.908047 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 00:25:39.908057 kernel: PCI: CLS 0 bytes, default 64
May 8 00:25:39.908064 kernel: kvm [1]: HYP mode not available
May 8 00:25:39.908071 kernel: Initialise system trusted keyrings
May 8 00:25:39.908078 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 8 00:25:39.908085 kernel: Key type asymmetric registered
May 8 00:25:39.908092 kernel: Asymmetric key parser 'x509' registered
May 8 00:25:39.908100 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 8 00:25:39.908107 kernel: io scheduler mq-deadline registered
May 8 00:25:39.908114 kernel: io scheduler kyber registered
May 8 00:25:39.908123 kernel: io scheduler bfq registered
May 8 00:25:39.908130 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 8 00:25:39.908137 kernel: ACPI: button: Power Button [PWRB]
May 8 00:25:39.908145 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 8 00:25:39.908219 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 8 00:25:39.908230 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 00:25:39.908237 kernel: thunder_xcv, ver 1.0
May 8 00:25:39.908244 kernel: thunder_bgx, ver 1.0
May 8 00:25:39.908252 kernel: nicpf, ver 1.0
May 8 00:25:39.908261 kernel: nicvf, ver 1.0
May 8 00:25:39.908335 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 8 00:25:39.908398 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-08T00:25:39 UTC (1746663939)
May 8 00:25:39.908408 kernel: hid: raw HID events driver (C) Jiri Kosina
May 8 00:25:39.908416 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 8 00:25:39.908423 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 8 00:25:39.908431 kernel: watchdog: Hard watchdog permanently disabled
May 8 00:25:39.908438 kernel: NET: Registered PF_INET6 protocol family
May 8 00:25:39.908447 kernel: Segment Routing with IPv6
May 8 00:25:39.908454 kernel: In-situ OAM (IOAM) with IPv6
May 8 00:25:39.908461 kernel: NET: Registered PF_PACKET protocol family
May 8 00:25:39.908468 kernel: Key type dns_resolver registered
May 8 00:25:39.908476 kernel: registered taskstats version 1
May 8 00:25:39.908483 kernel: Loading compiled-in X.509 certificates
May 8 00:25:39.908490 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: e350a514a19a92525be490be8fe368f9972240ea'
May 8 00:25:39.908497 kernel: Key type .fscrypt registered
May 8 00:25:39.908504 kernel: Key type fscrypt-provisioning registered
May 8 00:25:39.908513 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 00:25:39.908520 kernel: ima: Allocated hash algorithm: sha1
May 8 00:25:39.908528 kernel: ima: No architecture policies found
May 8 00:25:39.908535 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 8 00:25:39.908542 kernel: clk: Disabling unused clocks
May 8 00:25:39.908549 kernel: Freeing unused kernel memory: 39424K
May 8 00:25:39.908557 kernel: Run /init as init process
May 8 00:25:39.908564 kernel: with arguments:
May 8 00:25:39.908571 kernel: /init
May 8 00:25:39.908579 kernel: with environment:
May 8 00:25:39.908586 kernel: HOME=/
May 8 00:25:39.908593 kernel: TERM=linux
May 8 00:25:39.908601 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 00:25:39.908610 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 00:25:39.908619 systemd[1]: Detected virtualization kvm.
May 8 00:25:39.908627 systemd[1]: Detected architecture arm64.
May 8 00:25:39.908636 systemd[1]: Running in initrd.
May 8 00:25:39.908644 systemd[1]: No hostname configured, using default hostname.
May 8 00:25:39.908652 systemd[1]: Hostname set to .
May 8 00:25:39.908660 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:25:39.908667 systemd[1]: Queued start job for default target initrd.target.
May 8 00:25:39.908675 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:25:39.908683 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:25:39.908691 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 8 00:25:39.908700 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:25:39.908709 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 8 00:25:39.908717 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 8 00:25:39.908726 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 8 00:25:39.908734 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 8 00:25:39.908742 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:25:39.908750 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:25:39.908759 systemd[1]: Reached target paths.target - Path Units.
May 8 00:25:39.908767 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:25:39.908775 systemd[1]: Reached target swap.target - Swaps.
May 8 00:25:39.908783 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:25:39.908791 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:25:39.908799 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:25:39.908806 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 00:25:39.908814 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 8 00:25:39.908822 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:25:39.908831 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:25:39.908839 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:25:39.908847 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:25:39.908855 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 8 00:25:39.908863 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:25:39.908870 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 8 00:25:39.908878 systemd[1]: Starting systemd-fsck-usr.service...
May 8 00:25:39.908886 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:25:39.908895 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:25:39.908903 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:25:39.908911 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 8 00:25:39.908919 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:25:39.908927 systemd[1]: Finished systemd-fsck-usr.service.
May 8 00:25:39.908935 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:25:39.908945 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:25:39.908967 systemd-journald[238]: Collecting audit messages is disabled.
May 8 00:25:39.909018 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:25:39.909039 systemd-journald[238]: Journal started
May 8 00:25:39.909058 systemd-journald[238]: Runtime Journal (/run/log/journal/3cd4f5df9d4d4dffb67b952846a84679) is 5.9M, max 47.3M, 41.4M free.
May 8 00:25:39.902809 systemd-modules-load[239]: Inserted module 'overlay'
May 8 00:25:39.911741 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:25:39.914036 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 00:25:39.917836 systemd-modules-load[239]: Inserted module 'br_netfilter'
May 8 00:25:39.918731 kernel: Bridge firewalling registered
May 8 00:25:39.927105 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:25:39.928709 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:25:39.932139 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:25:39.933410 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:25:39.939182 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:25:39.941329 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:25:39.944277 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:25:39.945950 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:25:39.956164 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 00:25:39.957216 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:25:39.960281 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:25:39.968675 dracut-cmdline[275]: dracut-dracut-053
May 8 00:25:39.971007 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ed66668e4cab2597a697b6f83cdcbc6a64a98dbc7e2125304191704297c07daf
May 8 00:25:39.986045 systemd-resolved[279]: Positive Trust Anchors:
May 8 00:25:39.986061 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:25:39.986092 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:25:39.990660 systemd-resolved[279]: Defaulting to hostname 'linux'.
May 8 00:25:39.991550 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:25:39.995172 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:25:40.035012 kernel: SCSI subsystem initialized
May 8 00:25:40.041005 kernel: Loading iSCSI transport class v2.0-870.
May 8 00:25:40.049038 kernel: iscsi: registered transport (tcp)
May 8 00:25:40.064026 kernel: iscsi: registered transport (qla4xxx)
May 8 00:25:40.064061 kernel: QLogic iSCSI HBA Driver
May 8 00:25:40.104856 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 00:25:40.112206 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 00:25:40.129051 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 00:25:40.129107 kernel: device-mapper: uevent: version 1.0.3
May 8 00:25:40.129129 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 8 00:25:40.175041 kernel: raid6: neonx8 gen() 15752 MB/s
May 8 00:25:40.192031 kernel: raid6: neonx4 gen() 15602 MB/s
May 8 00:25:40.209012 kernel: raid6: neonx2 gen() 13284 MB/s
May 8 00:25:40.226018 kernel: raid6: neonx1 gen() 10454 MB/s
May 8 00:25:40.243045 kernel: raid6: int64x8 gen() 6936 MB/s
May 8 00:25:40.260014 kernel: raid6: int64x4 gen() 7316 MB/s
May 8 00:25:40.277029 kernel: raid6: int64x2 gen() 6111 MB/s
May 8 00:25:40.294106 kernel: raid6: int64x1 gen() 5043 MB/s
May 8 00:25:40.294139 kernel: raid6: using algorithm neonx8 gen() 15752 MB/s
May 8 00:25:40.312080 kernel: raid6: .... xor() 11905 MB/s, rmw enabled
May 8 00:25:40.312092 kernel: raid6: using neon recovery algorithm
May 8 00:25:40.317372 kernel: xor: measuring software checksum speed
May 8 00:25:40.317389 kernel: 8regs : 19731 MB/sec
May 8 00:25:40.318069 kernel: 32regs : 19617 MB/sec
May 8 00:25:40.319292 kernel: arm64_neon : 26874 MB/sec
May 8 00:25:40.319304 kernel: xor: using function: arm64_neon (26874 MB/sec)
May 8 00:25:40.370015 kernel: Btrfs loaded, zoned=no, fsverity=no
May 8 00:25:40.380578 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:25:40.391177 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:25:40.402387 systemd-udevd[462]: Using default interface naming scheme 'v255'.
May 8 00:25:40.405493 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:25:40.419257 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 8 00:25:40.429920 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
May 8 00:25:40.454612 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:25:40.465126 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:25:40.502619 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:25:40.509313 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 8 00:25:40.521428 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 8 00:25:40.523455 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:25:40.524713 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:25:40.526962 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:25:40.537207 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 8 00:25:40.542915 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 8 00:25:40.552328 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 8 00:25:40.552434 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 8 00:25:40.552446 kernel: GPT:9289727 != 19775487
May 8 00:25:40.552456 kernel: GPT:Alternate GPT header not at the end of the disk.
May 8 00:25:40.552465 kernel: GPT:9289727 != 19775487
May 8 00:25:40.552474 kernel: GPT: Use GNU Parted to correct GPT errors.
May 8 00:25:40.552484 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:25:40.546045 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:25:40.551747 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:25:40.551852 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:25:40.554246 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:25:40.555940 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:25:40.556092 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:25:40.558631 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:25:40.569258 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (519)
May 8 00:25:40.569253 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:25:40.574004 kernel: BTRFS: device fsid 0be52225-f929-4b89-9354-df54a643ece0 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (520)
May 8 00:25:40.578538 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 8 00:25:40.584027 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:25:40.589762 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 8 00:25:40.598928 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 8 00:25:40.600146 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 8 00:25:40.605638 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 00:25:40.622195 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 8 00:25:40.623865 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:25:40.627669 disk-uuid[554]: Primary Header is updated.
May 8 00:25:40.627669 disk-uuid[554]: Secondary Entries is updated.
May 8 00:25:40.627669 disk-uuid[554]: Secondary Header is updated.
May 8 00:25:40.633651 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:25:40.646936 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:25:41.644017 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:25:41.644823 disk-uuid[555]: The operation has completed successfully.
May 8 00:25:41.662754 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 00:25:41.662856 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 8 00:25:41.689167 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 8 00:25:41.692002 sh[577]: Success
May 8 00:25:41.706012 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 8 00:25:41.742079 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 8 00:25:41.758322 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 8 00:25:41.760589 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 8 00:25:41.770464 kernel: BTRFS info (device dm-0): first mount of filesystem 0be52225-f929-4b89-9354-df54a643ece0
May 8 00:25:41.770510 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 8 00:25:41.770522 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 8 00:25:41.772478 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 8 00:25:41.772509 kernel: BTRFS info (device dm-0): using free space tree
May 8 00:25:41.776658 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 8 00:25:41.778019 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 8 00:25:41.788210 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 8 00:25:41.790276 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 8 00:25:41.796780 kernel: BTRFS info (device vda6): first mount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:25:41.796823 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 00:25:41.796834 kernel: BTRFS info (device vda6): using free space tree
May 8 00:25:41.800022 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:25:41.807504 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 8 00:25:41.809616 kernel: BTRFS info (device vda6): last unmount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:25:41.815010 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 8 00:25:41.824177 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 8 00:25:41.883721 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:25:41.894193 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:25:41.925867 systemd-networkd[769]: lo: Link UP
May 8 00:25:41.925878 systemd-networkd[769]: lo: Gained carrier
May 8 00:25:41.927075 ignition[669]: Ignition 2.19.0
May 8 00:25:41.926578 systemd-networkd[769]: Enumeration completed
May 8 00:25:41.927082 ignition[669]: Stage: fetch-offline
May 8 00:25:41.926779 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:25:41.927124 ignition[669]: no configs at "/usr/lib/ignition/base.d"
May 8 00:25:41.927129 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:25:41.927133 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:25:41.927132 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:25:41.927332 ignition[669]: parsed url from cmdline: ""
May 8 00:25:41.928046 systemd-networkd[769]: eth0: Link UP
May 8 00:25:41.927335 ignition[669]: no config URL provided
May 8 00:25:41.928049 systemd-networkd[769]: eth0: Gained carrier
May 8 00:25:41.927340 ignition[669]: reading system config file "/usr/lib/ignition/user.ign"
May 8 00:25:41.928056 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:25:41.927347 ignition[669]: no config at "/usr/lib/ignition/user.ign"
May 8 00:25:41.928320 systemd[1]: Reached target network.target - Network.
May 8 00:25:41.927368 ignition[669]: op(1): [started] loading QEMU firmware config module
May 8 00:25:41.927372 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 8 00:25:41.934812 ignition[669]: op(1): [finished] loading QEMU firmware config module
May 8 00:25:41.951059 systemd-networkd[769]: eth0: DHCPv4 address 10.0.0.83/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 00:25:41.982060 ignition[669]: parsing config with SHA512: f056a50e70ffb91c1431e2c2b923eea7656c6b0b75bd0e1dd31802f8aa01a252aca54e2595dd2894879df780a79b001f1a68bef48f075747ac726e643f4f5df8
May 8 00:25:41.987868 unknown[669]: fetched base config from "system"
May 8 00:25:41.987883 unknown[669]: fetched user config from "qemu"
May 8 00:25:41.988530 ignition[669]: fetch-offline: fetch-offline passed
May 8 00:25:41.988610 ignition[669]: Ignition finished successfully
May 8 00:25:41.991131 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:25:41.992431 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 8 00:25:41.996161 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 8 00:25:42.006823 ignition[777]: Ignition 2.19.0
May 8 00:25:42.006833 ignition[777]: Stage: kargs
May 8 00:25:42.007004 ignition[777]: no configs at "/usr/lib/ignition/base.d"
May 8 00:25:42.007021 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:25:42.009507 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 8 00:25:42.007869 ignition[777]: kargs: kargs passed
May 8 00:25:42.007911 ignition[777]: Ignition finished successfully
May 8 00:25:42.020285 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 8 00:25:42.030386 ignition[785]: Ignition 2.19.0
May 8 00:25:42.030395 ignition[785]: Stage: disks
May 8 00:25:42.030541 ignition[785]: no configs at "/usr/lib/ignition/base.d"
May 8 00:25:42.030550 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:25:42.031432 ignition[785]: disks: disks passed
May 8 00:25:42.031473 ignition[785]: Ignition finished successfully
May 8 00:25:42.035011 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 8 00:25:42.036298 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 8 00:25:42.037758 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 8 00:25:42.039703 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:25:42.041522 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:25:42.043480 systemd[1]: Reached target basic.target - Basic System.
May 8 00:25:42.059174 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 8 00:25:42.068274 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 8 00:25:42.071617 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 8 00:25:42.074727 systemd[1]: Mounting sysroot.mount - /sysroot...
May 8 00:25:42.118004 kernel: EXT4-fs (vda9): mounted filesystem f1546e2a-34df-485a-a644-37e10cd925e0 r/w with ordered data mode. Quota mode: none.
May 8 00:25:42.118302 systemd[1]: Mounted sysroot.mount - /sysroot.
May 8 00:25:42.119511 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 8 00:25:42.137074 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:25:42.138716 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 8 00:25:42.140171 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 8 00:25:42.140210 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 00:25:42.147446 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (804)
May 8 00:25:42.140234 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:25:42.152097 kernel: BTRFS info (device vda6): first mount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:25:42.152120 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 00:25:42.152130 kernel: BTRFS info (device vda6): using free space tree
May 8 00:25:42.144469 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 8 00:25:42.146218 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 8 00:25:42.156010 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:25:42.156834 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:25:42.187869 initrd-setup-root[828]: cut: /sysroot/etc/passwd: No such file or directory
May 8 00:25:42.192142 initrd-setup-root[835]: cut: /sysroot/etc/group: No such file or directory
May 8 00:25:42.195952 initrd-setup-root[842]: cut: /sysroot/etc/shadow: No such file or directory
May 8 00:25:42.199045 initrd-setup-root[849]: cut: /sysroot/etc/gshadow: No such file or directory
May 8 00:25:42.263681 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 8 00:25:42.271140 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 8 00:25:42.272696 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 8 00:25:42.279016 kernel: BTRFS info (device vda6): last unmount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:25:42.291815 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 8 00:25:42.295543 ignition[918]: INFO : Ignition 2.19.0
May 8 00:25:42.295543 ignition[918]: INFO : Stage: mount
May 8 00:25:42.297049 ignition[918]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:25:42.297049 ignition[918]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:25:42.297049 ignition[918]: INFO : mount: mount passed
May 8 00:25:42.297049 ignition[918]: INFO : Ignition finished successfully
May 8 00:25:42.298150 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 8 00:25:42.307090 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 8 00:25:42.769032 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 8 00:25:42.785235 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:25:42.791593 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (930)
May 8 00:25:42.791642 kernel: BTRFS info (device vda6): first mount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:25:42.791663 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 00:25:42.793193 kernel: BTRFS info (device vda6): using free space tree
May 8 00:25:42.795011 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:25:42.796350 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:25:42.811534 ignition[947]: INFO : Ignition 2.19.0
May 8 00:25:42.811534 ignition[947]: INFO : Stage: files
May 8 00:25:42.813122 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:25:42.813122 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:25:42.813122 ignition[947]: DEBUG : files: compiled without relabeling support, skipping
May 8 00:25:42.816496 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 00:25:42.816496 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 00:25:42.816496 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 00:25:42.816496 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 00:25:42.816496 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 00:25:42.815699 unknown[947]: wrote ssh authorized keys file for user: core
May 8 00:25:42.824006 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 8 00:25:42.824006 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 8 00:25:42.824006 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 8 00:25:42.824006 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 8 00:25:42.880296 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 8 00:25:43.176654 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 8 00:25:43.176654 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 8 00:25:43.180379 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 8 00:25:43.521616 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
May 8 00:25:43.607146 systemd-networkd[769]: eth0: Gained IPv6LL
May 8 00:25:43.620670 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 8 00:25:43.622646 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
May 8 00:25:43.622646 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
May 8 00:25:43.622646 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:25:43.622646 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:25:43.622646 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:25:43.622646 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:25:43.622646 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:25:43.622646 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:25:43.622646 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:25:43.622646 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:25:43.622646 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:25:43.622646 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:25:43.622646 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:25:43.622646 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 8 00:25:43.861466 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
May 8 00:25:44.191840 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:25:44.191840 ignition[947]: INFO : files: op(d): [started] processing unit "containerd.service"
May 8 00:25:44.195510 ignition[947]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 8 00:25:44.195510 ignition[947]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 8 00:25:44.195510 ignition[947]: INFO : files: op(d): [finished] processing unit "containerd.service"
May 8 00:25:44.195510 ignition[947]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
May 8 00:25:44.195510 ignition[947]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:25:44.195510 ignition[947]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:25:44.195510 ignition[947]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
May 8 00:25:44.195510 ignition[947]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
May 8 00:25:44.195510 ignition[947]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:25:44.195510 ignition[947]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:25:44.195510 ignition[947]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
May 8 00:25:44.195510 ignition[947]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
May 8 00:25:44.218737 ignition[947]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:25:44.225400 ignition[947]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:25:44.226874 ignition[947]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
May 8 00:25:44.226874 ignition[947]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
May 8 00:25:44.226874 ignition[947]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
May 8 00:25:44.226874 ignition[947]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:25:44.226874 ignition[947]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:25:44.226874 ignition[947]: INFO : files: files passed
May 8 00:25:44.226874 ignition[947]: INFO : Ignition finished successfully
May 8 00:25:44.227417 systemd[1]: Finished ignition-files.service - Ignition (files).
May 8 00:25:44.238187 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 8 00:25:44.249129 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 8 00:25:44.253141 systemd[1]: ignition-quench.service: Deactivated successfully.
May 8 00:25:44.253230 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 8 00:25:44.258539 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory May 8 00:25:44.261822 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:25:44.261822 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 8 00:25:44.265265 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:25:44.265692 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:25:44.268111 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 8 00:25:44.281473 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 8 00:25:44.299461 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 00:25:44.299588 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 8 00:25:44.301774 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 8 00:25:44.303690 systemd[1]: Reached target initrd.target - Initrd Default Target. May 8 00:25:44.305534 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 8 00:25:44.306283 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 8 00:25:44.321577 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:25:44.324069 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 8 00:25:44.335251 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 8 00:25:44.336512 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:25:44.338569 systemd[1]: Stopped target timers.target - Timer Units. May 8 00:25:44.340376 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 00:25:44.340497 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:25:44.343059 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 00:25:44.345080 systemd[1]: Stopped target basic.target - Basic System. May 8 00:25:44.346753 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 00:25:44.348508 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:25:44.350444 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 8 00:25:44.352404 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 00:25:44.354364 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:25:44.356283 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 00:25:44.358175 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 00:25:44.359928 systemd[1]: Stopped target swap.target - Swaps. May 8 00:25:44.361487 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 00:25:44.361615 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 00:25:44.363855 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 8 00:25:44.365092 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:25:44.366959 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
May 8 00:25:44.368082 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:25:44.370155 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 00:25:44.370266 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 8 00:25:44.373101 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 00:25:44.373258 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:25:44.375329 systemd[1]: Stopped target paths.target - Path Units. May 8 00:25:44.376902 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 00:25:44.383037 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:25:44.384383 systemd[1]: Stopped target slices.target - Slice Units. May 8 00:25:44.386499 systemd[1]: Stopped target sockets.target - Socket Units. May 8 00:25:44.388044 systemd[1]: iscsid.socket: Deactivated successfully. May 8 00:25:44.388175 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:25:44.389743 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 00:25:44.389874 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:25:44.391395 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 00:25:44.391545 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:25:44.393297 systemd[1]: ignition-files.service: Deactivated successfully. May 8 00:25:44.393442 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 00:25:44.407219 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 8 00:25:44.409017 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 00:25:44.409214 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:25:44.414373 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 8 00:25:44.416295 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 00:25:44.417385 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:25:44.420413 ignition[1001]: INFO : Ignition 2.19.0 May 8 00:25:44.420413 ignition[1001]: INFO : Stage: umount May 8 00:25:44.420413 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:25:44.420413 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:25:44.420413 ignition[1001]: INFO : umount: umount passed May 8 00:25:44.420413 ignition[1001]: INFO : Ignition finished successfully May 8 00:25:44.419698 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 00:25:44.419802 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:25:44.424811 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 00:25:44.425467 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 00:25:44.425566 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 00:25:44.428872 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 00:25:44.429076 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 8 00:25:44.433841 systemd[1]: Stopped target network.target - Network. May 8 00:25:44.435094 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 00:25:44.435166 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
May 8 00:25:44.436848 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 00:25:44.436892 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 00:25:44.438569 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 00:25:44.438610 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 8 00:25:44.440259 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 8 00:25:44.440301 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 8 00:25:44.442251 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 8 00:25:44.444210 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 8 00:25:44.451034 systemd-networkd[769]: eth0: DHCPv6 lease lost May 8 00:25:44.452658 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 00:25:44.452770 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 8 00:25:44.454211 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 00:25:44.454242 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 8 00:25:44.462074 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 8 00:25:44.462925 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 00:25:44.463013 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:25:44.465135 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:25:44.469165 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 00:25:44.469263 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 8 00:25:44.473228 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:25:44.473306 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:25:44.478381 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 00:25:44.478430 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 8 00:25:44.480249 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 8 00:25:44.480297 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:25:44.483547 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 00:25:44.483654 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 8 00:25:44.485767 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 00:25:44.485896 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:25:44.489323 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 00:25:44.489387 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 8 00:25:44.491060 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 00:25:44.491096 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:25:44.492864 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 00:25:44.492917 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 8 00:25:44.497075 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 00:25:44.497135 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 8 00:25:44.499833 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
May 8 00:25:44.499881 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:25:44.516136 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 8 00:25:44.517304 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 00:25:44.517369 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:25:44.519542 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:25:44.519589 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:25:44.521738 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 00:25:44.523459 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 8 00:25:44.524628 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 00:25:44.524702 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 8 00:25:44.527204 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 8 00:25:44.528340 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 00:25:44.528406 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 8 00:25:44.530829 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 8 00:25:44.539887 systemd[1]: Switching root. May 8 00:25:44.573820 systemd-journald[238]: Journal stopped May 8 00:25:45.300851 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). May 8 00:25:45.300908 kernel: SELinux: policy capability network_peer_controls=1 May 8 00:25:45.300919 kernel: SELinux: policy capability open_perms=1 May 8 00:25:45.300929 kernel: SELinux: policy capability extended_socket_class=1 May 8 00:25:45.300938 kernel: SELinux: policy capability always_check_network=0 May 8 00:25:45.300948 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 00:25:45.300961 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 00:25:45.300971 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 00:25:45.300980 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 00:25:45.301023 kernel: audit: type=1403 audit(1746663944.768:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 8 00:25:45.301040 systemd[1]: Successfully loaded SELinux policy in 33.361ms. May 8 00:25:45.301060 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.381ms. May 8 00:25:45.301071 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 8 00:25:45.301082 systemd[1]: Detected virtualization kvm. May 8 00:25:45.301093 systemd[1]: Detected architecture arm64. May 8 00:25:45.301103 systemd[1]: Detected first boot. May 8 00:25:45.301113 systemd[1]: Initializing machine ID from VM UUID. May 8 00:25:45.301124 zram_generator::config[1067]: No configuration found. May 8 00:25:45.301136 systemd[1]: Populated /etc with preset unit settings. May 8 00:25:45.301147 systemd[1]: Queued start job for default target multi-user.target. May 8 00:25:45.301158 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 8 00:25:45.301169 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
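[Editor's note] After the switch-root, systemd reports the detected virtualization (kvm), the architecture, first boot, and a machine ID derived from the VM UUID. A minimal sketch of querying the same facts on a running host, assuming the standard systemd CLIs are installed:

    # Sketch: query virtualization and machine ID the way systemd exposes
    # them. Uses only the stdlib plus standard systemd tools.
    import subprocess
    from pathlib import Path

    # systemd-detect-virt prints the hypervisor name ("kvm" on this boot).
    virt = subprocess.run(["systemd-detect-virt"],
                          capture_output=True, text=True).stdout.strip()

    # /etc/machine-id is populated on first boot (here: from the VM UUID).
    machine_id = Path("/etc/machine-id").read_text().strip()

    print(f"virt={virt} machine-id={machine_id}")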
May 8 00:25:45.301180 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 8 00:25:45.301190 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 8 00:25:45.301201 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 8 00:25:45.301212 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 8 00:25:45.301224 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 8 00:25:45.301235 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 8 00:25:45.301245 systemd[1]: Created slice user.slice - User and Session Slice. May 8 00:25:45.301256 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:25:45.301267 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:25:45.301279 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 8 00:25:45.301289 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 8 00:25:45.301300 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 8 00:25:45.301310 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:25:45.301323 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 8 00:25:45.301333 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:25:45.301343 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 8 00:25:45.301353 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:25:45.301364 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:25:45.301374 systemd[1]: Reached target slices.target - Slice Units. May 8 00:25:45.301384 systemd[1]: Reached target swap.target - Swaps. May 8 00:25:45.301395 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 8 00:25:45.301407 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 8 00:25:45.301418 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 8 00:25:45.301429 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 8 00:25:45.301439 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:25:45.301450 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:25:45.301475 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:25:45.301486 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 8 00:25:45.301497 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 8 00:25:45.301507 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 8 00:25:45.301518 systemd[1]: Mounting media.mount - External Media Directory... May 8 00:25:45.301531 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 8 00:25:45.301541 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 8 00:25:45.301552 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 8 00:25:45.301563 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
May 8 00:25:45.301578 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:25:45.301588 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:25:45.301598 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 8 00:25:45.301609 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:25:45.301621 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:25:45.301632 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:25:45.301643 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 8 00:25:45.301653 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:25:45.301663 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 8 00:25:45.301674 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 8 00:25:45.301684 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) May 8 00:25:45.301694 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:25:45.301704 kernel: fuse: init (API version 7.39) May 8 00:25:45.301716 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:25:45.301726 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 8 00:25:45.301736 kernel: loop: module loaded May 8 00:25:45.301746 kernel: ACPI: bus type drm_connector registered May 8 00:25:45.301756 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 8 00:25:45.301766 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:25:45.301793 systemd-journald[1152]: Collecting audit messages is disabled. May 8 00:25:45.301815 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 8 00:25:45.301840 systemd-journald[1152]: Journal started May 8 00:25:45.301861 systemd-journald[1152]: Runtime Journal (/run/log/journal/3cd4f5df9d4d4dffb67b952846a84679) is 5.9M, max 47.3M, 41.4M free. May 8 00:25:45.303815 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:25:45.304740 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 8 00:25:45.305936 systemd[1]: Mounted media.mount - External Media Directory. May 8 00:25:45.307088 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 8 00:25:45.308219 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 8 00:25:45.309385 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 8 00:25:45.310623 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 8 00:25:45.312114 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:25:45.313526 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 00:25:45.313685 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 8 00:25:45.315101 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:25:45.315254 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
May 8 00:25:45.316702 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:25:45.316856 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:25:45.318145 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:25:45.318297 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:25:45.319881 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 8 00:25:45.320058 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 8 00:25:45.321320 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:25:45.321528 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:25:45.322975 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:25:45.324436 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 8 00:25:45.326107 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 8 00:25:45.336823 systemd[1]: Reached target network-pre.target - Preparation for Network. May 8 00:25:45.349100 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 8 00:25:45.351094 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 8 00:25:45.352204 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 00:25:45.354152 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 8 00:25:45.356164 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 8 00:25:45.357279 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:25:45.359180 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 8 00:25:45.360368 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:25:45.363164 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:25:45.368096 systemd-journald[1152]: Time spent on flushing to /var/log/journal/3cd4f5df9d4d4dffb67b952846a84679 is 14.040ms for 846 entries. May 8 00:25:45.368096 systemd-journald[1152]: System Journal (/var/log/journal/3cd4f5df9d4d4dffb67b952846a84679) is 8.0M, max 195.6M, 187.6M free. May 8 00:25:45.391197 systemd-journald[1152]: Received client request to flush runtime journal. May 8 00:25:45.365135 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 8 00:25:45.368222 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:25:45.370618 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 8 00:25:45.371829 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 8 00:25:45.382037 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 8 00:25:45.383742 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 8 00:25:45.385172 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 8 00:25:45.391618 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. 
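[Editor's note] The journald lines above show the runtime journal in /run/log/journal being flushed to persistent storage under /var/log/journal, which is what systemd-journal-flush.service effectively does by running journalctl --flush. A short sketch using the same standard journalctl options:

    # Sketch: flush the runtime journal to /var/log/journal and report
    # disk usage, mirroring the flush request logged above.
    import subprocess

    # Ask journald to move /run/log/journal to /var/log/journal.
    subprocess.run(["journalctl", "--flush"], check=True)

    # Report how much disk the journals now occupy.
    usage = subprocess.run(["journalctl", "--disk-usage"],
                           capture_output=True, text=True).stdout
    print(usage.strip())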
May 8 00:25:45.391628 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. May 8 00:25:45.395292 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 8 00:25:45.396970 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:25:45.415317 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 8 00:25:45.416808 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:25:45.419168 udevadm[1203]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 8 00:25:45.434759 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 8 00:25:45.445320 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:25:45.456239 systemd-tmpfiles[1219]: ACLs are not supported, ignoring. May 8 00:25:45.456259 systemd-tmpfiles[1219]: ACLs are not supported, ignoring. May 8 00:25:45.459752 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:25:45.781940 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 8 00:25:45.798133 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:25:45.821373 systemd-udevd[1225]: Using default interface naming scheme 'v255'. May 8 00:25:45.833381 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:25:45.845213 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:25:45.857135 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 8 00:25:45.860423 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. May 8 00:25:45.881015 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1244) May 8 00:25:45.906410 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 8 00:25:45.917877 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 8 00:25:45.969709 systemd-networkd[1232]: lo: Link UP May 8 00:25:45.969724 systemd-networkd[1232]: lo: Gained carrier May 8 00:25:45.970426 systemd-networkd[1232]: Enumeration completed May 8 00:25:45.970953 systemd-networkd[1232]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:25:45.970956 systemd-networkd[1232]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:25:45.972735 systemd-networkd[1232]: eth0: Link UP May 8 00:25:45.972738 systemd-networkd[1232]: eth0: Gained carrier May 8 00:25:45.972751 systemd-networkd[1232]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:25:45.974235 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:25:45.975473 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:25:45.978334 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 8 00:25:45.984185 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
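[Editor's note] networkd matches eth0 against the shipped /usr/lib/systemd/network/zz-default.network, noting the "potentially unpredictable interface name". The contents of that unit are not shown in the log, but the observed behavior (link up, then DHCP) is what a catch-all DHCP policy produces. A sketch of an equivalent minimal .network unit, using standard systemd.network keys; the file name is arbitrary:

    # Sketch: a minimal systemd-networkd DHCP unit equivalent in effect to
    # the zz-default.network matched above.
    from pathlib import Path

    network_unit = """\
    [Match]
    Name=eth0

    [Network]
    DHCP=yes
    """

    path = Path("/etc/systemd/network/50-dhcp.network")
    path.write_text(network_unit)
    # Then: networkctl reload   (picks up the new unit without a reboot)
    print(f"wrote {path}")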
May 8 00:25:45.990086 systemd-networkd[1232]: eth0: DHCPv4 address 10.0.0.83/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:25:45.993158 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 8 00:25:46.006218 lvm[1264]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:25:46.028623 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:25:46.046400 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 8 00:25:46.048031 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:25:46.059235 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 8 00:25:46.062693 lvm[1271]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:25:46.107523 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 8 00:25:46.109120 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 8 00:25:46.110369 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 00:25:46.110402 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:25:46.111412 systemd[1]: Reached target machines.target - Containers. May 8 00:25:46.113368 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 8 00:25:46.122137 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 8 00:25:46.124470 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 8 00:25:46.125698 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:25:46.126639 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 8 00:25:46.128926 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 8 00:25:46.134169 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 8 00:25:46.136513 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 8 00:25:46.151211 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 8 00:25:46.158170 kernel: loop0: detected capacity change from 0 to 114432 May 8 00:25:46.159111 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 00:25:46.161214 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 8 00:25:46.171025 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 00:25:46.199017 kernel: loop1: detected capacity change from 0 to 194096 May 8 00:25:46.240021 kernel: loop2: detected capacity change from 0 to 114328 May 8 00:25:46.294014 kernel: loop3: detected capacity change from 0 to 114432 May 8 00:25:46.300016 kernel: loop4: detected capacity change from 0 to 194096 May 8 00:25:46.307014 kernel: loop5: detected capacity change from 0 to 114328 May 8 00:25:46.316360 (sd-merge)[1291]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 8 00:25:46.316828 (sd-merge)[1291]: Merged extensions into '/usr'. 
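[Editor's note] The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes .raw images onto /usr; the loop0-loop5 capacity changes above are those images being attached. A sketch of building and activating a trivial extension image of the same kind, assuming mksquashfs is available; "demo" is a made-up extension name, and ID/SYSEXT_LEVEL follow the systemd extension-release format that sd-merge validates:

    # Sketch: assemble a minimal sysext image like the .raw files merged
    # above, then re-merge /usr.
    import subprocess
    from pathlib import Path

    root = Path("/tmp/demo-sysext")
    (root / "usr/bin").mkdir(parents=True, exist_ok=True)
    (root / "usr/bin/demo").write_text("#!/bin/sh\necho hello from sysext\n")
    (root / "usr/bin/demo").chmod(0o755)

    # The extension-release file is what systemd-sysext checks before overlaying.
    rel = root / "usr/lib/extension-release.d/extension-release.demo"
    rel.parent.mkdir(parents=True, exist_ok=True)
    rel.write_text("ID=_any\nSYSEXT_LEVEL=1.0\n")  # ID=_any skips OS matching

    subprocess.run(["mksquashfs", str(root), "/var/lib/extensions/demo.raw",
                    "-all-root"], check=True)
    subprocess.run(["systemd-sysext", "refresh"], check=True)  # re-merge /usr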
May 8 00:25:46.320351 systemd[1]: Reloading requested from client PID 1279 ('systemd-sysext') (unit systemd-sysext.service)... May 8 00:25:46.320368 systemd[1]: Reloading... May 8 00:25:46.370024 zram_generator::config[1319]: No configuration found. May 8 00:25:46.412400 ldconfig[1276]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 00:25:46.477221 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:25:46.519850 systemd[1]: Reloading finished in 199 ms. May 8 00:25:46.536734 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 8 00:25:46.538251 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 8 00:25:46.553125 systemd[1]: Starting ensure-sysext.service... May 8 00:25:46.555046 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:25:46.558628 systemd[1]: Reloading requested from client PID 1360 ('systemctl') (unit ensure-sysext.service)... May 8 00:25:46.558645 systemd[1]: Reloading... May 8 00:25:46.571938 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 00:25:46.572241 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 8 00:25:46.572860 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 00:25:46.573107 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. May 8 00:25:46.573153 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. May 8 00:25:46.575534 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:25:46.575547 systemd-tmpfiles[1361]: Skipping /boot May 8 00:25:46.583418 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:25:46.583435 systemd-tmpfiles[1361]: Skipping /boot May 8 00:25:46.594074 zram_generator::config[1387]: No configuration found. May 8 00:25:46.690954 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:25:46.734115 systemd[1]: Reloading finished in 175 ms. May 8 00:25:46.747973 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:25:46.778107 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 8 00:25:46.780689 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 8 00:25:46.782044 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:25:46.783170 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:25:46.788227 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:25:46.792815 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:25:46.794245 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
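[Editor's note] The "Duplicate line for path" warnings above mean two tmpfiles.d fragments declare the same path (/root, /var/log/journal, /var/lib/systemd); systemd-tmpfiles keeps the first match and ignores the rest, so these are harmless. A sketch of the line format involved, with an illustrative entry (the mode/owner values are assumptions, not the stock lines):

    # Sketch: a tmpfiles.d fragment of the kind behind the "Duplicate line"
    # warnings. Declaring a path a stock fragment already owns triggers the
    # warning; the duplicate is reported and skipped, not fatal.
    import subprocess
    from pathlib import Path

    frag = Path("/etc/tmpfiles.d/99-demo.conf")
    frag.write_text(
        # type  path              mode user group           age argument
        "d /var/log/journal 2755 root systemd-journal - -\n"
    )
    # --create applies the d/f/L... lines immediately.
    subprocess.run(["systemd-tmpfiles", "--create"], check=True)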
May 8 00:25:46.798281 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 8 00:25:46.804522 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:25:46.815384 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 8 00:25:46.817711 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:25:46.817870 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:25:46.822017 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:25:46.822186 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:25:46.823929 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:25:46.824160 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:25:46.826002 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 8 00:25:46.834570 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:25:46.839346 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:25:46.844393 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:25:46.847832 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:25:46.848973 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:25:46.851300 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 8 00:25:46.854320 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 8 00:25:46.856745 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:25:46.857033 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:25:46.858291 augenrules[1470]: No rules May 8 00:25:46.859021 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:25:46.859163 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:25:46.861445 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 8 00:25:46.863328 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:25:46.863672 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:25:46.868427 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 8 00:25:46.871463 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 8 00:25:46.878149 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:25:46.893137 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:25:46.894308 systemd-resolved[1447]: Positive Trust Anchors: May 8 00:25:46.894326 systemd-resolved[1447]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:25:46.894358 systemd-resolved[1447]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:25:46.895366 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:25:46.899164 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:25:46.904172 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:25:46.905341 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:25:46.905407 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:25:46.906035 systemd[1]: Finished ensure-sysext.service. May 8 00:25:46.906800 systemd-resolved[1447]: Defaulting to hostname 'linux'. May 8 00:25:46.907430 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:25:46.907584 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:25:46.908937 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:25:46.910332 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:25:46.910486 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:25:46.912066 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:25:46.912215 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:25:46.913664 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:25:46.913853 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:25:46.919807 systemd[1]: Reached target network.target - Network. May 8 00:25:46.920763 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:25:46.921953 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:25:46.922052 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:25:46.923973 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 8 00:25:46.970310 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 8 00:25:46.971280 systemd-timesyncd[1503]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 8 00:25:46.971331 systemd-timesyncd[1503]: Initial clock synchronization to Thu 2025-05-08 00:25:47.141602 UTC. May 8 00:25:46.971884 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:25:46.973051 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
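[Editor's note] The DS record logged above is the IANA root-zone DNSSEC trust anchor (key tag 20326) that systemd-resolved validates against, and the negative trust anchors are the standard private/reverse zones exempted from validation; timesyncd, meanwhile, synced against 10.0.0.1, which this boot learned via DHCP. A sketch of the drop-ins that steer both daemons, using standard [Resolve]/[Time] keys with example values:

    # Sketch: drop-ins matching the resolved/timesyncd behavior logged
    # above. Values are examples; the option names are standard.
    from pathlib import Path

    resolved = Path("/etc/systemd/resolved.conf.d/10-dnssec.conf")
    resolved.parent.mkdir(parents=True, exist_ok=True)
    resolved.write_text("[Resolve]\nDNSSEC=allow-downgrade\n")

    timesyncd = Path("/etc/systemd/timesyncd.conf.d/10-ntp.conf")
    timesyncd.parent.mkdir(parents=True, exist_ok=True)
    timesyncd.write_text("[Time]\nNTP=10.0.0.1\n")  # this boot got it from DHCP
    # Then: systemctl restart systemd-resolved systemd-timesyncd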
May 8 00:25:46.974388 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 8 00:25:46.975643 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 8 00:25:46.976902 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 00:25:46.976934 systemd[1]: Reached target paths.target - Path Units. May 8 00:25:46.977837 systemd[1]: Reached target time-set.target - System Time Set. May 8 00:25:46.979050 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 8 00:25:46.980198 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 8 00:25:46.981403 systemd[1]: Reached target timers.target - Timer Units. May 8 00:25:46.983076 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 8 00:25:46.985564 systemd[1]: Starting docker.socket - Docker Socket for the API... May 8 00:25:46.987778 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 8 00:25:46.995174 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 8 00:25:46.996328 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:25:46.997328 systemd[1]: Reached target basic.target - Basic System. May 8 00:25:46.998439 systemd[1]: System is tainted: cgroupsv1 May 8 00:25:46.998488 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 8 00:25:46.998508 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 8 00:25:46.999691 systemd[1]: Starting containerd.service - containerd container runtime... May 8 00:25:47.001892 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 8 00:25:47.004118 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 8 00:25:47.009229 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 8 00:25:47.010430 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 8 00:25:47.011538 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 8 00:25:47.015566 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 8 00:25:47.020332 jq[1509]: false May 8 00:25:47.020779 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 8 00:25:47.029190 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
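[Editor's note] "System is tainted: cgroupsv1" above means PID 1 is running on the legacy cgroup v1 hierarchy rather than the unified (v2) one. A quick sketch of how to tell which hierarchy a host is on; on pure v2, /sys/fs/cgroup is a single cgroup2 mount exposing cgroup.controllers:

    # Sketch: detect cgroup v1 vs v2, the condition behind the
    # "System is tainted: cgroupsv1" message above.
    import os

    if os.path.exists("/sys/fs/cgroup/cgroup.controllers"):
        print("unified hierarchy (cgroup v2)")
    else:
        # v1: /sys/fs/cgroup is a tmpfs of per-controller mounts instead.
        print("legacy hierarchy (cgroup v1) - systemd marks this as a taint")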
May 8 00:25:47.034918 extend-filesystems[1511]: Found loop3 May 8 00:25:47.037188 extend-filesystems[1511]: Found loop4 May 8 00:25:47.037188 extend-filesystems[1511]: Found loop5 May 8 00:25:47.037188 extend-filesystems[1511]: Found vda May 8 00:25:47.037188 extend-filesystems[1511]: Found vda1 May 8 00:25:47.037188 extend-filesystems[1511]: Found vda2 May 8 00:25:47.037188 extend-filesystems[1511]: Found vda3 May 8 00:25:47.037188 extend-filesystems[1511]: Found usr May 8 00:25:47.037188 extend-filesystems[1511]: Found vda4 May 8 00:25:47.037188 extend-filesystems[1511]: Found vda6 May 8 00:25:47.037188 extend-filesystems[1511]: Found vda7 May 8 00:25:47.037188 extend-filesystems[1511]: Found vda9 May 8 00:25:47.037188 extend-filesystems[1511]: Checking size of /dev/vda9 May 8 00:25:47.036166 systemd[1]: Starting systemd-logind.service - User Login Management... May 8 00:25:47.041877 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 00:25:47.052237 systemd[1]: Starting update-engine.service - Update Engine... May 8 00:25:47.055714 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 8 00:25:47.056681 extend-filesystems[1511]: Resized partition /dev/vda9 May 8 00:25:47.058323 dbus-daemon[1508]: [system] SELinux support is enabled May 8 00:25:47.062370 extend-filesystems[1537]: resize2fs 1.47.1 (20-May-2024) May 8 00:25:47.070882 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 8 00:25:47.062380 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 8 00:25:47.075288 jq[1535]: true May 8 00:25:47.069309 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 00:25:47.069573 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 8 00:25:47.069852 systemd[1]: motdgen.service: Deactivated successfully. May 8 00:25:47.070092 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 8 00:25:47.073571 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 00:25:47.073795 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 8 00:25:47.103570 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1226) May 8 00:25:47.111756 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 00:25:47.111793 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 8 00:25:47.113655 (ntainerd)[1543]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 8 00:25:47.115145 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:25:47.115173 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
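[Editor's note] The resize logged here grows the root filesystem from 553472 to 1864699 4-KiB blocks, i.e. from roughly 2.1 GiB to roughly 7.1 GiB, filling the virtual disk on first boot. For an ext4 root this boils down to an online resize2fs, as a minimal sketch (assuming /dev/vda9 is the root partition, as in this log):

    # Sketch: the ext4 online grow behind extend-filesystems.service.
    # With no size argument, resize2fs expands the filesystem to fill
    # its block device.
    #   553472 * 4096 bytes  ~= 2.1 GiB  (size before)
    #   1864699 * 4096 bytes ~= 7.1 GiB  (size after, per the log)
    import subprocess

    subprocess.run(["resize2fs", "/dev/vda9"], check=True)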
May 8 00:25:47.131901 jq[1542]: true May 8 00:25:47.132156 tar[1539]: linux-arm64/helm May 8 00:25:47.150973 update_engine[1529]: I20250508 00:25:47.142316 1529 main.cc:92] Flatcar Update Engine starting May 8 00:25:47.160575 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 8 00:25:47.161040 systemd[1]: Started update-engine.service - Update Engine. May 8 00:25:47.227557 update_engine[1529]: I20250508 00:25:47.161583 1529 update_check_scheduler.cc:74] Next update check in 11m45s May 8 00:25:47.163604 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 8 00:25:47.227751 extend-filesystems[1537]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 8 00:25:47.227751 extend-filesystems[1537]: old_desc_blocks = 1, new_desc_blocks = 1 May 8 00:25:47.227751 extend-filesystems[1537]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 8 00:25:47.171434 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 8 00:25:47.233946 extend-filesystems[1511]: Resized filesystem in /dev/vda9 May 8 00:25:47.227845 systemd-logind[1524]: Watching system buttons on /dev/input/event0 (Power Button) May 8 00:25:47.231405 systemd-logind[1524]: New seat seat0. May 8 00:25:47.234235 systemd[1]: Started systemd-logind.service - User Login Management. May 8 00:25:47.237548 locksmithd[1569]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 00:25:47.239125 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 00:25:47.239370 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 8 00:25:47.263039 bash[1568]: Updated "/home/core/.ssh/authorized_keys" May 8 00:25:47.265465 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 8 00:25:47.267567 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 8 00:25:47.365105 containerd[1543]: time="2025-05-08T00:25:47.364957190Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 8 00:25:47.395276 containerd[1543]: time="2025-05-08T00:25:47.395224684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 00:25:47.397054 containerd[1543]: time="2025-05-08T00:25:47.396564795Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 00:25:47.397054 containerd[1543]: time="2025-05-08T00:25:47.396596741Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 00:25:47.397054 containerd[1543]: time="2025-05-08T00:25:47.396612510Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 00:25:47.397054 containerd[1543]: time="2025-05-08T00:25:47.396771302Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 8 00:25:47.397054 containerd[1543]: time="2025-05-08T00:25:47.396787030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 May 8 00:25:47.397054 containerd[1543]: time="2025-05-08T00:25:47.396843283Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:25:47.397054 containerd[1543]: time="2025-05-08T00:25:47.396855620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 00:25:47.397246 containerd[1543]: time="2025-05-08T00:25:47.397071809Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:25:47.397246 containerd[1543]: time="2025-05-08T00:25:47.397088355Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 00:25:47.397246 containerd[1543]: time="2025-05-08T00:25:47.397101468Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:25:47.397246 containerd[1543]: time="2025-05-08T00:25:47.397111273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 00:25:47.397246 containerd[1543]: time="2025-05-08T00:25:47.397189872Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 00:25:47.397467 containerd[1543]: time="2025-05-08T00:25:47.397384409Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 00:25:47.397552 containerd[1543]: time="2025-05-08T00:25:47.397514564Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:25:47.397552 containerd[1543]: time="2025-05-08T00:25:47.397535154Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 00:25:47.397642 containerd[1543]: time="2025-05-08T00:25:47.397608973Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 8 00:25:47.397678 containerd[1543]: time="2025-05-08T00:25:47.397647374Z" level=info msg="metadata content store policy set" policy=shared May 8 00:25:47.402746 containerd[1543]: time="2025-05-08T00:25:47.402715930Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 00:25:47.402818 containerd[1543]: time="2025-05-08T00:25:47.402764217Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 00:25:47.402818 containerd[1543]: time="2025-05-08T00:25:47.402788320Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 8 00:25:47.402818 containerd[1543]: time="2025-05-08T00:25:47.402806703Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 8 00:25:47.402869 containerd[1543]: time="2025-05-08T00:25:47.402825699Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 May 8 00:25:47.402997 containerd[1543]: time="2025-05-08T00:25:47.402976811Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 00:25:47.406708 containerd[1543]: time="2025-05-08T00:25:47.406394576Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 00:25:47.406708 containerd[1543]: time="2025-05-08T00:25:47.406580575Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 8 00:25:47.406708 containerd[1543]: time="2025-05-08T00:25:47.406606476Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 8 00:25:47.406708 containerd[1543]: time="2025-05-08T00:25:47.406631926Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 8 00:25:47.406708 containerd[1543]: time="2025-05-08T00:25:47.406652598Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 00:25:47.406708 containerd[1543]: time="2025-05-08T00:25:47.406672819Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 00:25:47.407294 containerd[1543]: time="2025-05-08T00:25:47.406690508Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 00:25:47.407392 containerd[1543]: time="2025-05-08T00:25:47.407374576Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 00:25:47.407524 containerd[1543]: time="2025-05-08T00:25:47.407499665Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 00:25:47.407614 containerd[1543]: time="2025-05-08T00:25:47.407600774Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 00:25:47.407727 containerd[1543]: time="2025-05-08T00:25:47.407710176Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 00:25:47.407786 containerd[1543]: time="2025-05-08T00:25:47.407773496Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 8 00:25:47.407912 containerd[1543]: time="2025-05-08T00:25:47.407897074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 00:25:47.407977 containerd[1543]: time="2025-05-08T00:25:47.407965624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 00:25:47.408556 containerd[1543]: time="2025-05-08T00:25:47.408037319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 00:25:47.408556 containerd[1543]: time="2025-05-08T00:25:47.408055171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 00:25:47.408556 containerd[1543]: time="2025-05-08T00:25:47.408069020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 00:25:47.408556 containerd[1543]: time="2025-05-08T00:25:47.408082052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 May 8 00:25:47.408556 containerd[1543]: time="2025-05-08T00:25:47.408098801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 00:25:47.408556 containerd[1543]: time="2025-05-08T00:25:47.408116490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 00:25:47.408556 containerd[1543]: time="2025-05-08T00:25:47.408130502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 8 00:25:47.408556 containerd[1543]: time="2025-05-08T00:25:47.408146312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 8 00:25:47.408556 containerd[1543]: time="2025-05-08T00:25:47.408158078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 00:25:47.408556 containerd[1543]: time="2025-05-08T00:25:47.408169680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 8 00:25:47.408556 containerd[1543]: time="2025-05-08T00:25:47.408181853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 00:25:47.408556 containerd[1543]: time="2025-05-08T00:25:47.408201258Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 8 00:25:47.408556 containerd[1543]: time="2025-05-08T00:25:47.408225279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 8 00:25:47.408556 containerd[1543]: time="2025-05-08T00:25:47.408240068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 00:25:47.408556 containerd[1543]: time="2025-05-08T00:25:47.408250689Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 00:25:47.408885 containerd[1543]: time="2025-05-08T00:25:47.408363849Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 00:25:47.408885 containerd[1543]: time="2025-05-08T00:25:47.408380476Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 8 00:25:47.408885 containerd[1543]: time="2025-05-08T00:25:47.408391343Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 8 00:25:47.408885 containerd[1543]: time="2025-05-08T00:25:47.408410053Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 8 00:25:47.408885 containerd[1543]: time="2025-05-08T00:25:47.408419980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 00:25:47.408885 containerd[1543]: time="2025-05-08T00:25:47.408431623Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 8 00:25:47.408885 containerd[1543]: time="2025-05-08T00:25:47.408440937Z" level=info msg="NRI interface is disabled by configuration." May 8 00:25:47.408885 containerd[1543]: time="2025-05-08T00:25:47.408451395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 8 00:25:47.409054 containerd[1543]: time="2025-05-08T00:25:47.408728250Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 00:25:47.409054 containerd[1543]: time="2025-05-08T00:25:47.408788016Z" level=info msg="Connect containerd service" May 8 00:25:47.409054 containerd[1543]: time="2025-05-08T00:25:47.408813426Z" level=info msg="using legacy CRI server" May 8 00:25:47.409054 containerd[1543]: time="2025-05-08T00:25:47.408820085Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 00:25:47.409054 containerd[1543]: time="2025-05-08T00:25:47.408900237Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 00:25:47.409636 containerd[1543]: time="2025-05-08T00:25:47.409585734Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:25:47.409821 
containerd[1543]: time="2025-05-08T00:25:47.409774716Z" level=info msg="Start subscribing containerd event" May 8 00:25:47.409856 containerd[1543]: time="2025-05-08T00:25:47.409842857Z" level=info msg="Start recovering state" May 8 00:25:47.409934 containerd[1543]: time="2025-05-08T00:25:47.409922764Z" level=info msg="Start event monitor" May 8 00:25:47.409953 containerd[1543]: time="2025-05-08T00:25:47.409943067Z" level=info msg="Start snapshots syncer" May 8 00:25:47.409980 containerd[1543]: time="2025-05-08T00:25:47.409968069Z" level=info msg="Start cni network conf syncer for default" May 8 00:25:47.409980 containerd[1543]: time="2025-05-08T00:25:47.409976770Z" level=info msg="Start streaming server" May 8 00:25:47.410067 containerd[1543]: time="2025-05-08T00:25:47.410045933Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 00:25:47.410105 containerd[1543]: time="2025-05-08T00:25:47.410094792Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 00:25:47.410321 systemd[1]: Started containerd.service - containerd container runtime. May 8 00:25:47.411856 containerd[1543]: time="2025-05-08T00:25:47.411542711Z" level=info msg="containerd successfully booted in 0.048130s" May 8 00:25:47.547736 tar[1539]: linux-arm64/LICENSE May 8 00:25:47.547864 tar[1539]: linux-arm64/README.md May 8 00:25:47.560403 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 8 00:25:47.635820 sshd_keygen[1534]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 00:25:47.655121 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 00:25:47.664306 systemd[1]: Starting issuegen.service - Generate /run/issue... May 8 00:25:47.669859 systemd[1]: issuegen.service: Deactivated successfully. May 8 00:25:47.670123 systemd[1]: Finished issuegen.service - Generate /run/issue. May 8 00:25:47.672737 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 8 00:25:47.685823 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 8 00:25:47.696309 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 00:25:47.698842 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 8 00:25:47.700194 systemd[1]: Reached target getty.target - Login Prompts. May 8 00:25:47.961854 systemd-networkd[1232]: eth0: Gained IPv6LL May 8 00:25:47.964376 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 00:25:47.966359 systemd[1]: Reached target network-online.target - Network is Online. May 8 00:25:47.979320 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 8 00:25:47.982555 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:25:47.985612 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 8 00:25:48.003786 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 00:25:48.005533 systemd[1]: coreos-metadata.service: Deactivated successfully. May 8 00:25:48.005793 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 8 00:25:48.009181 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 8 00:25:48.551213 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:25:48.553008 systemd[1]: Reached target multi-user.target - Multi-User System. 
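
The "failed to load cni during init" error above is benign at this stage: the CRI plugin looked in /etc/cni/net.d (the NetworkPluginConfDir shown in the config dump) and found nothing, and it re-syncs once a config appears. As a rough illustration only -- the file name, bridge name, and subnet below are assumptions, not values from this host -- a minimal bridge conflist that would satisfy the loader looks like:

    # hedged sketch: install a minimal CNI bridge config (all values illustrative)
    cat <<'EOF' > /etc/cni/net.d/10-bridge.conflist
    {
      "cniVersion": "1.0.0",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.85.0.0/16" }]]
          }
        }
      ]
    }
    EOF

On a kubeadm-style cluster this file is normally installed by the pod network add-on rather than written by hand.
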
May 8 00:25:48.554265 systemd[1]: Startup finished in 5.636s (kernel) + 3.821s (userspace) = 9.458s. May 8 00:25:48.555594 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:25:49.073628 kubelet[1645]: E0508 00:25:49.073561 1645 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:25:49.076314 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:25:49.076547 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:25:53.030530 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 00:25:53.044236 systemd[1]: Started sshd@0-10.0.0.83:22-10.0.0.1:55720.service - OpenSSH per-connection server daemon (10.0.0.1:55720). May 8 00:25:53.102784 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 55720 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:25:53.104488 sshd[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:25:53.115932 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 8 00:25:53.125240 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 8 00:25:53.129061 systemd-logind[1524]: New session 1 of user core. May 8 00:25:53.135061 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 00:25:53.140450 systemd[1]: Starting user@500.service - User Manager for UID 500... May 8 00:25:53.146674 (systemd)[1665]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 00:25:53.220626 systemd[1665]: Queued start job for default target default.target. May 8 00:25:53.220956 systemd[1665]: Created slice app.slice - User Application Slice. May 8 00:25:53.220974 systemd[1665]: Reached target paths.target - Paths. May 8 00:25:53.220985 systemd[1665]: Reached target timers.target - Timers. May 8 00:25:53.230098 systemd[1665]: Starting dbus.socket - D-Bus User Message Bus Socket... May 8 00:25:53.236139 systemd[1665]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 00:25:53.236189 systemd[1665]: Reached target sockets.target - Sockets. May 8 00:25:53.236200 systemd[1665]: Reached target basic.target - Basic System. May 8 00:25:53.236235 systemd[1665]: Reached target default.target - Main User Target. May 8 00:25:53.236258 systemd[1665]: Startup finished in 84ms. May 8 00:25:53.236616 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 00:25:53.237863 systemd[1]: Started session-1.scope - Session 1 of User core. May 8 00:25:53.303317 systemd[1]: Started sshd@1-10.0.0.83:22-10.0.0.1:55734.service - OpenSSH per-connection server daemon (10.0.0.1:55734). May 8 00:25:53.335981 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 55734 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:25:53.336758 sshd[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:25:53.341175 systemd-logind[1524]: New session 2 of user core. May 8 00:25:53.350377 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 8 00:25:53.403197 sshd[1677]: pam_unix(sshd:session): session closed for user core May 8 00:25:53.416327 systemd[1]: Started sshd@2-10.0.0.83:22-10.0.0.1:55740.service - OpenSSH per-connection server daemon (10.0.0.1:55740). May 8 00:25:53.416688 systemd[1]: sshd@1-10.0.0.83:22-10.0.0.1:55734.service: Deactivated successfully. May 8 00:25:53.418313 systemd-logind[1524]: Session 2 logged out. Waiting for processes to exit. May 8 00:25:53.418842 systemd[1]: session-2.scope: Deactivated successfully. May 8 00:25:53.420032 systemd-logind[1524]: Removed session 2. May 8 00:25:53.452067 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 55740 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:25:53.453235 sshd[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:25:53.457620 systemd-logind[1524]: New session 3 of user core. May 8 00:25:53.469276 systemd[1]: Started session-3.scope - Session 3 of User core. May 8 00:25:53.520873 sshd[1682]: pam_unix(sshd:session): session closed for user core May 8 00:25:53.538287 systemd[1]: Started sshd@3-10.0.0.83:22-10.0.0.1:55752.service - OpenSSH per-connection server daemon (10.0.0.1:55752). May 8 00:25:53.538727 systemd[1]: sshd@2-10.0.0.83:22-10.0.0.1:55740.service: Deactivated successfully. May 8 00:25:53.540261 systemd[1]: session-3.scope: Deactivated successfully. May 8 00:25:53.543177 systemd-logind[1524]: Session 3 logged out. Waiting for processes to exit. May 8 00:25:53.544888 systemd-logind[1524]: Removed session 3. May 8 00:25:53.575730 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 55752 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:25:53.576950 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:25:53.582059 systemd-logind[1524]: New session 4 of user core. May 8 00:25:53.592261 systemd[1]: Started session-4.scope - Session 4 of User core. May 8 00:25:53.658075 sshd[1690]: pam_unix(sshd:session): session closed for user core May 8 00:25:53.667250 systemd[1]: Started sshd@4-10.0.0.83:22-10.0.0.1:55756.service - OpenSSH per-connection server daemon (10.0.0.1:55756). May 8 00:25:53.667654 systemd[1]: sshd@3-10.0.0.83:22-10.0.0.1:55752.service: Deactivated successfully. May 8 00:25:53.675214 systemd-logind[1524]: Session 4 logged out. Waiting for processes to exit. May 8 00:25:53.675410 systemd[1]: session-4.scope: Deactivated successfully. May 8 00:25:53.677090 systemd-logind[1524]: Removed session 4. May 8 00:25:53.711053 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 55756 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:25:53.712301 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:25:53.716298 systemd-logind[1524]: New session 5 of user core. May 8 00:25:53.727253 systemd[1]: Started session-5.scope - Session 5 of User core. May 8 00:25:53.796535 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 00:25:53.799714 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:25:53.812896 sudo[1705]: pam_unix(sudo:session): session closed for user root May 8 00:25:53.814666 sshd[1698]: pam_unix(sshd:session): session closed for user core May 8 00:25:53.830286 systemd[1]: Started sshd@5-10.0.0.83:22-10.0.0.1:55772.service - OpenSSH per-connection server daemon (10.0.0.1:55772). 
May 8 00:25:53.831256 systemd[1]: sshd@4-10.0.0.83:22-10.0.0.1:55756.service: Deactivated successfully. May 8 00:25:53.832710 systemd[1]: session-5.scope: Deactivated successfully. May 8 00:25:53.833780 systemd-logind[1524]: Session 5 logged out. Waiting for processes to exit. May 8 00:25:53.835156 systemd-logind[1524]: Removed session 5. May 8 00:25:53.864276 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 55772 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:25:53.865531 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:25:53.870113 systemd-logind[1524]: New session 6 of user core. May 8 00:25:53.883334 systemd[1]: Started session-6.scope - Session 6 of User core. May 8 00:25:53.935411 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 00:25:53.936035 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:25:53.938864 sudo[1715]: pam_unix(sudo:session): session closed for user root May 8 00:25:53.943263 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 8 00:25:53.943521 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:25:53.967389 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 8 00:25:53.968130 auditctl[1718]: No rules May 8 00:25:53.968491 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:25:53.968690 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 8 00:25:53.970853 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 8 00:25:53.995527 augenrules[1737]: No rules May 8 00:25:53.996606 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 8 00:25:53.998889 sudo[1714]: pam_unix(sudo:session): session closed for user root May 8 00:25:54.003871 sshd[1707]: pam_unix(sshd:session): session closed for user core May 8 00:25:54.010248 systemd[1]: Started sshd@6-10.0.0.83:22-10.0.0.1:55780.service - OpenSSH per-connection server daemon (10.0.0.1:55780). May 8 00:25:54.010613 systemd[1]: sshd@5-10.0.0.83:22-10.0.0.1:55772.service: Deactivated successfully. May 8 00:25:54.012283 systemd-logind[1524]: Session 6 logged out. Waiting for processes to exit. May 8 00:25:54.016576 systemd[1]: session-6.scope: Deactivated successfully. May 8 00:25:54.018508 systemd-logind[1524]: Removed session 6. May 8 00:25:54.040967 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 55780 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:25:54.042166 sshd[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:25:54.046014 systemd-logind[1524]: New session 7 of user core. May 8 00:25:54.059317 systemd[1]: Started session-7.scope - Session 7 of User core. May 8 00:25:54.110765 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 00:25:54.113470 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:25:54.422236 systemd[1]: Starting docker.service - Docker Application Container Engine... 
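
The sudo sequence above deletes two files under /etc/audit/rules.d and restarts audit-rules; both auditctl and augenrules then report "No rules", meaning the compiled kernel ruleset is empty. augenrules is the helper that concatenates /etc/audit/rules.d/*.rules into the loaded ruleset, so an empty directory yields an empty set. A standard way to confirm this by hand (generic auditctl usage, not taken from this log) is:

    auditctl -l    # prints "No rules" when the kernel audit ruleset is empty
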
May 8 00:25:54.422488 (dockerd)[1768]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 8 00:25:54.685433 dockerd[1768]: time="2025-05-08T00:25:54.685025188Z" level=info msg="Starting up" May 8 00:25:54.923933 dockerd[1768]: time="2025-05-08T00:25:54.923839577Z" level=info msg="Loading containers: start." May 8 00:25:55.042025 kernel: Initializing XFRM netlink socket May 8 00:25:55.122189 systemd-networkd[1232]: docker0: Link UP May 8 00:25:55.145336 dockerd[1768]: time="2025-05-08T00:25:55.145288860Z" level=info msg="Loading containers: done." May 8 00:25:55.157165 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2122715378-merged.mount: Deactivated successfully. May 8 00:25:55.158294 dockerd[1768]: time="2025-05-08T00:25:55.158160064Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 00:25:55.158294 dockerd[1768]: time="2025-05-08T00:25:55.158265390Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 8 00:25:55.158412 dockerd[1768]: time="2025-05-08T00:25:55.158363946Z" level=info msg="Daemon has completed initialization" May 8 00:25:55.189543 dockerd[1768]: time="2025-05-08T00:25:55.189323987Z" level=info msg="API listen on /run/docker.sock" May 8 00:25:55.189607 systemd[1]: Started docker.service - Docker Application Container Engine. May 8 00:25:56.074086 containerd[1543]: time="2025-05-08T00:25:56.074036351Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 8 00:25:56.756482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount77672018.mount: Deactivated successfully. 
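
dockerd here initializes the XFRM netlink socket, brings up the docker0 link via systemd-networkd, and settles on the overlay2 storage driver; the warning about native diff only means the kernel's CONFIG_OVERLAY_FS_REDIRECT_DIR forces the slower naive diff path when building images. If you need to confirm which driver a daemon selected, one standard check (generic docker CLI usage, not from this log) is:

    docker info --format '{{.Driver}}'    # expected to print "overlay2" on this host
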
May 8 00:25:58.215491 containerd[1543]: time="2025-05-08T00:25:58.215438343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:25:58.216459 containerd[1543]: time="2025-05-08T00:25:58.216422987Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152" May 8 00:25:58.217235 containerd[1543]: time="2025-05-08T00:25:58.217197084Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:25:58.220549 containerd[1543]: time="2025-05-08T00:25:58.220515959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:25:58.222253 containerd[1543]: time="2025-05-08T00:25:58.222217943Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.148136559s" May 8 00:25:58.222306 containerd[1543]: time="2025-05-08T00:25:58.222261395Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 8 00:25:58.240541 containerd[1543]: time="2025-05-08T00:25:58.240508793Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 8 00:25:59.327572 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 00:25:59.337250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:25:59.428559 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:25:59.432299 (kubelet)[1997]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:25:59.473896 kubelet[1997]: E0508 00:25:59.473840 1997 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:25:59.476836 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:25:59.477101 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
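
This kubelet failure (and the identical one at 00:25:49) is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml is only written when the node is initialized with kubeadm init or joined with kubeadm join, so until then the unit exits with status 1 and systemd keeps scheduling restarts. For orientation only -- these values are illustrative assumptions, not the file later used on this host -- a minimal KubeletConfiguration of the kind kubeadm writes there is:

    # hedged sketch of /var/lib/kubelet/config.yaml (illustrative values)
    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs          # matches the CgroupDriver logged at 00:26:15
    staticPodPath: /etc/kubernetes/manifests
    EOF
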
May 8 00:25:59.998484 containerd[1543]: time="2025-05-08T00:25:59.998438344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:25:59.999775 containerd[1543]: time="2025-05-08T00:25:59.999546880Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552" May 8 00:26:00.000534 containerd[1543]: time="2025-05-08T00:26:00.000477938Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:26:00.003510 containerd[1543]: time="2025-05-08T00:26:00.003476586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:26:00.004806 containerd[1543]: time="2025-05-08T00:26:00.004716584Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.764167004s" May 8 00:26:00.004806 containerd[1543]: time="2025-05-08T00:26:00.004750189Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 8 00:26:00.026567 containerd[1543]: time="2025-05-08T00:26:00.026530691Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 8 00:26:01.332001 containerd[1543]: time="2025-05-08T00:26:01.331936744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:26:01.332719 containerd[1543]: time="2025-05-08T00:26:01.332674519Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947" May 8 00:26:01.334429 containerd[1543]: time="2025-05-08T00:26:01.334379943Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:26:01.346236 containerd[1543]: time="2025-05-08T00:26:01.346198479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:26:01.347308 containerd[1543]: time="2025-05-08T00:26:01.347251607Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.32068097s" May 8 00:26:01.347308 containerd[1543]: time="2025-05-08T00:26:01.347282307Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 8 00:26:01.364757 containerd[1543]: 
time="2025-05-08T00:26:01.364712576Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 8 00:26:02.334372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount654597006.mount: Deactivated successfully. May 8 00:26:02.667451 containerd[1543]: time="2025-05-08T00:26:02.667341437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:26:02.669177 containerd[1543]: time="2025-05-08T00:26:02.669147292Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707" May 8 00:26:02.672677 containerd[1543]: time="2025-05-08T00:26:02.672623215Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:26:02.682900 containerd[1543]: time="2025-05-08T00:26:02.682826738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:26:02.683584 containerd[1543]: time="2025-05-08T00:26:02.683302301Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.318543979s" May 8 00:26:02.683584 containerd[1543]: time="2025-05-08T00:26:02.683333591Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 8 00:26:02.706653 containerd[1543]: time="2025-05-08T00:26:02.706615237Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 8 00:26:03.263217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4227917128.mount: Deactivated successfully. 
May 8 00:26:04.079130 containerd[1543]: time="2025-05-08T00:26:04.078462428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:26:04.079130 containerd[1543]: time="2025-05-08T00:26:04.078831639Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" May 8 00:26:04.080671 containerd[1543]: time="2025-05-08T00:26:04.080246746Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:26:04.083272 containerd[1543]: time="2025-05-08T00:26:04.083245210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:26:04.084699 containerd[1543]: time="2025-05-08T00:26:04.084570320Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.377916418s" May 8 00:26:04.084699 containerd[1543]: time="2025-05-08T00:26:04.084609205Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 8 00:26:04.101618 containerd[1543]: time="2025-05-08T00:26:04.101587286Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 8 00:26:04.576646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1006045546.mount: Deactivated successfully. 
May 8 00:26:04.581278 containerd[1543]: time="2025-05-08T00:26:04.581235975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:26:04.582366 containerd[1543]: time="2025-05-08T00:26:04.582331941Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" May 8 00:26:04.583261 containerd[1543]: time="2025-05-08T00:26:04.583206622Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:26:04.585348 containerd[1543]: time="2025-05-08T00:26:04.585318900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:26:04.586308 containerd[1543]: time="2025-05-08T00:26:04.586271111Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 484.646023ms" May 8 00:26:04.586382 containerd[1543]: time="2025-05-08T00:26:04.586308954Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 8 00:26:04.604105 containerd[1543]: time="2025-05-08T00:26:04.604068871Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 8 00:26:05.149722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1984786715.mount: Deactivated successfully. May 8 00:26:08.559535 containerd[1543]: time="2025-05-08T00:26:08.559478045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:26:08.560008 containerd[1543]: time="2025-05-08T00:26:08.559955900Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" May 8 00:26:08.561020 containerd[1543]: time="2025-05-08T00:26:08.560728095Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:26:08.565134 containerd[1543]: time="2025-05-08T00:26:08.565083787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:26:08.567933 containerd[1543]: time="2025-05-08T00:26:08.567755308Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.963643392s" May 8 00:26:08.567933 containerd[1543]: time="2025-05-08T00:26:08.567791155Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 8 00:26:09.525924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
May 8 00:26:09.536173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:26:09.703164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:26:09.706392 (kubelet)[2235]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:26:09.743623 kubelet[2235]: E0508 00:26:09.743568 2235 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:26:09.746219 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:26:09.746414 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:26:13.679958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:26:13.690353 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:26:13.710360 systemd[1]: Reloading requested from client PID 2252 ('systemctl') (unit session-7.scope)... May 8 00:26:13.710374 systemd[1]: Reloading... May 8 00:26:13.764030 zram_generator::config[2288]: No configuration found. May 8 00:26:13.866349 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:26:13.921531 systemd[1]: Reloading finished in 210 ms. May 8 00:26:13.959761 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:26:13.963847 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:26:13.964103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:26:13.970269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:26:14.052310 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:26:14.057451 (kubelet)[2351]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:26:14.106784 kubelet[2351]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:26:14.106784 kubelet[2351]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:26:14.106784 kubelet[2351]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
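
During the reload above, systemd also flagged docker.socket line 6 for pointing at the legacy /var/run directory and transparently rewrote it to /run/docker.sock. The permanent fix it asks for would be a drop-in along these lines (path and file name assumed; the empty ListenStream= clears the inherited value before setting the new one):

    # hedged sketch: override the socket path without editing the shipped unit
    mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' > /etc/systemd/system/docker.socket.d/override.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload
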
May 8 00:26:14.107163 kubelet[2351]: I0508 00:26:14.106881 2351 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:26:14.932278 kubelet[2351]: I0508 00:26:14.932233 2351 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 00:26:14.932278 kubelet[2351]: I0508 00:26:14.932265 2351 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:26:14.932483 kubelet[2351]: I0508 00:26:14.932468 2351 server.go:927] "Client rotation is on, will bootstrap in background" May 8 00:26:14.987175 kubelet[2351]: E0508 00:26:14.987140 2351 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.83:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.83:6443: connect: connection refused May 8 00:26:14.990647 kubelet[2351]: I0508 00:26:14.987662 2351 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:26:15.005244 kubelet[2351]: I0508 00:26:15.004815 2351 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 8 00:26:15.006332 kubelet[2351]: I0508 00:26:15.006273 2351 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:26:15.006509 kubelet[2351]: I0508 00:26:15.006329 2351 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 00:26:15.006593 kubelet[2351]: I0508 00:26:15.006578 2351 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:26:15.006593 kubelet[2351]: I0508 00:26:15.006587 2351 container_manager_linux.go:301] "Creating device plugin manager" May 8 00:26:15.006915 kubelet[2351]: I0508 00:26:15.006891 2351 state_mem.go:36] "Initialized new in-memory state store" May 8 
00:26:15.008078 kubelet[2351]: I0508 00:26:15.008048 2351 kubelet.go:400] "Attempting to sync node with API server" May 8 00:26:15.008110 kubelet[2351]: I0508 00:26:15.008100 2351 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:26:15.008332 kubelet[2351]: I0508 00:26:15.008315 2351 kubelet.go:312] "Adding apiserver pod source" May 8 00:26:15.008332 kubelet[2351]: I0508 00:26:15.008331 2351 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:26:15.009079 kubelet[2351]: W0508 00:26:15.008570 2351 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused May 8 00:26:15.009079 kubelet[2351]: E0508 00:26:15.008633 2351 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused May 8 00:26:15.009079 kubelet[2351]: W0508 00:26:15.008978 2351 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.83:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused May 8 00:26:15.009079 kubelet[2351]: E0508 00:26:15.009031 2351 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.83:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused May 8 00:26:15.013179 kubelet[2351]: I0508 00:26:15.013083 2351 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 00:26:15.014029 kubelet[2351]: I0508 00:26:15.013604 2351 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:26:15.014029 kubelet[2351]: W0508 00:26:15.013852 2351 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 8 00:26:15.014682 kubelet[2351]: I0508 00:26:15.014639 2351 server.go:1264] "Started kubelet" May 8 00:26:15.014949 kubelet[2351]: I0508 00:26:15.014879 2351 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:26:15.016104 kubelet[2351]: I0508 00:26:15.015935 2351 server.go:455] "Adding debug handlers to kubelet server" May 8 00:26:15.020041 kubelet[2351]: I0508 00:26:15.017742 2351 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:26:15.020041 kubelet[2351]: I0508 00:26:15.018041 2351 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:26:15.020041 kubelet[2351]: I0508 00:26:15.018285 2351 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:26:15.023541 kubelet[2351]: E0508 00:26:15.021851 2351 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:26:15.023541 kubelet[2351]: I0508 00:26:15.022033 2351 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 00:26:15.023541 kubelet[2351]: I0508 00:26:15.022139 2351 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:26:15.023541 kubelet[2351]: I0508 00:26:15.022212 2351 reconciler.go:26] "Reconciler: start to sync state" May 8 00:26:15.023541 kubelet[2351]: W0508 00:26:15.022490 2351 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused May 8 00:26:15.023541 kubelet[2351]: E0508 00:26:15.022533 2351 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused May 8 00:26:15.023541 kubelet[2351]: E0508 00:26:15.018171 2351 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.83:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.83:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d65a18cb7d871 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:26:15.014619249 +0000 UTC m=+0.953382673,LastTimestamp:2025-05-08 00:26:15.014619249 +0000 UTC m=+0.953382673,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:26:15.025920 kubelet[2351]: I0508 00:26:15.025595 2351 factory.go:221] Registration of the systemd container factory successfully May 8 00:26:15.025920 kubelet[2351]: I0508 00:26:15.025690 2351 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:26:15.026531 kubelet[2351]: E0508 00:26:15.026494 2351 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:26:15.026812 kubelet[2351]: I0508 00:26:15.026784 2351 factory.go:221] Registration of the containerd container factory successfully May 8 00:26:15.027045 kubelet[2351]: E0508 00:26:15.027010 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="200ms" May 8 00:26:15.041382 kubelet[2351]: I0508 00:26:15.041233 2351 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:26:15.043091 kubelet[2351]: I0508 00:26:15.042496 2351 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:26:15.043091 kubelet[2351]: I0508 00:26:15.043047 2351 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:26:15.043091 kubelet[2351]: I0508 00:26:15.043077 2351 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:26:15.043251 kubelet[2351]: E0508 00:26:15.043123 2351 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:26:15.044546 kubelet[2351]: W0508 00:26:15.044499 2351 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused May 8 00:26:15.044546 kubelet[2351]: E0508 00:26:15.044541 2351 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused May 8 00:26:15.045557 kubelet[2351]: I0508 00:26:15.045529 2351 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:26:15.045557 kubelet[2351]: I0508 00:26:15.045547 2351 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:26:15.045647 kubelet[2351]: I0508 00:26:15.045566 2351 state_mem.go:36] "Initialized new in-memory state store" May 8 00:26:15.123507 kubelet[2351]: I0508 00:26:15.123468 2351 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:26:15.123843 kubelet[2351]: E0508 00:26:15.123772 2351 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" May 8 00:26:15.144020 kubelet[2351]: E0508 00:26:15.143972 2351 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 8 00:26:15.195537 kubelet[2351]: I0508 00:26:15.195453 2351 policy_none.go:49] "None policy: Start" May 8 00:26:15.196371 kubelet[2351]: I0508 00:26:15.196342 2351 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:26:15.196371 kubelet[2351]: I0508 00:26:15.196370 2351 state_mem.go:35] "Initializing new in-memory state store" May 8 00:26:15.203205 kubelet[2351]: I0508 00:26:15.203149 2351 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:26:15.203459 kubelet[2351]: I0508 00:26:15.203423 2351 container_log_manager.go:186] "Initializing container log rotate 
workers" workers=1 monitorPeriod="10s" May 8 00:26:15.203549 kubelet[2351]: I0508 00:26:15.203539 2351 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:26:15.204817 kubelet[2351]: E0508 00:26:15.204797 2351 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 8 00:26:15.227583 kubelet[2351]: E0508 00:26:15.227525 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="400ms" May 8 00:26:15.324938 kubelet[2351]: I0508 00:26:15.324906 2351 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:26:15.325262 kubelet[2351]: E0508 00:26:15.325227 2351 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" May 8 00:26:15.344660 kubelet[2351]: I0508 00:26:15.344591 2351 topology_manager.go:215] "Topology Admit Handler" podUID="66bb29d6eeffe5e86a3324636da2b4fa" podNamespace="kube-system" podName="kube-apiserver-localhost" May 8 00:26:15.348683 kubelet[2351]: I0508 00:26:15.348566 2351 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 8 00:26:15.360280 kubelet[2351]: I0508 00:26:15.360158 2351 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 8 00:26:15.425010 kubelet[2351]: I0508 00:26:15.424934 2351 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66bb29d6eeffe5e86a3324636da2b4fa-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"66bb29d6eeffe5e86a3324636da2b4fa\") " pod="kube-system/kube-apiserver-localhost" May 8 00:26:15.425010 kubelet[2351]: I0508 00:26:15.425004 2351 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:26:15.425208 kubelet[2351]: I0508 00:26:15.425026 2351 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:26:15.425208 kubelet[2351]: I0508 00:26:15.425045 2351 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:26:15.425208 kubelet[2351]: I0508 00:26:15.425063 2351 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 8 00:26:15.425208 kubelet[2351]: I0508 00:26:15.425081 2351 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66bb29d6eeffe5e86a3324636da2b4fa-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"66bb29d6eeffe5e86a3324636da2b4fa\") " pod="kube-system/kube-apiserver-localhost" May 8 00:26:15.425208 kubelet[2351]: I0508 00:26:15.425096 2351 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66bb29d6eeffe5e86a3324636da2b4fa-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"66bb29d6eeffe5e86a3324636da2b4fa\") " pod="kube-system/kube-apiserver-localhost" May 8 00:26:15.425343 kubelet[2351]: I0508 00:26:15.425110 2351 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:26:15.425343 kubelet[2351]: I0508 00:26:15.425124 2351 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:26:15.628331 kubelet[2351]: E0508 00:26:15.628278 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="800ms" May 8 00:26:15.661870 kubelet[2351]: E0508 00:26:15.661827 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:26:15.662513 containerd[1543]: time="2025-05-08T00:26:15.662476425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:66bb29d6eeffe5e86a3324636da2b4fa,Namespace:kube-system,Attempt:0,}" May 8 00:26:15.663775 kubelet[2351]: E0508 00:26:15.663614 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:26:15.664196 containerd[1543]: time="2025-05-08T00:26:15.664003078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 8 00:26:15.664334 kubelet[2351]: E0508 00:26:15.664318 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:26:15.664620 containerd[1543]: time="2025-05-08T00:26:15.664595537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 8 00:26:15.727270 
kubelet[2351]: I0508 00:26:15.727240 2351 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:26:15.727912 kubelet[2351]: E0508 00:26:15.727649 2351 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" May 8 00:26:15.909927 kubelet[2351]: W0508 00:26:15.909800 2351 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused May 8 00:26:15.909927 kubelet[2351]: E0508 00:26:15.909864 2351 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused May 8 00:26:15.938714 kubelet[2351]: W0508 00:26:15.938646 2351 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused May 8 00:26:15.938714 kubelet[2351]: E0508 00:26:15.938702 2351 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused May 8 00:26:16.063654 kubelet[2351]: W0508 00:26:16.063568 2351 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused May 8 00:26:16.063654 kubelet[2351]: E0508 00:26:16.063645 2351 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused May 8 00:26:16.214367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount374234402.mount: Deactivated successfully. 
May 8 00:26:16.219832 containerd[1543]: time="2025-05-08T00:26:16.219787557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:26:16.220623 containerd[1543]: time="2025-05-08T00:26:16.220592073Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:26:16.221156 containerd[1543]: time="2025-05-08T00:26:16.220980525Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 8 00:26:16.221628 containerd[1543]: time="2025-05-08T00:26:16.221601881Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
May 8 00:26:16.222137 containerd[1543]: time="2025-05-08T00:26:16.222112147Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:26:16.223020 containerd[1543]: time="2025-05-08T00:26:16.222963644Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:26:16.223258 containerd[1543]: time="2025-05-08T00:26:16.223234124Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 8 00:26:16.226021 containerd[1543]: time="2025-05-08T00:26:16.225976659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:26:16.227522 containerd[1543]: time="2025-05-08T00:26:16.227495092Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 563.434265ms"
May 8 00:26:16.230128 containerd[1543]: time="2025-05-08T00:26:16.230090121Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 567.529975ms"
May 8 00:26:16.230877 containerd[1543]: time="2025-05-08T00:26:16.230856221Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 566.205896ms"
May 8 00:26:16.270815 kubelet[2351]: W0508 00:26:16.270749 2351 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.83:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused
May 8 00:26:16.270815 kubelet[2351]: E0508 00:26:16.270813 2351 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.83:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused
May 8 00:26:16.369028 containerd[1543]: time="2025-05-08T00:26:16.368942239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:26:16.369028 containerd[1543]: time="2025-05-08T00:26:16.368953404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:26:16.369167 containerd[1543]: time="2025-05-08T00:26:16.369023836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:26:16.369167 containerd[1543]: time="2025-05-08T00:26:16.369051248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:26:16.369167 containerd[1543]: time="2025-05-08T00:26:16.369116036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:26:16.369399 containerd[1543]: time="2025-05-08T00:26:16.369163778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:26:16.369399 containerd[1543]: time="2025-05-08T00:26:16.369282230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:26:16.369561 containerd[1543]: time="2025-05-08T00:26:16.369376392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:26:16.369561 containerd[1543]: time="2025-05-08T00:26:16.369419771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:26:16.369561 containerd[1543]: time="2025-05-08T00:26:16.369430136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:26:16.369561 containerd[1543]: time="2025-05-08T00:26:16.369495685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:26:16.369782 containerd[1543]: time="2025-05-08T00:26:16.369692732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:26:16.414137 containerd[1543]: time="2025-05-08T00:26:16.414095484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"31a2919134886b90866eac330628f497e255e611617f3c4f6f3ca1638190f506\""
May 8 00:26:16.414399 containerd[1543]: time="2025-05-08T00:26:16.414308979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:66bb29d6eeffe5e86a3324636da2b4fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8c5aca892963055799cc0696a4c55c2a9c59189ea96bd0587ad2f7d0e6644e3\""
May 8 00:26:16.415187 kubelet[2351]: E0508 00:26:16.415149 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:16.417006 kubelet[2351]: E0508 00:26:16.416063 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:16.419498 containerd[1543]: time="2025-05-08T00:26:16.419466264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d7515e6539dcf5f8f4f0bf767c7fd316a0973899e4e7b790abf6aa3c47929f9\""
May 8 00:26:16.421134 kubelet[2351]: E0508 00:26:16.421104 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:16.422047 containerd[1543]: time="2025-05-08T00:26:16.422014113Z" level=info msg="CreateContainer within sandbox \"31a2919134886b90866eac330628f497e255e611617f3c4f6f3ca1638190f506\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 8 00:26:16.422743 containerd[1543]: time="2025-05-08T00:26:16.422707820Z" level=info msg="CreateContainer within sandbox \"4d7515e6539dcf5f8f4f0bf767c7fd316a0973899e4e7b790abf6aa3c47929f9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 8 00:26:16.422868 containerd[1543]: time="2025-05-08T00:26:16.422849123Z" level=info msg="CreateContainer within sandbox \"f8c5aca892963055799cc0696a4c55c2a9c59189ea96bd0587ad2f7d0e6644e3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 8 00:26:16.429454 kubelet[2351]: E0508 00:26:16.429405 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="1.6s"
May 8 00:26:16.437944 containerd[1543]: time="2025-05-08T00:26:16.437910356Z" level=info msg="CreateContainer within sandbox \"31a2919134886b90866eac330628f497e255e611617f3c4f6f3ca1638190f506\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"26f3b073383e328c529a129c57148b19ad385b124c09f1d3e7c43dc5fcf6fa93\""
May 8 00:26:16.438593 containerd[1543]: time="2025-05-08T00:26:16.438560924Z" level=info msg="StartContainer for \"26f3b073383e328c529a129c57148b19ad385b124c09f1d3e7c43dc5fcf6fa93\""
May 8 00:26:16.441719 containerd[1543]: time="2025-05-08T00:26:16.441681146Z" level=info msg="CreateContainer within sandbox \"f8c5aca892963055799cc0696a4c55c2a9c59189ea96bd0587ad2f7d0e6644e3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"aafb937bc696657fa8cf71681611ad06c21694e9ce923a17de5a4ef7597e5edf\""
May 8 00:26:16.442093 containerd[1543]: time="2025-05-08T00:26:16.442069678Z" level=info msg="StartContainer for \"aafb937bc696657fa8cf71681611ad06c21694e9ce923a17de5a4ef7597e5edf\""
May 8 00:26:16.443033 containerd[1543]: time="2025-05-08T00:26:16.442970758Z" level=info msg="CreateContainer within sandbox \"4d7515e6539dcf5f8f4f0bf767c7fd316a0973899e4e7b790abf6aa3c47929f9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"643356205bc0453cc8bf62b9861853d1ed5f8f0dd855ce0681ae836ed384e3fe\""
May 8 00:26:16.444005 containerd[1543]: time="2025-05-08T00:26:16.443547293Z" level=info msg="StartContainer for \"643356205bc0453cc8bf62b9861853d1ed5f8f0dd855ce0681ae836ed384e3fe\""
May 8 00:26:16.509604 containerd[1543]: time="2025-05-08T00:26:16.509405351Z" level=info msg="StartContainer for \"643356205bc0453cc8bf62b9861853d1ed5f8f0dd855ce0681ae836ed384e3fe\" returns successfully"
May 8 00:26:16.509604 containerd[1543]: time="2025-05-08T00:26:16.509419477Z" level=info msg="StartContainer for \"aafb937bc696657fa8cf71681611ad06c21694e9ce923a17de5a4ef7597e5edf\" returns successfully"
May 8 00:26:16.509604 containerd[1543]: time="2025-05-08T00:26:16.509423519Z" level=info msg="StartContainer for \"26f3b073383e328c529a129c57148b19ad385b124c09f1d3e7c43dc5fcf6fa93\" returns successfully"
May 8 00:26:16.533766 kubelet[2351]: I0508 00:26:16.533734 2351 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 8 00:26:16.534224 kubelet[2351]: E0508 00:26:16.534190 2351 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost"
May 8 00:26:17.051658 kubelet[2351]: E0508 00:26:17.051635 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:17.054026 kubelet[2351]: E0508 00:26:17.053769 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:17.055714 kubelet[2351]: E0508 00:26:17.055689 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:18.058950 kubelet[2351]: E0508 00:26:18.058901 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:18.137543 kubelet[2351]: I0508 00:26:18.137518 2351 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 8 00:26:18.295215 kubelet[2351]: E0508 00:26:18.295178 2351 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 8 00:26:18.461759 kubelet[2351]: I0508 00:26:18.461666 2351 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
May 8 00:26:19.010634 kubelet[2351]: I0508 00:26:19.010579 2351 apiserver.go:52] "Watching apiserver"
May 8 00:26:19.022776 kubelet[2351]: I0508 00:26:19.022730 2351 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 8 00:26:19.063570 kubelet[2351]: E0508 00:26:19.063511 2351 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
May 8 00:26:19.064045 kubelet[2351]: E0508 00:26:19.064020 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:20.523687 systemd[1]: Reloading requested from client PID 2626 ('systemctl') (unit session-7.scope)...
May 8 00:26:20.523703 systemd[1]: Reloading...
May 8 00:26:20.582026 zram_generator::config[2668]: No configuration found.
May 8 00:26:20.666933 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:26:20.725087 systemd[1]: Reloading finished in 201 ms.
May 8 00:26:20.752033 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:26:20.765820 systemd[1]: kubelet.service: Deactivated successfully.
May 8 00:26:20.766159 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:26:20.777433 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:26:20.867036 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:26:20.870800 (kubelet)[2717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 8 00:26:20.906110 kubelet[2717]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:26:20.906110 kubelet[2717]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 8 00:26:20.906110 kubelet[2717]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:26:20.906443 kubelet[2717]: I0508 00:26:20.906150 2717 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 8 00:26:20.911277 kubelet[2717]: I0508 00:26:20.911241 2717 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 8 00:26:20.911479 kubelet[2717]: I0508 00:26:20.911376 2717 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 8 00:26:20.911660 kubelet[2717]: I0508 00:26:20.911645 2717 server.go:927] "Client rotation is on, will bootstrap in background"
May 8 00:26:20.913598 kubelet[2717]: I0508 00:26:20.913562 2717 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 8 00:26:20.917010 kubelet[2717]: I0508 00:26:20.916970 2717 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 8 00:26:20.923381 kubelet[2717]: I0508 00:26:20.923356 2717 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 8 00:26:20.923850 kubelet[2717]: I0508 00:26:20.923818 2717 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 8 00:26:20.924091 kubelet[2717]: I0508 00:26:20.923909 2717 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 8 00:26:20.925602 kubelet[2717]: I0508 00:26:20.924216 2717 topology_manager.go:138] "Creating topology manager with none policy"
May 8 00:26:20.925691 kubelet[2717]: I0508 00:26:20.925680 2717 container_manager_linux.go:301] "Creating device plugin manager"
May 8 00:26:20.926071 kubelet[2717]: I0508 00:26:20.925900 2717 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:26:20.926573 kubelet[2717]: I0508 00:26:20.926554 2717 kubelet.go:400] "Attempting to sync node with API server"
May 8 00:26:20.926652 kubelet[2717]: I0508 00:26:20.926641 2717 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 8 00:26:20.926803 kubelet[2717]: I0508 00:26:20.926791 2717 kubelet.go:312] "Adding apiserver pod source"
May 8 00:26:20.927428 kubelet[2717]: I0508 00:26:20.927412 2717 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 8 00:26:20.933767 kubelet[2717]: I0508 00:26:20.933740 2717 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 8 00:26:20.934094 kubelet[2717]: I0508 00:26:20.933917 2717 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 8 00:26:20.934350 kubelet[2717]: I0508 00:26:20.934324 2717 server.go:1264] "Started kubelet"
May 8 00:26:20.935933 kubelet[2717]: I0508 00:26:20.935806 2717 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 8 00:26:20.938034 kubelet[2717]: I0508 00:26:20.937992 2717 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 8 00:26:20.939188 kubelet[2717]: I0508 00:26:20.939163 2717 server.go:455] "Adding debug handlers to kubelet server"
May 8 00:26:20.941061 kubelet[2717]: I0508 00:26:20.941011 2717 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 8 00:26:20.941415 kubelet[2717]: I0508 00:26:20.941393 2717 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 8 00:26:20.942541 kubelet[2717]: I0508 00:26:20.942523 2717 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 8 00:26:20.942971 kubelet[2717]: I0508 00:26:20.942951 2717 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 8 00:26:20.943217 kubelet[2717]: I0508 00:26:20.943202 2717 reconciler.go:26] "Reconciler: start to sync state"
May 8 00:26:20.951308 kubelet[2717]: I0508 00:26:20.950268 2717 factory.go:221] Registration of the systemd container factory successfully
May 8 00:26:20.958258 kubelet[2717]: I0508 00:26:20.958214 2717 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 8 00:26:20.958747 kubelet[2717]: E0508 00:26:20.958712 2717 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 8 00:26:20.962327 kubelet[2717]: I0508 00:26:20.962135 2717 factory.go:221] Registration of the containerd container factory successfully
May 8 00:26:20.966511 kubelet[2717]: I0508 00:26:20.966456 2717 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 8 00:26:20.968317 kubelet[2717]: I0508 00:26:20.968267 2717 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 8 00:26:20.968317 kubelet[2717]: I0508 00:26:20.968314 2717 status_manager.go:217] "Starting to sync pod status with apiserver"
May 8 00:26:20.968387 kubelet[2717]: I0508 00:26:20.968333 2717 kubelet.go:2337] "Starting kubelet main sync loop"
May 8 00:26:20.968413 kubelet[2717]: E0508 00:26:20.968389 2717 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 8 00:26:20.999801 kubelet[2717]: I0508 00:26:20.999769 2717 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 8 00:26:20.999801 kubelet[2717]: I0508 00:26:20.999785 2717 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 8 00:26:20.999801 kubelet[2717]: I0508 00:26:20.999802 2717 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:26:20.999949 kubelet[2717]: I0508 00:26:20.999925 2717 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 8 00:26:20.999949 kubelet[2717]: I0508 00:26:20.999935 2717 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 8 00:26:21.000001 kubelet[2717]: I0508 00:26:20.999952 2717 policy_none.go:49] "None policy: Start"
May 8 00:26:21.000596 kubelet[2717]: I0508 00:26:21.000549 2717 memory_manager.go:170] "Starting memorymanager" policy="None"
May 8 00:26:21.000596 kubelet[2717]: I0508 00:26:21.000571 2717 state_mem.go:35] "Initializing new in-memory state store"
May 8 00:26:21.000703 kubelet[2717]: I0508 00:26:21.000683 2717 state_mem.go:75] "Updated machine memory state"
May 8 00:26:21.002416 kubelet[2717]: I0508 00:26:21.001703 2717 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 8 00:26:21.002416 kubelet[2717]: I0508 00:26:21.001948 2717 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 8 00:26:21.002416 kubelet[2717]: I0508 00:26:21.002297 2717 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 8 00:26:21.046897 kubelet[2717]: I0508 00:26:21.046818 2717 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 8 00:26:21.054242 kubelet[2717]: I0508 00:26:21.054203 2717 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
May 8 00:26:21.054329 kubelet[2717]: I0508 00:26:21.054279 2717 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
May 8 00:26:21.069144 kubelet[2717]: I0508 00:26:21.069070 2717 topology_manager.go:215] "Topology Admit Handler" podUID="66bb29d6eeffe5e86a3324636da2b4fa" podNamespace="kube-system" podName="kube-apiserver-localhost"
May 8 00:26:21.069246 kubelet[2717]: I0508 00:26:21.069225 2717 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost"
May 8 00:26:21.069269 kubelet[2717]: I0508 00:26:21.069263 2717 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost"
May 8 00:26:21.243936 kubelet[2717]: I0508 00:26:21.243887 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66bb29d6eeffe5e86a3324636da2b4fa-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"66bb29d6eeffe5e86a3324636da2b4fa\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:26:21.244092 kubelet[2717]: I0508 00:26:21.243936 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66bb29d6eeffe5e86a3324636da2b4fa-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"66bb29d6eeffe5e86a3324636da2b4fa\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:26:21.244092 kubelet[2717]: I0508 00:26:21.244011 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:26:21.244092 kubelet[2717]: I0508 00:26:21.244056 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:26:21.244092 kubelet[2717]: I0508 00:26:21.244079 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost"
May 8 00:26:21.244175 kubelet[2717]: I0508 00:26:21.244095 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66bb29d6eeffe5e86a3324636da2b4fa-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"66bb29d6eeffe5e86a3324636da2b4fa\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:26:21.244175 kubelet[2717]: I0508 00:26:21.244112 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:26:21.244175 kubelet[2717]: I0508 00:26:21.244129 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:26:21.244175 kubelet[2717]: I0508 00:26:21.244142 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:26:21.376386 kubelet[2717]: E0508 00:26:21.376146 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:21.376386 kubelet[2717]: E0508 00:26:21.376186 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:21.377230 kubelet[2717]: E0508 00:26:21.377159 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:21.580421 sudo[2751]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 8 00:26:21.580704 sudo[2751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 8 00:26:21.927699 kubelet[2717]: I0508 00:26:21.927660 2717 apiserver.go:52] "Watching apiserver"
May 8 00:26:21.943858 kubelet[2717]: I0508 00:26:21.943829 2717 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 8 00:26:21.978711 kubelet[2717]: E0508 00:26:21.978683 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:21.990815 kubelet[2717]: E0508 00:26:21.990778 2717 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 8 00:26:21.991293 kubelet[2717]: E0508 00:26:21.991271 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:21.995185 kubelet[2717]: E0508 00:26:21.994910 2717 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
May 8 00:26:21.995654 kubelet[2717]: E0508 00:26:21.995369 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:22.005452 kubelet[2717]: I0508 00:26:22.004786 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.004743467 podStartE2EDuration="1.004743467s" podCreationTimestamp="2025-05-08 00:26:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:26:22.004685933 +0000 UTC m=+1.130537898" watchObservedRunningTime="2025-05-08 00:26:22.004743467 +0000 UTC m=+1.130595432"
May 8 00:26:22.010710 sudo[2751]: pam_unix(sudo:session): session closed for user root
May 8 00:26:22.021133 kubelet[2717]: I0508 00:26:22.020959 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.020928686 podStartE2EDuration="1.020928686s" podCreationTimestamp="2025-05-08 00:26:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:26:22.012731952 +0000 UTC m=+1.138583917" watchObservedRunningTime="2025-05-08 00:26:22.020928686 +0000 UTC m=+1.146780611"
May 8 00:26:22.021133 kubelet[2717]: I0508 00:26:22.021049 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.021043953 podStartE2EDuration="1.021043953s" podCreationTimestamp="2025-05-08 00:26:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:26:22.020696991 +0000 UTC m=+1.146548956" watchObservedRunningTime="2025-05-08 00:26:22.021043953 +0000 UTC m=+1.146895918"
May 8 00:26:22.980388 kubelet[2717]: E0508 00:26:22.980359 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:22.980721 kubelet[2717]: E0508 00:26:22.980547 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:23.719031 sudo[1750]: pam_unix(sudo:session): session closed for user root
May 8 00:26:23.721404 sshd[1743]: pam_unix(sshd:session): session closed for user core
May 8 00:26:23.724083 systemd[1]: sshd@6-10.0.0.83:22-10.0.0.1:55780.service: Deactivated successfully.
May 8 00:26:23.727161 systemd[1]: session-7.scope: Deactivated successfully.
May 8 00:26:23.729061 systemd-logind[1524]: Session 7 logged out. Waiting for processes to exit.
May 8 00:26:23.730736 systemd-logind[1524]: Removed session 7.
May 8 00:26:23.982026 kubelet[2717]: E0508 00:26:23.981902 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:24.983402 kubelet[2717]: E0508 00:26:24.983372 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:26.016625 kubelet[2717]: E0508 00:26:26.016312 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:26.987021 kubelet[2717]: E0508 00:26:26.986965 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:27.187197 kubelet[2717]: E0508 00:26:27.187099 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:27.988426 kubelet[2717]: E0508 00:26:27.988337 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:32.104498 update_engine[1529]: I20250508 00:26:32.104424 1529 update_attempter.cc:509] Updating boot flags...
May 8 00:26:32.125013 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2802)
May 8 00:26:32.148025 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2804)
May 8 00:26:34.937122 kubelet[2717]: E0508 00:26:34.936882 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:34.998901 kubelet[2717]: E0508 00:26:34.998855 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:36.189557 kubelet[2717]: I0508 00:26:36.189514 2717 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 8 00:26:36.190188 containerd[1543]: time="2025-05-08T00:26:36.190098699Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 8 00:26:36.191184 kubelet[2717]: I0508 00:26:36.190607 2717 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 8 00:26:37.116554 kubelet[2717]: I0508 00:26:37.116494 2717 topology_manager.go:215] "Topology Admit Handler" podUID="ae17269d-9f4f-4cf5-b269-7d21874932b2" podNamespace="kube-system" podName="kube-proxy-58wn6"
May 8 00:26:37.136008 kubelet[2717]: I0508 00:26:37.129901 2717 topology_manager.go:215] "Topology Admit Handler" podUID="35fe04ae-d67f-4996-b89c-78a10e1c1691" podNamespace="kube-system" podName="cilium-t5rcv"
May 8 00:26:37.253626 kubelet[2717]: I0508 00:26:37.253571 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae17269d-9f4f-4cf5-b269-7d21874932b2-xtables-lock\") pod \"kube-proxy-58wn6\" (UID: \"ae17269d-9f4f-4cf5-b269-7d21874932b2\") " pod="kube-system/kube-proxy-58wn6"
May 8 00:26:37.253626 kubelet[2717]: I0508 00:26:37.253615 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/35fe04ae-d67f-4996-b89c-78a10e1c1691-clustermesh-secrets\") pod \"cilium-t5rcv\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " pod="kube-system/cilium-t5rcv"
May 8 00:26:37.253626 kubelet[2717]: I0508 00:26:37.253636 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2rrt\" (UniqueName: \"kubernetes.io/projected/ae17269d-9f4f-4cf5-b269-7d21874932b2-kube-api-access-w2rrt\") pod \"kube-proxy-58wn6\" (UID: \"ae17269d-9f4f-4cf5-b269-7d21874932b2\") " pod="kube-system/kube-proxy-58wn6"
May 8 00:26:37.254058 kubelet[2717]: I0508 00:26:37.253657 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/35fe04ae-d67f-4996-b89c-78a10e1c1691-hubble-tls\") pod \"cilium-t5rcv\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " pod="kube-system/cilium-t5rcv"
May 8 00:26:37.254058 kubelet[2717]: I0508 00:26:37.253715 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-etc-cni-netd\") pod \"cilium-t5rcv\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " pod="kube-system/cilium-t5rcv"
May 8 00:26:37.254058 kubelet[2717]: I0508 00:26:37.253752 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-host-proc-sys-net\") pod \"cilium-t5rcv\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " pod="kube-system/cilium-t5rcv"
May 8 00:26:37.254058 kubelet[2717]: I0508 00:26:37.253778 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-host-proc-sys-kernel\") pod \"cilium-t5rcv\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " pod="kube-system/cilium-t5rcv"
May 8 00:26:37.254058 kubelet[2717]: I0508 00:26:37.253812 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6twc9\" (UniqueName: \"kubernetes.io/projected/35fe04ae-d67f-4996-b89c-78a10e1c1691-kube-api-access-6twc9\") pod \"cilium-t5rcv\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " pod="kube-system/cilium-t5rcv"
May 8 00:26:37.254058 kubelet[2717]: I0508 00:26:37.253835 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-hostproc\") pod \"cilium-t5rcv\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " pod="kube-system/cilium-t5rcv"
May 8 00:26:37.254204 kubelet[2717]: I0508 00:26:37.253862 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-cilium-run\") pod \"cilium-t5rcv\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " pod="kube-system/cilium-t5rcv"
May 8 00:26:37.254204 kubelet[2717]: I0508 00:26:37.253886 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae17269d-9f4f-4cf5-b269-7d21874932b2-lib-modules\") pod \"kube-proxy-58wn6\" (UID: \"ae17269d-9f4f-4cf5-b269-7d21874932b2\") " pod="kube-system/kube-proxy-58wn6"
May 8 00:26:37.254204 kubelet[2717]: I0508 00:26:37.253907 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-bpf-maps\") pod \"cilium-t5rcv\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " pod="kube-system/cilium-t5rcv"
May 8 00:26:37.254204 kubelet[2717]: I0508 00:26:37.253929 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-cilium-cgroup\") pod \"cilium-t5rcv\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " pod="kube-system/cilium-t5rcv"
May 8 00:26:37.254204 kubelet[2717]: I0508 00:26:37.253946 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-cni-path\") pod \"cilium-t5rcv\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " pod="kube-system/cilium-t5rcv"
May 8 00:26:37.254204 kubelet[2717]: I0508 00:26:37.253975 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-xtables-lock\") pod \"cilium-t5rcv\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " pod="kube-system/cilium-t5rcv"
May 8 00:26:37.254319 kubelet[2717]: I0508 00:26:37.254007 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35fe04ae-d67f-4996-b89c-78a10e1c1691-cilium-config-path\") pod \"cilium-t5rcv\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " pod="kube-system/cilium-t5rcv"
May 8 00:26:37.254319 kubelet[2717]: I0508 00:26:37.254047 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ae17269d-9f4f-4cf5-b269-7d21874932b2-kube-proxy\") pod \"kube-proxy-58wn6\" (UID: \"ae17269d-9f4f-4cf5-b269-7d21874932b2\") " pod="kube-system/kube-proxy-58wn6"
May 8 00:26:37.254319 kubelet[2717]: I0508 00:26:37.254075 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-lib-modules\") pod \"cilium-t5rcv\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " pod="kube-system/cilium-t5rcv"
May 8 00:26:37.308500 kubelet[2717]: I0508 00:26:37.308439 2717 topology_manager.go:215] "Topology Admit Handler" podUID="d72e40fb-e48e-47d9-91f3-3f01f9103004" podNamespace="kube-system" podName="cilium-operator-599987898-95lpr"
May 8 00:26:37.422664 kubelet[2717]: E0508 00:26:37.422547 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:37.429473 containerd[1543]: time="2025-05-08T00:26:37.429304114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-58wn6,Uid:ae17269d-9f4f-4cf5-b269-7d21874932b2,Namespace:kube-system,Attempt:0,}"
May 8 00:26:37.441594 kubelet[2717]: E0508 00:26:37.441557 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:37.443498 containerd[1543]: time="2025-05-08T00:26:37.443445680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t5rcv,Uid:35fe04ae-d67f-4996-b89c-78a10e1c1691,Namespace:kube-system,Attempt:0,}"
May 8 00:26:37.448262 containerd[1543]: time="2025-05-08T00:26:37.448174190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:26:37.448262 containerd[1543]: time="2025-05-08T00:26:37.448229796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:26:37.448262 containerd[1543]: time="2025-05-08T00:26:37.448252518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:26:37.448403 containerd[1543]: time="2025-05-08T00:26:37.448358290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:26:37.459206 kubelet[2717]: I0508 00:26:37.458150 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d72e40fb-e48e-47d9-91f3-3f01f9103004-cilium-config-path\") pod \"cilium-operator-599987898-95lpr\" (UID: \"d72e40fb-e48e-47d9-91f3-3f01f9103004\") " pod="kube-system/cilium-operator-599987898-95lpr"
May 8 00:26:37.459206 kubelet[2717]: I0508 00:26:37.458193 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8r9t\" (UniqueName: \"kubernetes.io/projected/d72e40fb-e48e-47d9-91f3-3f01f9103004-kube-api-access-s8r9t\") pod \"cilium-operator-599987898-95lpr\" (UID: \"d72e40fb-e48e-47d9-91f3-3f01f9103004\") " pod="kube-system/cilium-operator-599987898-95lpr"
May 8 00:26:37.462324 containerd[1543]: time="2025-05-08T00:26:37.462223265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:26:37.462324 containerd[1543]: time="2025-05-08T00:26:37.462290272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:26:37.462324 containerd[1543]: time="2025-05-08T00:26:37.462305274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:26:37.462534 containerd[1543]: time="2025-05-08T00:26:37.462391683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:26:37.486243 containerd[1543]: time="2025-05-08T00:26:37.486196451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-58wn6,Uid:ae17269d-9f4f-4cf5-b269-7d21874932b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"93f42c7a01f632681aeb43da5b1642e8826927edde2272c17f51c4dfee1f7e17\""
May 8 00:26:37.486831 kubelet[2717]: E0508 00:26:37.486795 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:37.492733 containerd[1543]: time="2025-05-08T00:26:37.492696632Z" level=info msg="CreateContainer within sandbox \"93f42c7a01f632681aeb43da5b1642e8826927edde2272c17f51c4dfee1f7e17\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 8 00:26:37.493518 containerd[1543]: time="2025-05-08T00:26:37.493489437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t5rcv,Uid:35fe04ae-d67f-4996-b89c-78a10e1c1691,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c2aa04892051db1ea7a0d9b7806d65b5ce7347b9856f6c6e02054389e51936c\""
May 8 00:26:37.494051 kubelet[2717]: E0508 00:26:37.494029 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:37.496769 containerd[1543]: time="2025-05-08T00:26:37.496739588Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 8 00:26:37.507275 containerd[1543]: time="2025-05-08T00:26:37.507214518Z" level=info msg="CreateContainer within sandbox \"93f42c7a01f632681aeb43da5b1642e8826927edde2272c17f51c4dfee1f7e17\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"505cdb7836bbb352f52e8f933690b69fd11401f819132c3b2c9a1095571517b5\""
May 8 00:26:37.509942 containerd[1543]: time="2025-05-08T00:26:37.509900127Z" level=info msg="StartContainer for \"505cdb7836bbb352f52e8f933690b69fd11401f819132c3b2c9a1095571517b5\""
May 8 00:26:37.569588 containerd[1543]: time="2025-05-08T00:26:37.569135356Z" level=info msg="StartContainer for \"505cdb7836bbb352f52e8f933690b69fd11401f819132c3b2c9a1095571517b5\" returns successfully"
May 8 00:26:37.611797 kubelet[2717]: E0508 00:26:37.611750 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:37.626941 containerd[1543]: time="2025-05-08T00:26:37.626894266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-95lpr,Uid:d72e40fb-e48e-47d9-91f3-3f01f9103004,Namespace:kube-system,Attempt:0,}"
May 8 00:26:37.769722 containerd[1543]: time="2025-05-08T00:26:37.769402516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:26:37.769722 containerd[1543]: time="2025-05-08T00:26:37.769606498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:26:37.770135 containerd[1543]: time="2025-05-08T00:26:37.769914091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:26:37.770691 containerd[1543]: time="2025-05-08T00:26:37.770606966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:26:37.812791 containerd[1543]: time="2025-05-08T00:26:37.812753471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-95lpr,Uid:d72e40fb-e48e-47d9-91f3-3f01f9103004,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d172fd132cb5962ebedf60f87459eab626d156c9f7e4199e154583ca5c8945f\""
May 8 00:26:37.813534 kubelet[2717]: E0508 00:26:37.813502 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:38.018248 kubelet[2717]: E0508 00:26:38.018191 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:40.208179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3362816350.mount: Deactivated successfully.
May 8 00:26:41.037126 kubelet[2717]: I0508 00:26:41.037073 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-58wn6" podStartSLOduration=4.037055124 podStartE2EDuration="4.037055124s" podCreationTimestamp="2025-05-08 00:26:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:26:38.029127988 +0000 UTC m=+17.154979993" watchObservedRunningTime="2025-05-08 00:26:41.037055124 +0000 UTC m=+20.162907089"
May 8 00:26:41.486213 containerd[1543]: time="2025-05-08T00:26:41.485203018Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:26:41.486213 containerd[1543]: time="2025-05-08T00:26:41.485632057Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
May 8 00:26:41.486780 containerd[1543]: time="2025-05-08T00:26:41.486757238Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:26:41.492795 containerd[1543]: time="2025-05-08T00:26:41.492760219Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 3.995982907s"
May 8 00:26:41.492904 containerd[1543]: time="2025-05-08T00:26:41.492887550Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
May 8 00:26:41.495009 containerd[1543]: time="2025-05-08T00:26:41.494957736Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 8 00:26:41.501593 containerd[1543]: time="2025-05-08T00:26:41.501532848Z" level=info msg="CreateContainer within sandbox \"9c2aa04892051db1ea7a0d9b7806d65b5ce7347b9856f6c6e02054389e51936c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 8 00:26:41.526327 containerd[1543]: time="2025-05-08T00:26:41.526254393Z" level=info msg="CreateContainer within sandbox \"9c2aa04892051db1ea7a0d9b7806d65b5ce7347b9856f6c6e02054389e51936c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c47d8f3024ad48e04dd13a2c90e8b4aeb96343b43098e03424117fc744b6777e\""
May 8 00:26:41.527042 containerd[1543]: time="2025-05-08T00:26:41.526797722Z" level=info msg="StartContainer for \"c47d8f3024ad48e04dd13a2c90e8b4aeb96343b43098e03424117fc744b6777e\""
May 8 00:26:41.572154 containerd[1543]: time="2025-05-08T00:26:41.572115601Z" level=info msg="StartContainer for \"c47d8f3024ad48e04dd13a2c90e8b4aeb96343b43098e03424117fc744b6777e\" returns successfully"
May 8 00:26:41.729174 containerd[1543]: time="2025-05-08T00:26:41.726848727Z" level=info msg="shim disconnected" id=c47d8f3024ad48e04dd13a2c90e8b4aeb96343b43098e03424117fc744b6777e namespace=k8s.io
May 8 00:26:41.729174 containerd[1543]: time="2025-05-08T00:26:41.729169536Z" level=warning msg="cleaning up after shim disconnected" id=c47d8f3024ad48e04dd13a2c90e8b4aeb96343b43098e03424117fc744b6777e namespace=k8s.io
May 8 00:26:41.729174 containerd[1543]: time="2025-05-08T00:26:41.729182257Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:26:42.023683 kubelet[2717]: E0508 00:26:42.022725 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:26:42.033319 containerd[1543]: time="2025-05-08T00:26:42.033272030Z" level=info msg="CreateContainer within sandbox \"9c2aa04892051db1ea7a0d9b7806d65b5ce7347b9856f6c6e02054389e51936c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 8 00:26:42.045378 containerd[1543]: time="2025-05-08T00:26:42.045177177Z" level=info msg="CreateContainer within sandbox \"9c2aa04892051db1ea7a0d9b7806d65b5ce7347b9856f6c6e02054389e51936c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0ef460b63260dba5e8ae9f22b1f6747fd38e734d981fad877e8302b4f51280ed\""
May 8 00:26:42.047024 containerd[1543]: time="2025-05-08T00:26:42.046971531Z" level=info msg="StartContainer for \"0ef460b63260dba5e8ae9f22b1f6747fd38e734d981fad877e8302b4f51280ed\""
May 8 00:26:42.089057 containerd[1543]: time="2025-05-08T00:26:42.088956431Z" level=info msg="StartContainer for \"0ef460b63260dba5e8ae9f22b1f6747fd38e734d981fad877e8302b4f51280ed\" returns successfully"
May 8 00:26:42.112679 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 00:26:42.112932 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 8 00:26:42.113012 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 8 00:26:42.123312 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:26:42.137264 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:26:42.139664 containerd[1543]: time="2025-05-08T00:26:42.139605718Z" level=info msg="shim disconnected" id=0ef460b63260dba5e8ae9f22b1f6747fd38e734d981fad877e8302b4f51280ed namespace=k8s.io
May 8 00:26:42.139664 containerd[1543]: time="2025-05-08T00:26:42.139661322Z" level=warning msg="cleaning up after shim disconnected" id=0ef460b63260dba5e8ae9f22b1f6747fd38e734d981fad877e8302b4f51280ed namespace=k8s.io
May 8 00:26:42.139664 containerd[1543]: time="2025-05-08T00:26:42.139670483Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:26:42.523070 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c47d8f3024ad48e04dd13a2c90e8b4aeb96343b43098e03424117fc744b6777e-rootfs.mount: Deactivated successfully.
May 8 00:26:43.022940 containerd[1543]: time="2025-05-08T00:26:43.022825350Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:26:43.023967 containerd[1543]: time="2025-05-08T00:26:43.023741466Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 8 00:26:43.024717 containerd[1543]: time="2025-05-08T00:26:43.024686664Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:26:43.027262 containerd[1543]: time="2025-05-08T00:26:43.027009936Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.532006435s" May 8 00:26:43.027262 containerd[1543]: time="2025-05-08T00:26:43.027045499Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 8 00:26:43.028374 kubelet[2717]: E0508 00:26:43.028332 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:26:43.032282 containerd[1543]: time="2025-05-08T00:26:43.032096557Z" level=info msg="CreateContainer within sandbox \"9c2aa04892051db1ea7a0d9b7806d65b5ce7347b9856f6c6e02054389e51936c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 00:26:43.032782 containerd[1543]: time="2025-05-08T00:26:43.032750051Z" level=info msg="CreateContainer within sandbox \"3d172fd132cb5962ebedf60f87459eab626d156c9f7e4199e154583ca5c8945f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 8 00:26:43.054385 containerd[1543]: time="2025-05-08T00:26:43.054330794Z" level=info msg="CreateContainer within sandbox \"3d172fd132cb5962ebedf60f87459eab626d156c9f7e4199e154583ca5c8945f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6daac8f0e73c210222f729b1076105b0f3ff260d1fe524fafa33106b90992eda\"" May 8 00:26:43.056932 containerd[1543]: time="2025-05-08T00:26:43.056882885Z" level=info msg="StartContainer for \"6daac8f0e73c210222f729b1076105b0f3ff260d1fe524fafa33106b90992eda\"" May 8 00:26:43.059308 containerd[1543]: time="2025-05-08T00:26:43.059271003Z" level=info msg="CreateContainer within sandbox \"9c2aa04892051db1ea7a0d9b7806d65b5ce7347b9856f6c6e02054389e51936c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4f4d6118ae0f42f91de00c8d5de76407f3035d7bd644e0a00d0652bb5b22fd1a\"" May 8 00:26:43.060147 containerd[1543]: time="2025-05-08T00:26:43.060071749Z" level=info msg="StartContainer for \"4f4d6118ae0f42f91de00c8d5de76407f3035d7bd644e0a00d0652bb5b22fd1a\"" May 8 00:26:43.105938 containerd[1543]: time="2025-05-08T00:26:43.105898737Z" level=info msg="StartContainer for 
\"6daac8f0e73c210222f729b1076105b0f3ff260d1fe524fafa33106b90992eda\" returns successfully" May 8 00:26:43.106216 containerd[1543]: time="2025-05-08T00:26:43.105898857Z" level=info msg="StartContainer for \"4f4d6118ae0f42f91de00c8d5de76407f3035d7bd644e0a00d0652bb5b22fd1a\" returns successfully" May 8 00:26:43.225379 containerd[1543]: time="2025-05-08T00:26:43.225304368Z" level=info msg="shim disconnected" id=4f4d6118ae0f42f91de00c8d5de76407f3035d7bd644e0a00d0652bb5b22fd1a namespace=k8s.io May 8 00:26:43.225379 containerd[1543]: time="2025-05-08T00:26:43.225373213Z" level=warning msg="cleaning up after shim disconnected" id=4f4d6118ae0f42f91de00c8d5de76407f3035d7bd644e0a00d0652bb5b22fd1a namespace=k8s.io May 8 00:26:43.225379 containerd[1543]: time="2025-05-08T00:26:43.225381974Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:26:44.031753 kubelet[2717]: E0508 00:26:44.030945 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:26:44.036033 kubelet[2717]: E0508 00:26:44.036003 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:26:44.040049 containerd[1543]: time="2025-05-08T00:26:44.039593911Z" level=info msg="CreateContainer within sandbox \"9c2aa04892051db1ea7a0d9b7806d65b5ce7347b9856f6c6e02054389e51936c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 00:26:44.059584 kubelet[2717]: I0508 00:26:44.059256 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-95lpr" podStartSLOduration=1.8453839950000002 podStartE2EDuration="7.05923859s" podCreationTimestamp="2025-05-08 00:26:37 +0000 UTC" firstStartedPulling="2025-05-08 00:26:37.814214429 +0000 UTC m=+16.940066394" lastFinishedPulling="2025-05-08 00:26:43.028069024 +0000 UTC m=+22.153920989" observedRunningTime="2025-05-08 00:26:44.03996218 +0000 UTC m=+23.165814145" watchObservedRunningTime="2025-05-08 00:26:44.05923859 +0000 UTC m=+23.185090555" May 8 00:26:44.061316 containerd[1543]: time="2025-05-08T00:26:44.061244349Z" level=info msg="CreateContainer within sandbox \"9c2aa04892051db1ea7a0d9b7806d65b5ce7347b9856f6c6e02054389e51936c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ce73282e1345f1f9d31efb382f4f1bc64d3a5eb14fd1931da401b804a2698f5c\"" May 8 00:26:44.062713 containerd[1543]: time="2025-05-08T00:26:44.062673902Z" level=info msg="StartContainer for \"ce73282e1345f1f9d31efb382f4f1bc64d3a5eb14fd1931da401b804a2698f5c\"" May 8 00:26:44.115072 containerd[1543]: time="2025-05-08T00:26:44.114745193Z" level=info msg="StartContainer for \"ce73282e1345f1f9d31efb382f4f1bc64d3a5eb14fd1931da401b804a2698f5c\" returns successfully" May 8 00:26:44.132599 containerd[1543]: time="2025-05-08T00:26:44.132533444Z" level=info msg="shim disconnected" id=ce73282e1345f1f9d31efb382f4f1bc64d3a5eb14fd1931da401b804a2698f5c namespace=k8s.io May 8 00:26:44.132848 containerd[1543]: time="2025-05-08T00:26:44.132829108Z" level=warning msg="cleaning up after shim disconnected" id=ce73282e1345f1f9d31efb382f4f1bc64d3a5eb14fd1931da401b804a2698f5c namespace=k8s.io May 8 00:26:44.132928 containerd[1543]: time="2025-05-08T00:26:44.132914115Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:26:44.525709 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-ce73282e1345f1f9d31efb382f4f1bc64d3a5eb14fd1931da401b804a2698f5c-rootfs.mount: Deactivated successfully. May 8 00:26:45.040676 kubelet[2717]: E0508 00:26:45.040629 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:26:45.044144 kubelet[2717]: E0508 00:26:45.043866 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:26:45.048172 containerd[1543]: time="2025-05-08T00:26:45.048122814Z" level=info msg="CreateContainer within sandbox \"9c2aa04892051db1ea7a0d9b7806d65b5ce7347b9856f6c6e02054389e51936c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 00:26:45.060189 containerd[1543]: time="2025-05-08T00:26:45.060148130Z" level=info msg="CreateContainer within sandbox \"9c2aa04892051db1ea7a0d9b7806d65b5ce7347b9856f6c6e02054389e51936c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b134641c00d6858ff2a2d6ca960656822e77ad9800d92834447f8e2d3174fef1\"" May 8 00:26:45.061301 containerd[1543]: time="2025-05-08T00:26:45.060838543Z" level=info msg="StartContainer for \"b134641c00d6858ff2a2d6ca960656822e77ad9800d92834447f8e2d3174fef1\"" May 8 00:26:45.061079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount185965743.mount: Deactivated successfully. May 8 00:26:45.105476 containerd[1543]: time="2025-05-08T00:26:45.105424621Z" level=info msg="StartContainer for \"b134641c00d6858ff2a2d6ca960656822e77ad9800d92834447f8e2d3174fef1\" returns successfully" May 8 00:26:45.218259 kubelet[2717]: I0508 00:26:45.218221 2717 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 8 00:26:45.260084 kubelet[2717]: I0508 00:26:45.259927 2717 topology_manager.go:215] "Topology Admit Handler" podUID="018ef4d8-1373-4ae6-a94f-0bc8a6cf43d9" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5pd5g" May 8 00:26:45.262000 kubelet[2717]: I0508 00:26:45.261938 2717 topology_manager.go:215] "Topology Admit Handler" podUID="02fe5119-9f3b-4396-9f27-66699eb66880" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fpzx8" May 8 00:26:45.312078 kubelet[2717]: I0508 00:26:45.311614 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02fe5119-9f3b-4396-9f27-66699eb66880-config-volume\") pod \"coredns-7db6d8ff4d-fpzx8\" (UID: \"02fe5119-9f3b-4396-9f27-66699eb66880\") " pod="kube-system/coredns-7db6d8ff4d-fpzx8" May 8 00:26:45.312078 kubelet[2717]: I0508 00:26:45.311670 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/018ef4d8-1373-4ae6-a94f-0bc8a6cf43d9-config-volume\") pod \"coredns-7db6d8ff4d-5pd5g\" (UID: \"018ef4d8-1373-4ae6-a94f-0bc8a6cf43d9\") " pod="kube-system/coredns-7db6d8ff4d-5pd5g" May 8 00:26:45.312078 kubelet[2717]: I0508 00:26:45.311715 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trzkj\" (UniqueName: \"kubernetes.io/projected/02fe5119-9f3b-4396-9f27-66699eb66880-kube-api-access-trzkj\") pod \"coredns-7db6d8ff4d-fpzx8\" (UID: \"02fe5119-9f3b-4396-9f27-66699eb66880\") " pod="kube-system/coredns-7db6d8ff4d-fpzx8" May 8 00:26:45.312078 
kubelet[2717]: I0508 00:26:45.311738 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f78wx\" (UniqueName: \"kubernetes.io/projected/018ef4d8-1373-4ae6-a94f-0bc8a6cf43d9-kube-api-access-f78wx\") pod \"coredns-7db6d8ff4d-5pd5g\" (UID: \"018ef4d8-1373-4ae6-a94f-0bc8a6cf43d9\") " pod="kube-system/coredns-7db6d8ff4d-5pd5g" May 8 00:26:45.571964 kubelet[2717]: E0508 00:26:45.571668 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:26:45.573474 containerd[1543]: time="2025-05-08T00:26:45.573094502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fpzx8,Uid:02fe5119-9f3b-4396-9f27-66699eb66880,Namespace:kube-system,Attempt:0,}" May 8 00:26:45.574423 kubelet[2717]: E0508 00:26:45.574181 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:26:45.574580 containerd[1543]: time="2025-05-08T00:26:45.574529252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5pd5g,Uid:018ef4d8-1373-4ae6-a94f-0bc8a6cf43d9,Namespace:kube-system,Attempt:0,}" May 8 00:26:46.051852 kubelet[2717]: E0508 00:26:46.051809 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:26:46.067871 kubelet[2717]: I0508 00:26:46.067813 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-t5rcv" podStartSLOduration=5.068244593 podStartE2EDuration="9.067794407s" podCreationTimestamp="2025-05-08 00:26:37 +0000 UTC" firstStartedPulling="2025-05-08 00:26:37.49527291 +0000 UTC m=+16.621124875" lastFinishedPulling="2025-05-08 00:26:41.494822724 +0000 UTC m=+20.620674689" observedRunningTime="2025-05-08 00:26:46.066585798 +0000 UTC m=+25.192437723" watchObservedRunningTime="2025-05-08 00:26:46.067794407 +0000 UTC m=+25.193646372" May 8 00:26:47.061186 kubelet[2717]: E0508 00:26:47.061096 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:26:47.326448 systemd-networkd[1232]: cilium_host: Link UP May 8 00:26:47.326574 systemd-networkd[1232]: cilium_net: Link UP May 8 00:26:47.326727 systemd-networkd[1232]: cilium_net: Gained carrier May 8 00:26:47.326866 systemd-networkd[1232]: cilium_host: Gained carrier May 8 00:26:47.326970 systemd-networkd[1232]: cilium_net: Gained IPv6LL May 8 00:26:47.327116 systemd-networkd[1232]: cilium_host: Gained IPv6LL May 8 00:26:47.431954 systemd-networkd[1232]: cilium_vxlan: Link UP May 8 00:26:47.431964 systemd-networkd[1232]: cilium_vxlan: Gained carrier May 8 00:26:47.783026 kernel: NET: Registered PF_ALG protocol family May 8 00:26:48.056832 kubelet[2717]: E0508 00:26:48.056621 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:26:48.079199 systemd[1]: Started sshd@7-10.0.0.83:22-10.0.0.1:60698.service - OpenSSH per-connection server daemon (10.0.0.1:60698). 
May 8 00:26:48.114144 sshd[3758]: Accepted publickey for core from 10.0.0.1 port 60698 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:26:48.115499 sshd[3758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:26:48.120274 systemd-logind[1524]: New session 8 of user core. May 8 00:26:48.132242 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 00:26:48.264913 sshd[3758]: pam_unix(sshd:session): session closed for user core May 8 00:26:48.268635 systemd[1]: sshd@7-10.0.0.83:22-10.0.0.1:60698.service: Deactivated successfully. May 8 00:26:48.270893 systemd-logind[1524]: Session 8 logged out. Waiting for processes to exit. May 8 00:26:48.271009 systemd[1]: session-8.scope: Deactivated successfully. May 8 00:26:48.273495 systemd-logind[1524]: Removed session 8. May 8 00:26:48.388668 systemd-networkd[1232]: lxc_health: Link UP May 8 00:26:48.398874 systemd-networkd[1232]: lxc_health: Gained carrier May 8 00:26:48.726556 systemd-networkd[1232]: lxcd6949216d454: Link UP May 8 00:26:48.735045 kernel: eth0: renamed from tmp75b66 May 8 00:26:48.739175 systemd-networkd[1232]: lxc9d87cce166d3: Link UP May 8 00:26:48.739668 systemd-networkd[1232]: lxcd6949216d454: Gained carrier May 8 00:26:48.746368 kernel: eth0: renamed from tmp7a2f9 May 8 00:26:48.753452 systemd-networkd[1232]: lxc9d87cce166d3: Gained carrier May 8 00:26:48.951181 systemd-networkd[1232]: cilium_vxlan: Gained IPv6LL May 8 00:26:49.459682 kubelet[2717]: E0508 00:26:49.459621 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:26:49.911182 systemd-networkd[1232]: lxc_health: Gained IPv6LL May 8 00:26:50.060097 kubelet[2717]: E0508 00:26:50.059704 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:26:50.167205 systemd-networkd[1232]: lxc9d87cce166d3: Gained IPv6LL May 8 00:26:50.167488 systemd-networkd[1232]: lxcd6949216d454: Gained IPv6LL May 8 00:26:51.064928 kubelet[2717]: E0508 00:26:51.064645 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:26:52.235659 containerd[1543]: time="2025-05-08T00:26:52.235535661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:26:52.235659 containerd[1543]: time="2025-05-08T00:26:52.235599105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:26:52.235659 containerd[1543]: time="2025-05-08T00:26:52.235625346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:26:52.236407 containerd[1543]: time="2025-05-08T00:26:52.236136057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:26:52.243713 containerd[1543]: time="2025-05-08T00:26:52.243512133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:26:52.244518 containerd[1543]: time="2025-05-08T00:26:52.244471030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:26:52.244585 containerd[1543]: time="2025-05-08T00:26:52.244524033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:26:52.244727 containerd[1543]: time="2025-05-08T00:26:52.244675442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:26:52.263218 systemd-resolved[1447]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:26:52.265686 systemd-resolved[1447]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:26:52.281434 containerd[1543]: time="2025-05-08T00:26:52.281338532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5pd5g,Uid:018ef4d8-1373-4ae6-a94f-0bc8a6cf43d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a2f9da22410a8ef2661d814dbad11ec520f95ed9a921373cc15541fbcaf428d\"" May 8 00:26:52.288020 kubelet[2717]: E0508 00:26:52.287416 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:26:52.291692 containerd[1543]: time="2025-05-08T00:26:52.291646622Z" level=info msg="CreateContainer within sandbox \"7a2f9da22410a8ef2661d814dbad11ec520f95ed9a921373cc15541fbcaf428d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:26:52.294525 containerd[1543]: time="2025-05-08T00:26:52.294484190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fpzx8,Uid:02fe5119-9f3b-4396-9f27-66699eb66880,Namespace:kube-system,Attempt:0,} returns sandbox id \"75b66a0d34816c56ba001e020a89426fb6f48eb7a9baf15a33ca29b92483b7c4\"" May 8 00:26:52.295839 kubelet[2717]: E0508 00:26:52.295795 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:26:52.298855 containerd[1543]: time="2025-05-08T00:26:52.298816807Z" level=info msg="CreateContainer within sandbox \"75b66a0d34816c56ba001e020a89426fb6f48eb7a9baf15a33ca29b92483b7c4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:26:52.304506 containerd[1543]: time="2025-05-08T00:26:52.304474382Z" level=info msg="CreateContainer within sandbox \"7a2f9da22410a8ef2661d814dbad11ec520f95ed9a921373cc15541fbcaf428d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a60ee67631b1addb311a249d18deabcb98333461bc93398d26b3d4e32d363f16\"" May 8 00:26:52.305435 containerd[1543]: time="2025-05-08T00:26:52.305062617Z" level=info msg="StartContainer for \"a60ee67631b1addb311a249d18deabcb98333461bc93398d26b3d4e32d363f16\"" May 8 00:26:52.311186 containerd[1543]: time="2025-05-08T00:26:52.310541581Z" level=info msg="CreateContainer within sandbox \"75b66a0d34816c56ba001e020a89426fb6f48eb7a9baf15a33ca29b92483b7c4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ff0eaf09b0025bdbe5ee9a24f38bc8b070304572127ac81e427dd4fede1e5cd3\"" May 8 00:26:52.311186 containerd[1543]: time="2025-05-08T00:26:52.311052571Z" level=info msg="StartContainer 
for \"ff0eaf09b0025bdbe5ee9a24f38bc8b070304572127ac81e427dd4fede1e5cd3\"" May 8 00:26:52.371342 containerd[1543]: time="2025-05-08T00:26:52.371292497Z" level=info msg="StartContainer for \"ff0eaf09b0025bdbe5ee9a24f38bc8b070304572127ac81e427dd4fede1e5cd3\" returns successfully" May 8 00:26:52.371515 containerd[1543]: time="2025-05-08T00:26:52.371307578Z" level=info msg="StartContainer for \"a60ee67631b1addb311a249d18deabcb98333461bc93398d26b3d4e32d363f16\" returns successfully" May 8 00:26:53.069524 kubelet[2717]: E0508 00:26:53.069467 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:26:53.071882 kubelet[2717]: E0508 00:26:53.071789 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:26:53.080476 kubelet[2717]: I0508 00:26:53.080245 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fpzx8" podStartSLOduration=16.080232871 podStartE2EDuration="16.080232871s" podCreationTimestamp="2025-05-08 00:26:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:26:53.079802926 +0000 UTC m=+32.205654891" watchObservedRunningTime="2025-05-08 00:26:53.080232871 +0000 UTC m=+32.206084836" May 8 00:26:53.091156 kubelet[2717]: I0508 00:26:53.091093 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5pd5g" podStartSLOduration=16.091075613 podStartE2EDuration="16.091075613s" podCreationTimestamp="2025-05-08 00:26:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:26:53.090308369 +0000 UTC m=+32.216160334" watchObservedRunningTime="2025-05-08 00:26:53.091075613 +0000 UTC m=+32.216927578" May 8 00:26:53.282223 systemd[1]: Started sshd@8-10.0.0.83:22-10.0.0.1:41426.service - OpenSSH per-connection server daemon (10.0.0.1:41426). May 8 00:26:53.314968 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 41426 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:26:53.316242 sshd[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:26:53.319857 systemd-logind[1524]: New session 9 of user core. May 8 00:26:53.327282 systemd[1]: Started session-9.scope - Session 9 of User core. May 8 00:26:53.436220 sshd[4134]: pam_unix(sshd:session): session closed for user core May 8 00:26:53.439555 systemd[1]: sshd@8-10.0.0.83:22-10.0.0.1:41426.service: Deactivated successfully. May 8 00:26:53.441311 systemd[1]: session-9.scope: Deactivated successfully. May 8 00:26:53.441330 systemd-logind[1524]: Session 9 logged out. Waiting for processes to exit. May 8 00:26:53.446345 systemd-logind[1524]: Removed session 9. 
May 8 00:26:54.073145 kubelet[2717]: E0508 00:26:54.072938 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:26:54.073693 kubelet[2717]: E0508 00:26:54.073666 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:26:55.076070 kubelet[2717]: E0508 00:26:55.074838 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:26:55.076070 kubelet[2717]: E0508 00:26:55.074869 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:26:58.447205 systemd[1]: Started sshd@9-10.0.0.83:22-10.0.0.1:41434.service - OpenSSH per-connection server daemon (10.0.0.1:41434). May 8 00:26:58.477827 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 41434 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:26:58.479020 sshd[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:26:58.483168 systemd-logind[1524]: New session 10 of user core. May 8 00:26:58.495281 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 00:26:58.603436 sshd[4150]: pam_unix(sshd:session): session closed for user core May 8 00:26:58.607168 systemd[1]: sshd@9-10.0.0.83:22-10.0.0.1:41434.service: Deactivated successfully. May 8 00:26:58.609082 systemd-logind[1524]: Session 10 logged out. Waiting for processes to exit. May 8 00:26:58.609094 systemd[1]: session-10.scope: Deactivated successfully. May 8 00:26:58.610108 systemd-logind[1524]: Removed session 10. May 8 00:27:03.622242 systemd[1]: Started sshd@10-10.0.0.83:22-10.0.0.1:52712.service - OpenSSH per-connection server daemon (10.0.0.1:52712). May 8 00:27:03.654727 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 52712 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:27:03.655952 sshd[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:27:03.659786 systemd-logind[1524]: New session 11 of user core. May 8 00:27:03.670255 systemd[1]: Started session-11.scope - Session 11 of User core. May 8 00:27:03.784072 sshd[4167]: pam_unix(sshd:session): session closed for user core May 8 00:27:03.797212 systemd[1]: Started sshd@11-10.0.0.83:22-10.0.0.1:52726.service - OpenSSH per-connection server daemon (10.0.0.1:52726). May 8 00:27:03.797578 systemd[1]: sshd@10-10.0.0.83:22-10.0.0.1:52712.service: Deactivated successfully. May 8 00:27:03.800775 systemd-logind[1524]: Session 11 logged out. Waiting for processes to exit. May 8 00:27:03.800853 systemd[1]: session-11.scope: Deactivated successfully. May 8 00:27:03.802152 systemd-logind[1524]: Removed session 11. May 8 00:27:03.828452 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 52726 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:27:03.829639 sshd[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:27:03.833656 systemd-logind[1524]: New session 12 of user core. May 8 00:27:03.838323 systemd[1]: Started session-12.scope - Session 12 of User core. 
May 8 00:27:03.982666 sshd[4181]: pam_unix(sshd:session): session closed for user core May 8 00:27:03.991247 systemd[1]: Started sshd@12-10.0.0.83:22-10.0.0.1:52730.service - OpenSSH per-connection server daemon (10.0.0.1:52730). May 8 00:27:03.993235 systemd[1]: sshd@11-10.0.0.83:22-10.0.0.1:52726.service: Deactivated successfully. May 8 00:27:03.997590 systemd[1]: session-12.scope: Deactivated successfully. May 8 00:27:04.001969 systemd-logind[1524]: Session 12 logged out. Waiting for processes to exit. May 8 00:27:04.010389 systemd-logind[1524]: Removed session 12. May 8 00:27:04.030773 sshd[4194]: Accepted publickey for core from 10.0.0.1 port 52730 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:27:04.032073 sshd[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:27:04.035976 systemd-logind[1524]: New session 13 of user core. May 8 00:27:04.046269 systemd[1]: Started session-13.scope - Session 13 of User core. May 8 00:27:04.158561 sshd[4194]: pam_unix(sshd:session): session closed for user core May 8 00:27:04.162057 systemd[1]: sshd@12-10.0.0.83:22-10.0.0.1:52730.service: Deactivated successfully. May 8 00:27:04.165640 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:27:04.166344 systemd-logind[1524]: Session 13 logged out. Waiting for processes to exit. May 8 00:27:04.167680 systemd-logind[1524]: Removed session 13. May 8 00:27:09.169196 systemd[1]: Started sshd@13-10.0.0.83:22-10.0.0.1:52732.service - OpenSSH per-connection server daemon (10.0.0.1:52732). May 8 00:27:09.200577 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 52732 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:27:09.201936 sshd[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:27:09.205840 systemd-logind[1524]: New session 14 of user core. May 8 00:27:09.215285 systemd[1]: Started session-14.scope - Session 14 of User core. May 8 00:27:09.322858 sshd[4216]: pam_unix(sshd:session): session closed for user core May 8 00:27:09.333217 systemd[1]: Started sshd@14-10.0.0.83:22-10.0.0.1:52740.service - OpenSSH per-connection server daemon (10.0.0.1:52740). May 8 00:27:09.333585 systemd[1]: sshd@13-10.0.0.83:22-10.0.0.1:52732.service: Deactivated successfully. May 8 00:27:09.336119 systemd[1]: session-14.scope: Deactivated successfully. May 8 00:27:09.337111 systemd-logind[1524]: Session 14 logged out. Waiting for processes to exit. May 8 00:27:09.338354 systemd-logind[1524]: Removed session 14. May 8 00:27:09.365392 sshd[4229]: Accepted publickey for core from 10.0.0.1 port 52740 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:27:09.366762 sshd[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:27:09.371889 systemd-logind[1524]: New session 15 of user core. May 8 00:27:09.387219 systemd[1]: Started session-15.scope - Session 15 of User core. May 8 00:27:09.615250 sshd[4229]: pam_unix(sshd:session): session closed for user core May 8 00:27:09.621235 systemd[1]: Started sshd@15-10.0.0.83:22-10.0.0.1:52744.service - OpenSSH per-connection server daemon (10.0.0.1:52744). May 8 00:27:09.621604 systemd[1]: sshd@14-10.0.0.83:22-10.0.0.1:52740.service: Deactivated successfully. May 8 00:27:09.624016 systemd[1]: session-15.scope: Deactivated successfully. May 8 00:27:09.625292 systemd-logind[1524]: Session 15 logged out. Waiting for processes to exit. May 8 00:27:09.627401 systemd-logind[1524]: Removed session 15. 
May 8 00:27:09.658224 sshd[4242]: Accepted publickey for core from 10.0.0.1 port 52744 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:27:09.659486 sshd[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:27:09.664080 systemd-logind[1524]: New session 16 of user core. May 8 00:27:09.682402 systemd[1]: Started session-16.scope - Session 16 of User core. May 8 00:27:10.967520 sshd[4242]: pam_unix(sshd:session): session closed for user core May 8 00:27:10.975256 systemd[1]: Started sshd@16-10.0.0.83:22-10.0.0.1:52756.service - OpenSSH per-connection server daemon (10.0.0.1:52756). May 8 00:27:10.978222 systemd[1]: sshd@15-10.0.0.83:22-10.0.0.1:52744.service: Deactivated successfully. May 8 00:27:10.982744 systemd[1]: session-16.scope: Deactivated successfully. May 8 00:27:10.984836 systemd-logind[1524]: Session 16 logged out. Waiting for processes to exit. May 8 00:27:10.987626 systemd-logind[1524]: Removed session 16. May 8 00:27:11.018081 sshd[4264]: Accepted publickey for core from 10.0.0.1 port 52756 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:27:11.019561 sshd[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:27:11.024051 systemd-logind[1524]: New session 17 of user core. May 8 00:27:11.033255 systemd[1]: Started session-17.scope - Session 17 of User core. May 8 00:27:11.260240 sshd[4264]: pam_unix(sshd:session): session closed for user core May 8 00:27:11.271290 systemd[1]: Started sshd@17-10.0.0.83:22-10.0.0.1:52768.service - OpenSSH per-connection server daemon (10.0.0.1:52768). May 8 00:27:11.272464 systemd[1]: sshd@16-10.0.0.83:22-10.0.0.1:52756.service: Deactivated successfully. May 8 00:27:11.274173 systemd[1]: session-17.scope: Deactivated successfully. May 8 00:27:11.275418 systemd-logind[1524]: Session 17 logged out. Waiting for processes to exit. May 8 00:27:11.276678 systemd-logind[1524]: Removed session 17. May 8 00:27:11.304366 sshd[4279]: Accepted publickey for core from 10.0.0.1 port 52768 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:27:11.305592 sshd[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:27:11.309942 systemd-logind[1524]: New session 18 of user core. May 8 00:27:11.325725 systemd[1]: Started session-18.scope - Session 18 of User core. May 8 00:27:11.431555 sshd[4279]: pam_unix(sshd:session): session closed for user core May 8 00:27:11.434715 systemd[1]: sshd@17-10.0.0.83:22-10.0.0.1:52768.service: Deactivated successfully. May 8 00:27:11.437289 systemd[1]: session-18.scope: Deactivated successfully. May 8 00:27:11.437555 systemd-logind[1524]: Session 18 logged out. Waiting for processes to exit. May 8 00:27:11.438759 systemd-logind[1524]: Removed session 18. May 8 00:27:16.447230 systemd[1]: Started sshd@18-10.0.0.83:22-10.0.0.1:49664.service - OpenSSH per-connection server daemon (10.0.0.1:49664). May 8 00:27:16.477747 sshd[4300]: Accepted publickey for core from 10.0.0.1 port 49664 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:27:16.479089 sshd[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:27:16.482537 systemd-logind[1524]: New session 19 of user core. May 8 00:27:16.492300 systemd[1]: Started session-19.scope - Session 19 of User core. 
May 8 00:27:16.595376 sshd[4300]: pam_unix(sshd:session): session closed for user core May 8 00:27:16.597953 systemd[1]: sshd@18-10.0.0.83:22-10.0.0.1:49664.service: Deactivated successfully. May 8 00:27:16.600704 systemd-logind[1524]: Session 19 logged out. Waiting for processes to exit. May 8 00:27:16.601033 systemd[1]: session-19.scope: Deactivated successfully. May 8 00:27:16.603292 systemd-logind[1524]: Removed session 19. May 8 00:27:21.608224 systemd[1]: Started sshd@19-10.0.0.83:22-10.0.0.1:49680.service - OpenSSH per-connection server daemon (10.0.0.1:49680). May 8 00:27:21.643237 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 49680 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:27:21.643666 sshd[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:27:21.648492 systemd-logind[1524]: New session 20 of user core. May 8 00:27:21.663286 systemd[1]: Started session-20.scope - Session 20 of User core. May 8 00:27:21.774627 sshd[4317]: pam_unix(sshd:session): session closed for user core May 8 00:27:21.778268 systemd[1]: sshd@19-10.0.0.83:22-10.0.0.1:49680.service: Deactivated successfully. May 8 00:27:21.780603 systemd[1]: session-20.scope: Deactivated successfully. May 8 00:27:21.781391 systemd-logind[1524]: Session 20 logged out. Waiting for processes to exit. May 8 00:27:21.782285 systemd-logind[1524]: Removed session 20. May 8 00:27:26.793253 systemd[1]: Started sshd@20-10.0.0.83:22-10.0.0.1:41446.service - OpenSSH per-connection server daemon (10.0.0.1:41446). May 8 00:27:26.825976 sshd[4333]: Accepted publickey for core from 10.0.0.1 port 41446 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:27:26.827129 sshd[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:27:26.832663 systemd-logind[1524]: New session 21 of user core. May 8 00:27:26.847246 systemd[1]: Started session-21.scope - Session 21 of User core. May 8 00:27:26.968210 sshd[4333]: pam_unix(sshd:session): session closed for user core May 8 00:27:26.975254 systemd[1]: Started sshd@21-10.0.0.83:22-10.0.0.1:41450.service - OpenSSH per-connection server daemon (10.0.0.1:41450). May 8 00:27:26.975647 systemd[1]: sshd@20-10.0.0.83:22-10.0.0.1:41446.service: Deactivated successfully. May 8 00:27:26.977085 systemd[1]: session-21.scope: Deactivated successfully. May 8 00:27:26.981021 systemd-logind[1524]: Session 21 logged out. Waiting for processes to exit. May 8 00:27:26.983369 systemd-logind[1524]: Removed session 21. May 8 00:27:27.006519 sshd[4345]: Accepted publickey for core from 10.0.0.1 port 41450 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:27:27.007795 sshd[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:27:27.012891 systemd-logind[1524]: New session 22 of user core. May 8 00:27:27.018241 systemd[1]: Started session-22.scope - Session 22 of User core. 
May 8 00:27:28.578921 containerd[1543]: time="2025-05-08T00:27:28.578876821Z" level=info msg="StopContainer for \"6daac8f0e73c210222f729b1076105b0f3ff260d1fe524fafa33106b90992eda\" with timeout 30 (s)" May 8 00:27:28.580595 containerd[1543]: time="2025-05-08T00:27:28.579982672Z" level=info msg="Stop container \"6daac8f0e73c210222f729b1076105b0f3ff260d1fe524fafa33106b90992eda\" with signal terminated" May 8 00:27:28.616174 containerd[1543]: time="2025-05-08T00:27:28.616133410Z" level=info msg="StopContainer for \"b134641c00d6858ff2a2d6ca960656822e77ad9800d92834447f8e2d3174fef1\" with timeout 2 (s)" May 8 00:27:28.616448 containerd[1543]: time="2025-05-08T00:27:28.616415243Z" level=info msg="Stop container \"b134641c00d6858ff2a2d6ca960656822e77ad9800d92834447f8e2d3174fef1\" with signal terminated" May 8 00:27:28.616768 containerd[1543]: time="2025-05-08T00:27:28.616730954Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:27:28.624084 systemd-networkd[1232]: lxc_health: Link DOWN May 8 00:27:28.624146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6daac8f0e73c210222f729b1076105b0f3ff260d1fe524fafa33106b90992eda-rootfs.mount: Deactivated successfully. May 8 00:27:28.624188 systemd-networkd[1232]: lxc_health: Lost carrier May 8 00:27:28.637311 containerd[1543]: time="2025-05-08T00:27:28.637245740Z" level=info msg="shim disconnected" id=6daac8f0e73c210222f729b1076105b0f3ff260d1fe524fafa33106b90992eda namespace=k8s.io May 8 00:27:28.637311 containerd[1543]: time="2025-05-08T00:27:28.637306738Z" level=warning msg="cleaning up after shim disconnected" id=6daac8f0e73c210222f729b1076105b0f3ff260d1fe524fafa33106b90992eda namespace=k8s.io May 8 00:27:28.637311 containerd[1543]: time="2025-05-08T00:27:28.637316498Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:27:28.661360 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b134641c00d6858ff2a2d6ca960656822e77ad9800d92834447f8e2d3174fef1-rootfs.mount: Deactivated successfully. 
May 8 00:27:28.668454 containerd[1543]: time="2025-05-08T00:27:28.668110775Z" level=info msg="shim disconnected" id=b134641c00d6858ff2a2d6ca960656822e77ad9800d92834447f8e2d3174fef1 namespace=k8s.io May 8 00:27:28.668454 containerd[1543]: time="2025-05-08T00:27:28.668170573Z" level=warning msg="cleaning up after shim disconnected" id=b134641c00d6858ff2a2d6ca960656822e77ad9800d92834447f8e2d3174fef1 namespace=k8s.io May 8 00:27:28.668454 containerd[1543]: time="2025-05-08T00:27:28.668179213Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:27:28.676547 containerd[1543]: time="2025-05-08T00:27:28.676493956Z" level=info msg="StopContainer for \"6daac8f0e73c210222f729b1076105b0f3ff260d1fe524fafa33106b90992eda\" returns successfully" May 8 00:27:28.680957 containerd[1543]: time="2025-05-08T00:27:28.680912521Z" level=info msg="StopPodSandbox for \"3d172fd132cb5962ebedf60f87459eab626d156c9f7e4199e154583ca5c8945f\"" May 8 00:27:28.681061 containerd[1543]: time="2025-05-08T00:27:28.680968560Z" level=info msg="Container to stop \"6daac8f0e73c210222f729b1076105b0f3ff260d1fe524fafa33106b90992eda\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:27:28.684642 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3d172fd132cb5962ebedf60f87459eab626d156c9f7e4199e154583ca5c8945f-shm.mount: Deactivated successfully. May 8 00:27:28.697116 containerd[1543]: time="2025-05-08T00:27:28.697074380Z" level=info msg="StopContainer for \"b134641c00d6858ff2a2d6ca960656822e77ad9800d92834447f8e2d3174fef1\" returns successfully" May 8 00:27:28.698306 containerd[1543]: time="2025-05-08T00:27:28.698109473Z" level=info msg="StopPodSandbox for \"9c2aa04892051db1ea7a0d9b7806d65b5ce7347b9856f6c6e02054389e51936c\"" May 8 00:27:28.698306 containerd[1543]: time="2025-05-08T00:27:28.698153672Z" level=info msg="Container to stop \"4f4d6118ae0f42f91de00c8d5de76407f3035d7bd644e0a00d0652bb5b22fd1a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:27:28.698306 containerd[1543]: time="2025-05-08T00:27:28.698166391Z" level=info msg="Container to stop \"b134641c00d6858ff2a2d6ca960656822e77ad9800d92834447f8e2d3174fef1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:27:28.698306 containerd[1543]: time="2025-05-08T00:27:28.698177511Z" level=info msg="Container to stop \"0ef460b63260dba5e8ae9f22b1f6747fd38e734d981fad877e8302b4f51280ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:27:28.698306 containerd[1543]: time="2025-05-08T00:27:28.698186871Z" level=info msg="Container to stop \"ce73282e1345f1f9d31efb382f4f1bc64d3a5eb14fd1931da401b804a2698f5c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:27:28.698306 containerd[1543]: time="2025-05-08T00:27:28.698197230Z" level=info msg="Container to stop \"c47d8f3024ad48e04dd13a2c90e8b4aeb96343b43098e03424117fc744b6777e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:27:28.700156 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9c2aa04892051db1ea7a0d9b7806d65b5ce7347b9856f6c6e02054389e51936c-shm.mount: Deactivated successfully. 
May 8 00:27:28.712513 containerd[1543]: time="2025-05-08T00:27:28.712455179Z" level=info msg="shim disconnected" id=3d172fd132cb5962ebedf60f87459eab626d156c9f7e4199e154583ca5c8945f namespace=k8s.io May 8 00:27:28.712929 containerd[1543]: time="2025-05-08T00:27:28.712785010Z" level=warning msg="cleaning up after shim disconnected" id=3d172fd132cb5962ebedf60f87459eab626d156c9f7e4199e154583ca5c8945f namespace=k8s.io May 8 00:27:28.712929 containerd[1543]: time="2025-05-08T00:27:28.712807209Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:27:28.732700 containerd[1543]: time="2025-05-08T00:27:28.732638052Z" level=info msg="shim disconnected" id=9c2aa04892051db1ea7a0d9b7806d65b5ce7347b9856f6c6e02054389e51936c namespace=k8s.io May 8 00:27:28.733170 containerd[1543]: time="2025-05-08T00:27:28.733033682Z" level=warning msg="cleaning up after shim disconnected" id=9c2aa04892051db1ea7a0d9b7806d65b5ce7347b9856f6c6e02054389e51936c namespace=k8s.io May 8 00:27:28.733170 containerd[1543]: time="2025-05-08T00:27:28.733053202Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:27:28.745822 containerd[1543]: time="2025-05-08T00:27:28.745750951Z" level=info msg="TearDown network for sandbox \"9c2aa04892051db1ea7a0d9b7806d65b5ce7347b9856f6c6e02054389e51936c\" successfully" May 8 00:27:28.745822 containerd[1543]: time="2025-05-08T00:27:28.745782710Z" level=info msg="StopPodSandbox for \"9c2aa04892051db1ea7a0d9b7806d65b5ce7347b9856f6c6e02054389e51936c\" returns successfully" May 8 00:27:28.754967 containerd[1543]: time="2025-05-08T00:27:28.754937391Z" level=info msg="TearDown network for sandbox \"3d172fd132cb5962ebedf60f87459eab626d156c9f7e4199e154583ca5c8945f\" successfully" May 8 00:27:28.754967 containerd[1543]: time="2025-05-08T00:27:28.754964750Z" level=info msg="StopPodSandbox for \"3d172fd132cb5962ebedf60f87459eab626d156c9f7e4199e154583ca5c8945f\" returns successfully" May 8 00:27:28.855837 kubelet[2717]: I0508 00:27:28.855520 2717 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-cni-path\") pod \"35fe04ae-d67f-4996-b89c-78a10e1c1691\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " May 8 00:27:28.855837 kubelet[2717]: I0508 00:27:28.855571 2717 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35fe04ae-d67f-4996-b89c-78a10e1c1691-cilium-config-path\") pod \"35fe04ae-d67f-4996-b89c-78a10e1c1691\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " May 8 00:27:28.855837 kubelet[2717]: I0508 00:27:28.855589 2717 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-host-proc-sys-kernel\") pod \"35fe04ae-d67f-4996-b89c-78a10e1c1691\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " May 8 00:27:28.855837 kubelet[2717]: I0508 00:27:28.855606 2717 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-etc-cni-netd\") pod \"35fe04ae-d67f-4996-b89c-78a10e1c1691\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " May 8 00:27:28.855837 kubelet[2717]: I0508 00:27:28.855632 2717 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/35fe04ae-d67f-4996-b89c-78a10e1c1691-hubble-tls\") pod \"35fe04ae-d67f-4996-b89c-78a10e1c1691\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " May 8 00:27:28.855837 kubelet[2717]: I0508 00:27:28.855646 2717 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-host-proc-sys-net\") pod \"35fe04ae-d67f-4996-b89c-78a10e1c1691\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " May 8 00:27:28.856356 kubelet[2717]: I0508 00:27:28.855668 2717 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/35fe04ae-d67f-4996-b89c-78a10e1c1691-clustermesh-secrets\") pod \"35fe04ae-d67f-4996-b89c-78a10e1c1691\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " May 8 00:27:28.856356 kubelet[2717]: I0508 00:27:28.855683 2717 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6twc9\" (UniqueName: \"kubernetes.io/projected/35fe04ae-d67f-4996-b89c-78a10e1c1691-kube-api-access-6twc9\") pod \"35fe04ae-d67f-4996-b89c-78a10e1c1691\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " May 8 00:27:28.856356 kubelet[2717]: I0508 00:27:28.855698 2717 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-xtables-lock\") pod \"35fe04ae-d67f-4996-b89c-78a10e1c1691\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " May 8 00:27:28.856356 kubelet[2717]: I0508 00:27:28.855712 2717 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-hostproc\") pod \"35fe04ae-d67f-4996-b89c-78a10e1c1691\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " May 8 00:27:28.856356 kubelet[2717]: I0508 00:27:28.855727 2717 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-cilium-cgroup\") pod \"35fe04ae-d67f-4996-b89c-78a10e1c1691\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " May 8 00:27:28.856356 kubelet[2717]: I0508 00:27:28.855740 2717 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-cilium-run\") pod \"35fe04ae-d67f-4996-b89c-78a10e1c1691\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " May 8 00:27:28.856483 kubelet[2717]: I0508 00:27:28.855754 2717 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-bpf-maps\") pod \"35fe04ae-d67f-4996-b89c-78a10e1c1691\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " May 8 00:27:28.856483 kubelet[2717]: I0508 00:27:28.855768 2717 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-lib-modules\") pod \"35fe04ae-d67f-4996-b89c-78a10e1c1691\" (UID: \"35fe04ae-d67f-4996-b89c-78a10e1c1691\") " May 8 00:27:28.859704 kubelet[2717]: I0508 00:27:28.859487 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-lib-modules" (OuterVolumeSpecName: "lib-modules") 
pod "35fe04ae-d67f-4996-b89c-78a10e1c1691" (UID: "35fe04ae-d67f-4996-b89c-78a10e1c1691"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:27:28.859704 kubelet[2717]: I0508 00:27:28.859487 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-cni-path" (OuterVolumeSpecName: "cni-path") pod "35fe04ae-d67f-4996-b89c-78a10e1c1691" (UID: "35fe04ae-d67f-4996-b89c-78a10e1c1691"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:27:28.859704 kubelet[2717]: I0508 00:27:28.859558 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "35fe04ae-d67f-4996-b89c-78a10e1c1691" (UID: "35fe04ae-d67f-4996-b89c-78a10e1c1691"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:27:28.859704 kubelet[2717]: I0508 00:27:28.859574 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "35fe04ae-d67f-4996-b89c-78a10e1c1691" (UID: "35fe04ae-d67f-4996-b89c-78a10e1c1691"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:27:28.859704 kubelet[2717]: I0508 00:27:28.859587 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "35fe04ae-d67f-4996-b89c-78a10e1c1691" (UID: "35fe04ae-d67f-4996-b89c-78a10e1c1691"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:27:28.859867 kubelet[2717]: I0508 00:27:28.859600 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "35fe04ae-d67f-4996-b89c-78a10e1c1691" (UID: "35fe04ae-d67f-4996-b89c-78a10e1c1691"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:27:28.859867 kubelet[2717]: I0508 00:27:28.859640 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "35fe04ae-d67f-4996-b89c-78a10e1c1691" (UID: "35fe04ae-d67f-4996-b89c-78a10e1c1691"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:27:28.859867 kubelet[2717]: I0508 00:27:28.859651 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "35fe04ae-d67f-4996-b89c-78a10e1c1691" (UID: "35fe04ae-d67f-4996-b89c-78a10e1c1691"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:27:28.859867 kubelet[2717]: I0508 00:27:28.859669 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "35fe04ae-d67f-4996-b89c-78a10e1c1691" (UID: "35fe04ae-d67f-4996-b89c-78a10e1c1691"). 
InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:27:28.861719 kubelet[2717]: I0508 00:27:28.861681 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-hostproc" (OuterVolumeSpecName: "hostproc") pod "35fe04ae-d67f-4996-b89c-78a10e1c1691" (UID: "35fe04ae-d67f-4996-b89c-78a10e1c1691"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:27:28.866681 kubelet[2717]: I0508 00:27:28.866562 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35fe04ae-d67f-4996-b89c-78a10e1c1691-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "35fe04ae-d67f-4996-b89c-78a10e1c1691" (UID: "35fe04ae-d67f-4996-b89c-78a10e1c1691"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:27:28.867179 kubelet[2717]: I0508 00:27:28.867153 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35fe04ae-d67f-4996-b89c-78a10e1c1691-kube-api-access-6twc9" (OuterVolumeSpecName: "kube-api-access-6twc9") pod "35fe04ae-d67f-4996-b89c-78a10e1c1691" (UID: "35fe04ae-d67f-4996-b89c-78a10e1c1691"). InnerVolumeSpecName "kube-api-access-6twc9". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:27:28.867241 kubelet[2717]: I0508 00:27:28.867208 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35fe04ae-d67f-4996-b89c-78a10e1c1691-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "35fe04ae-d67f-4996-b89c-78a10e1c1691" (UID: "35fe04ae-d67f-4996-b89c-78a10e1c1691"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 00:27:28.867460 kubelet[2717]: I0508 00:27:28.867427 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35fe04ae-d67f-4996-b89c-78a10e1c1691-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "35fe04ae-d67f-4996-b89c-78a10e1c1691" (UID: "35fe04ae-d67f-4996-b89c-78a10e1c1691"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:27:28.957041 kubelet[2717]: I0508 00:27:28.956857 2717 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d72e40fb-e48e-47d9-91f3-3f01f9103004-cilium-config-path\") pod \"d72e40fb-e48e-47d9-91f3-3f01f9103004\" (UID: \"d72e40fb-e48e-47d9-91f3-3f01f9103004\") " May 8 00:27:28.957041 kubelet[2717]: I0508 00:27:28.956896 2717 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8r9t\" (UniqueName: \"kubernetes.io/projected/d72e40fb-e48e-47d9-91f3-3f01f9103004-kube-api-access-s8r9t\") pod \"d72e40fb-e48e-47d9-91f3-3f01f9103004\" (UID: \"d72e40fb-e48e-47d9-91f3-3f01f9103004\") " May 8 00:27:28.957041 kubelet[2717]: I0508 00:27:28.956929 2717 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-hostproc\") on node \"localhost\" DevicePath \"\"" May 8 00:27:28.957041 kubelet[2717]: I0508 00:27:28.956940 2717 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 8 00:27:28.957041 kubelet[2717]: I0508 00:27:28.956952 2717 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-cilium-run\") on node \"localhost\" DevicePath \"\"" May 8 00:27:28.957041 kubelet[2717]: I0508 00:27:28.956960 2717 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 8 00:27:28.957041 kubelet[2717]: I0508 00:27:28.956967 2717 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-lib-modules\") on node \"localhost\" DevicePath \"\"" May 8 00:27:28.957041 kubelet[2717]: I0508 00:27:28.956975 2717 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35fe04ae-d67f-4996-b89c-78a10e1c1691-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 8 00:27:28.957294 kubelet[2717]: I0508 00:27:28.956982 2717 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-cni-path\") on node \"localhost\" DevicePath \"\"" May 8 00:27:28.957294 kubelet[2717]: I0508 00:27:28.957013 2717 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 8 00:27:28.957294 kubelet[2717]: I0508 00:27:28.957022 2717 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 8 00:27:28.957294 kubelet[2717]: I0508 00:27:28.957029 2717 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/35fe04ae-d67f-4996-b89c-78a10e1c1691-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 8 00:27:28.957294 kubelet[2717]: I0508 00:27:28.957036 2717 reconciler_common.go:289] "Volume detached for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 8 00:27:28.957294 kubelet[2717]: I0508 00:27:28.957043 2717 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35fe04ae-d67f-4996-b89c-78a10e1c1691-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 8 00:27:28.957294 kubelet[2717]: I0508 00:27:28.957051 2717 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/35fe04ae-d67f-4996-b89c-78a10e1c1691-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 8 00:27:28.957294 kubelet[2717]: I0508 00:27:28.957060 2717 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-6twc9\" (UniqueName: \"kubernetes.io/projected/35fe04ae-d67f-4996-b89c-78a10e1c1691-kube-api-access-6twc9\") on node \"localhost\" DevicePath \"\"" May 8 00:27:28.958867 kubelet[2717]: I0508 00:27:28.958837 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d72e40fb-e48e-47d9-91f3-3f01f9103004-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d72e40fb-e48e-47d9-91f3-3f01f9103004" (UID: "d72e40fb-e48e-47d9-91f3-3f01f9103004"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:27:28.959285 kubelet[2717]: I0508 00:27:28.959249 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d72e40fb-e48e-47d9-91f3-3f01f9103004-kube-api-access-s8r9t" (OuterVolumeSpecName: "kube-api-access-s8r9t") pod "d72e40fb-e48e-47d9-91f3-3f01f9103004" (UID: "d72e40fb-e48e-47d9-91f3-3f01f9103004"). InnerVolumeSpecName "kube-api-access-s8r9t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:27:29.057277 kubelet[2717]: I0508 00:27:29.057219 2717 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d72e40fb-e48e-47d9-91f3-3f01f9103004-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 8 00:27:29.057277 kubelet[2717]: I0508 00:27:29.057254 2717 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-s8r9t\" (UniqueName: \"kubernetes.io/projected/d72e40fb-e48e-47d9-91f3-3f01f9103004-kube-api-access-s8r9t\") on node \"localhost\" DevicePath \"\"" May 8 00:27:29.151749 kubelet[2717]: I0508 00:27:29.151626 2717 scope.go:117] "RemoveContainer" containerID="6daac8f0e73c210222f729b1076105b0f3ff260d1fe524fafa33106b90992eda" May 8 00:27:29.154773 containerd[1543]: time="2025-05-08T00:27:29.153642828Z" level=info msg="RemoveContainer for \"6daac8f0e73c210222f729b1076105b0f3ff260d1fe524fafa33106b90992eda\"" May 8 00:27:29.156183 containerd[1543]: time="2025-05-08T00:27:29.156153926Z" level=info msg="RemoveContainer for \"6daac8f0e73c210222f729b1076105b0f3ff260d1fe524fafa33106b90992eda\" returns successfully" May 8 00:27:29.157385 kubelet[2717]: I0508 00:27:29.156380 2717 scope.go:117] "RemoveContainer" containerID="6daac8f0e73c210222f729b1076105b0f3ff260d1fe524fafa33106b90992eda" May 8 00:27:29.157463 containerd[1543]: time="2025-05-08T00:27:29.157294538Z" level=error msg="ContainerStatus for \"6daac8f0e73c210222f729b1076105b0f3ff260d1fe524fafa33106b90992eda\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6daac8f0e73c210222f729b1076105b0f3ff260d1fe524fafa33106b90992eda\": not found" May 8 00:27:29.163093 kubelet[2717]: E0508 00:27:29.163043 2717 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6daac8f0e73c210222f729b1076105b0f3ff260d1fe524fafa33106b90992eda\": not found" containerID="6daac8f0e73c210222f729b1076105b0f3ff260d1fe524fafa33106b90992eda" May 8 00:27:29.163177 kubelet[2717]: I0508 00:27:29.163099 2717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6daac8f0e73c210222f729b1076105b0f3ff260d1fe524fafa33106b90992eda"} err="failed to get container status \"6daac8f0e73c210222f729b1076105b0f3ff260d1fe524fafa33106b90992eda\": rpc error: code = NotFound desc = an error occurred when try to find container \"6daac8f0e73c210222f729b1076105b0f3ff260d1fe524fafa33106b90992eda\": not found" May 8 00:27:29.163205 kubelet[2717]: I0508 00:27:29.163182 2717 scope.go:117] "RemoveContainer" containerID="b134641c00d6858ff2a2d6ca960656822e77ad9800d92834447f8e2d3174fef1" May 8 00:27:29.165125 containerd[1543]: time="2025-05-08T00:27:29.165096187Z" level=info msg="RemoveContainer for \"b134641c00d6858ff2a2d6ca960656822e77ad9800d92834447f8e2d3174fef1\"" May 8 00:27:29.167821 containerd[1543]: time="2025-05-08T00:27:29.167788080Z" level=info msg="RemoveContainer for \"b134641c00d6858ff2a2d6ca960656822e77ad9800d92834447f8e2d3174fef1\" returns successfully" May 8 00:27:29.169093 kubelet[2717]: I0508 00:27:29.169060 2717 scope.go:117] "RemoveContainer" containerID="ce73282e1345f1f9d31efb382f4f1bc64d3a5eb14fd1931da401b804a2698f5c" May 8 00:27:29.171959 containerd[1543]: time="2025-05-08T00:27:29.171139638Z" level=info msg="RemoveContainer for \"ce73282e1345f1f9d31efb382f4f1bc64d3a5eb14fd1931da401b804a2698f5c\"" May 8 00:27:29.173507 containerd[1543]: 
time="2025-05-08T00:27:29.173483141Z" level=info msg="RemoveContainer for \"ce73282e1345f1f9d31efb382f4f1bc64d3a5eb14fd1931da401b804a2698f5c\" returns successfully" May 8 00:27:29.173719 kubelet[2717]: I0508 00:27:29.173696 2717 scope.go:117] "RemoveContainer" containerID="4f4d6118ae0f42f91de00c8d5de76407f3035d7bd644e0a00d0652bb5b22fd1a" May 8 00:27:29.174592 containerd[1543]: time="2025-05-08T00:27:29.174568994Z" level=info msg="RemoveContainer for \"4f4d6118ae0f42f91de00c8d5de76407f3035d7bd644e0a00d0652bb5b22fd1a\"" May 8 00:27:29.176861 containerd[1543]: time="2025-05-08T00:27:29.176828858Z" level=info msg="RemoveContainer for \"4f4d6118ae0f42f91de00c8d5de76407f3035d7bd644e0a00d0652bb5b22fd1a\" returns successfully" May 8 00:27:29.177462 kubelet[2717]: I0508 00:27:29.177088 2717 scope.go:117] "RemoveContainer" containerID="0ef460b63260dba5e8ae9f22b1f6747fd38e734d981fad877e8302b4f51280ed" May 8 00:27:29.178934 containerd[1543]: time="2025-05-08T00:27:29.178904327Z" level=info msg="RemoveContainer for \"0ef460b63260dba5e8ae9f22b1f6747fd38e734d981fad877e8302b4f51280ed\"" May 8 00:27:29.182209 containerd[1543]: time="2025-05-08T00:27:29.182118569Z" level=info msg="RemoveContainer for \"0ef460b63260dba5e8ae9f22b1f6747fd38e734d981fad877e8302b4f51280ed\" returns successfully" May 8 00:27:29.182499 kubelet[2717]: I0508 00:27:29.182374 2717 scope.go:117] "RemoveContainer" containerID="c47d8f3024ad48e04dd13a2c90e8b4aeb96343b43098e03424117fc744b6777e" May 8 00:27:29.183312 containerd[1543]: time="2025-05-08T00:27:29.183287340Z" level=info msg="RemoveContainer for \"c47d8f3024ad48e04dd13a2c90e8b4aeb96343b43098e03424117fc744b6777e\"" May 8 00:27:29.185386 containerd[1543]: time="2025-05-08T00:27:29.185300090Z" level=info msg="RemoveContainer for \"c47d8f3024ad48e04dd13a2c90e8b4aeb96343b43098e03424117fc744b6777e\" returns successfully" May 8 00:27:29.185454 kubelet[2717]: I0508 00:27:29.185441 2717 scope.go:117] "RemoveContainer" containerID="b134641c00d6858ff2a2d6ca960656822e77ad9800d92834447f8e2d3174fef1" May 8 00:27:29.185632 containerd[1543]: time="2025-05-08T00:27:29.185601123Z" level=error msg="ContainerStatus for \"b134641c00d6858ff2a2d6ca960656822e77ad9800d92834447f8e2d3174fef1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b134641c00d6858ff2a2d6ca960656822e77ad9800d92834447f8e2d3174fef1\": not found" May 8 00:27:29.185727 kubelet[2717]: E0508 00:27:29.185708 2717 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b134641c00d6858ff2a2d6ca960656822e77ad9800d92834447f8e2d3174fef1\": not found" containerID="b134641c00d6858ff2a2d6ca960656822e77ad9800d92834447f8e2d3174fef1" May 8 00:27:29.185789 kubelet[2717]: I0508 00:27:29.185757 2717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b134641c00d6858ff2a2d6ca960656822e77ad9800d92834447f8e2d3174fef1"} err="failed to get container status \"b134641c00d6858ff2a2d6ca960656822e77ad9800d92834447f8e2d3174fef1\": rpc error: code = NotFound desc = an error occurred when try to find container \"b134641c00d6858ff2a2d6ca960656822e77ad9800d92834447f8e2d3174fef1\": not found" May 8 00:27:29.185820 kubelet[2717]: I0508 00:27:29.185790 2717 scope.go:117] "RemoveContainer" containerID="ce73282e1345f1f9d31efb382f4f1bc64d3a5eb14fd1931da401b804a2698f5c" May 8 00:27:29.185978 containerd[1543]: time="2025-05-08T00:27:29.185953674Z" level=error msg="ContainerStatus 
for \"ce73282e1345f1f9d31efb382f4f1bc64d3a5eb14fd1931da401b804a2698f5c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce73282e1345f1f9d31efb382f4f1bc64d3a5eb14fd1931da401b804a2698f5c\": not found" May 8 00:27:29.186141 kubelet[2717]: E0508 00:27:29.186124 2717 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce73282e1345f1f9d31efb382f4f1bc64d3a5eb14fd1931da401b804a2698f5c\": not found" containerID="ce73282e1345f1f9d31efb382f4f1bc64d3a5eb14fd1931da401b804a2698f5c" May 8 00:27:29.186187 kubelet[2717]: I0508 00:27:29.186142 2717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ce73282e1345f1f9d31efb382f4f1bc64d3a5eb14fd1931da401b804a2698f5c"} err="failed to get container status \"ce73282e1345f1f9d31efb382f4f1bc64d3a5eb14fd1931da401b804a2698f5c\": rpc error: code = NotFound desc = an error occurred when try to find container \"ce73282e1345f1f9d31efb382f4f1bc64d3a5eb14fd1931da401b804a2698f5c\": not found" May 8 00:27:29.186187 kubelet[2717]: I0508 00:27:29.186154 2717 scope.go:117] "RemoveContainer" containerID="4f4d6118ae0f42f91de00c8d5de76407f3035d7bd644e0a00d0652bb5b22fd1a" May 8 00:27:29.186335 containerd[1543]: time="2025-05-08T00:27:29.186282466Z" level=error msg="ContainerStatus for \"4f4d6118ae0f42f91de00c8d5de76407f3035d7bd644e0a00d0652bb5b22fd1a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4f4d6118ae0f42f91de00c8d5de76407f3035d7bd644e0a00d0652bb5b22fd1a\": not found" May 8 00:27:29.186422 kubelet[2717]: E0508 00:27:29.186353 2717 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4f4d6118ae0f42f91de00c8d5de76407f3035d7bd644e0a00d0652bb5b22fd1a\": not found" containerID="4f4d6118ae0f42f91de00c8d5de76407f3035d7bd644e0a00d0652bb5b22fd1a" May 8 00:27:29.186422 kubelet[2717]: I0508 00:27:29.186369 2717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4f4d6118ae0f42f91de00c8d5de76407f3035d7bd644e0a00d0652bb5b22fd1a"} err="failed to get container status \"4f4d6118ae0f42f91de00c8d5de76407f3035d7bd644e0a00d0652bb5b22fd1a\": rpc error: code = NotFound desc = an error occurred when try to find container \"4f4d6118ae0f42f91de00c8d5de76407f3035d7bd644e0a00d0652bb5b22fd1a\": not found" May 8 00:27:29.186422 kubelet[2717]: I0508 00:27:29.186380 2717 scope.go:117] "RemoveContainer" containerID="0ef460b63260dba5e8ae9f22b1f6747fd38e734d981fad877e8302b4f51280ed" May 8 00:27:29.186884 containerd[1543]: time="2025-05-08T00:27:29.186632658Z" level=error msg="ContainerStatus for \"0ef460b63260dba5e8ae9f22b1f6747fd38e734d981fad877e8302b4f51280ed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ef460b63260dba5e8ae9f22b1f6747fd38e734d981fad877e8302b4f51280ed\": not found" May 8 00:27:29.186938 kubelet[2717]: E0508 00:27:29.186768 2717 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ef460b63260dba5e8ae9f22b1f6747fd38e734d981fad877e8302b4f51280ed\": not found" containerID="0ef460b63260dba5e8ae9f22b1f6747fd38e734d981fad877e8302b4f51280ed" May 8 00:27:29.186938 kubelet[2717]: I0508 00:27:29.186798 2717 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"containerd","ID":"0ef460b63260dba5e8ae9f22b1f6747fd38e734d981fad877e8302b4f51280ed"} err="failed to get container status \"0ef460b63260dba5e8ae9f22b1f6747fd38e734d981fad877e8302b4f51280ed\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ef460b63260dba5e8ae9f22b1f6747fd38e734d981fad877e8302b4f51280ed\": not found" May 8 00:27:29.186938 kubelet[2717]: I0508 00:27:29.186816 2717 scope.go:117] "RemoveContainer" containerID="c47d8f3024ad48e04dd13a2c90e8b4aeb96343b43098e03424117fc744b6777e" May 8 00:27:29.187035 containerd[1543]: time="2025-05-08T00:27:29.186952170Z" level=error msg="ContainerStatus for \"c47d8f3024ad48e04dd13a2c90e8b4aeb96343b43098e03424117fc744b6777e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c47d8f3024ad48e04dd13a2c90e8b4aeb96343b43098e03424117fc744b6777e\": not found" May 8 00:27:29.187068 kubelet[2717]: E0508 00:27:29.187043 2717 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c47d8f3024ad48e04dd13a2c90e8b4aeb96343b43098e03424117fc744b6777e\": not found" containerID="c47d8f3024ad48e04dd13a2c90e8b4aeb96343b43098e03424117fc744b6777e" May 8 00:27:29.187068 kubelet[2717]: I0508 00:27:29.187061 2717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c47d8f3024ad48e04dd13a2c90e8b4aeb96343b43098e03424117fc744b6777e"} err="failed to get container status \"c47d8f3024ad48e04dd13a2c90e8b4aeb96343b43098e03424117fc744b6777e\": rpc error: code = NotFound desc = an error occurred when try to find container \"c47d8f3024ad48e04dd13a2c90e8b4aeb96343b43098e03424117fc744b6777e\": not found" May 8 00:27:29.597638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d172fd132cb5962ebedf60f87459eab626d156c9f7e4199e154583ca5c8945f-rootfs.mount: Deactivated successfully. May 8 00:27:29.597786 systemd[1]: var-lib-kubelet-pods-d72e40fb\x2de48e\x2d47d9\x2d91f3\x2d3f01f9103004-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds8r9t.mount: Deactivated successfully. May 8 00:27:29.597875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c2aa04892051db1ea7a0d9b7806d65b5ce7347b9856f6c6e02054389e51936c-rootfs.mount: Deactivated successfully. May 8 00:27:29.597951 systemd[1]: var-lib-kubelet-pods-35fe04ae\x2dd67f\x2d4996\x2db89c\x2d78a10e1c1691-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6twc9.mount: Deactivated successfully. May 8 00:27:29.598046 systemd[1]: var-lib-kubelet-pods-35fe04ae\x2dd67f\x2d4996\x2db89c\x2d78a10e1c1691-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 8 00:27:29.598137 systemd[1]: var-lib-kubelet-pods-35fe04ae\x2dd67f\x2d4996\x2db89c\x2d78a10e1c1691-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 8 00:27:29.970130 kubelet[2717]: E0508 00:27:29.969658 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:27:30.536004 sshd[4345]: pam_unix(sshd:session): session closed for user core May 8 00:27:30.549577 systemd[1]: Started sshd@22-10.0.0.83:22-10.0.0.1:41452.service - OpenSSH per-connection server daemon (10.0.0.1:41452). May 8 00:27:30.550054 systemd[1]: sshd@21-10.0.0.83:22-10.0.0.1:41450.service: Deactivated successfully. 
May 8 00:27:30.553072 systemd[1]: session-22.scope: Deactivated successfully. May 8 00:27:30.555068 systemd-logind[1524]: Session 22 logged out. Waiting for processes to exit. May 8 00:27:30.556349 systemd-logind[1524]: Removed session 22. May 8 00:27:30.580183 sshd[4510]: Accepted publickey for core from 10.0.0.1 port 41452 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:27:30.581508 sshd[4510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:27:30.586830 systemd-logind[1524]: New session 23 of user core. May 8 00:27:30.593251 systemd[1]: Started session-23.scope - Session 23 of User core. May 8 00:27:30.973305 kubelet[2717]: I0508 00:27:30.972435 2717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35fe04ae-d67f-4996-b89c-78a10e1c1691" path="/var/lib/kubelet/pods/35fe04ae-d67f-4996-b89c-78a10e1c1691/volumes" May 8 00:27:30.973305 kubelet[2717]: I0508 00:27:30.972948 2717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d72e40fb-e48e-47d9-91f3-3f01f9103004" path="/var/lib/kubelet/pods/d72e40fb-e48e-47d9-91f3-3f01f9103004/volumes" May 8 00:27:31.015539 kubelet[2717]: E0508 00:27:31.015482 2717 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 00:27:31.201825 sshd[4510]: pam_unix(sshd:session): session closed for user core May 8 00:27:31.209241 systemd[1]: Started sshd@23-10.0.0.83:22-10.0.0.1:41468.service - OpenSSH per-connection server daemon (10.0.0.1:41468). May 8 00:27:31.221362 systemd[1]: sshd@22-10.0.0.83:22-10.0.0.1:41452.service: Deactivated successfully. May 8 00:27:31.228796 kubelet[2717]: I0508 00:27:31.228700 2717 topology_manager.go:215] "Topology Admit Handler" podUID="b832315b-46d6-4143-b408-18687497d69b" podNamespace="kube-system" podName="cilium-prtz4" May 8 00:27:31.229197 systemd[1]: session-23.scope: Deactivated successfully. 
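The \x2d sequences in the var-lib-kubelet-pods-... mount units a few entries back are systemd's unit-name encoding for filesystem paths: "/" separators become "-", and bytes outside the safe set (including a literal "-") become \xNN. A rough sketch of that escaping, assuming the rules of systemd-escape --path and ignoring edge cases such as a leading dot:

```go
package main

import (
	"fmt"
	"strings"
)

// escapePath roughly mimics systemd's path escaping: "/" becomes "-",
// and unsafe bytes -- including literal "-" -- become \xNN, which is why
// pod UIDs show up as 35fe04ae\x2dd67f\x2d4996\x2d... in mount unit names.
func escapePath(path string) string {
	p := strings.Trim(path, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == ':', c == '.' && i > 0:
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Prints var-lib-kubelet-pods-35fe04ae\x2dd67f\x2d4996\x2db89c\x2d78a10e1c1691-volumes
	fmt.Println(escapePath(
		"/var/lib/kubelet/pods/35fe04ae-d67f-4996-b89c-78a10e1c1691/volumes"))
}
```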
May 8 00:27:31.230978 kubelet[2717]: E0508 00:27:31.230953 2717 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="35fe04ae-d67f-4996-b89c-78a10e1c1691" containerName="mount-cgroup" May 8 00:27:31.231083 kubelet[2717]: E0508 00:27:31.231072 2717 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="35fe04ae-d67f-4996-b89c-78a10e1c1691" containerName="mount-bpf-fs" May 8 00:27:31.231168 kubelet[2717]: E0508 00:27:31.231155 2717 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="35fe04ae-d67f-4996-b89c-78a10e1c1691" containerName="clean-cilium-state" May 8 00:27:31.232016 kubelet[2717]: E0508 00:27:31.231306 2717 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="35fe04ae-d67f-4996-b89c-78a10e1c1691" containerName="cilium-agent" May 8 00:27:31.232016 kubelet[2717]: E0508 00:27:31.231321 2717 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d72e40fb-e48e-47d9-91f3-3f01f9103004" containerName="cilium-operator" May 8 00:27:31.232016 kubelet[2717]: E0508 00:27:31.231327 2717 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="35fe04ae-d67f-4996-b89c-78a10e1c1691" containerName="apply-sysctl-overwrites" May 8 00:27:31.232016 kubelet[2717]: I0508 00:27:31.231354 2717 memory_manager.go:354] "RemoveStaleState removing state" podUID="35fe04ae-d67f-4996-b89c-78a10e1c1691" containerName="cilium-agent" May 8 00:27:31.232016 kubelet[2717]: I0508 00:27:31.231362 2717 memory_manager.go:354] "RemoveStaleState removing state" podUID="d72e40fb-e48e-47d9-91f3-3f01f9103004" containerName="cilium-operator" May 8 00:27:31.233154 systemd-logind[1524]: Session 23 logged out. Waiting for processes to exit. May 8 00:27:31.245835 systemd-logind[1524]: Removed session 23. May 8 00:27:31.274574 sshd[4525]: Accepted publickey for core from 10.0.0.1 port 41468 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:27:31.275943 sshd[4525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:27:31.279675 systemd-logind[1524]: New session 24 of user core. May 8 00:27:31.294306 systemd[1]: Started session-24.scope - Session 24 of User core. May 8 00:27:31.344442 sshd[4525]: pam_unix(sshd:session): session closed for user core May 8 00:27:31.359223 systemd[1]: Started sshd@24-10.0.0.83:22-10.0.0.1:41480.service - OpenSSH per-connection server daemon (10.0.0.1:41480). May 8 00:27:31.359611 systemd[1]: sshd@23-10.0.0.83:22-10.0.0.1:41468.service: Deactivated successfully. May 8 00:27:31.362219 systemd[1]: session-24.scope: Deactivated successfully. May 8 00:27:31.362402 systemd-logind[1524]: Session 24 logged out. Waiting for processes to exit. May 8 00:27:31.365229 systemd-logind[1524]: Removed session 24. 
May 8 00:27:31.371403 kubelet[2717]: I0508 00:27:31.371045 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b832315b-46d6-4143-b408-18687497d69b-bpf-maps\") pod \"cilium-prtz4\" (UID: \"b832315b-46d6-4143-b408-18687497d69b\") " pod="kube-system/cilium-prtz4" May 8 00:27:31.371403 kubelet[2717]: I0508 00:27:31.371086 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b832315b-46d6-4143-b408-18687497d69b-etc-cni-netd\") pod \"cilium-prtz4\" (UID: \"b832315b-46d6-4143-b408-18687497d69b\") " pod="kube-system/cilium-prtz4" May 8 00:27:31.371403 kubelet[2717]: I0508 00:27:31.371105 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b832315b-46d6-4143-b408-18687497d69b-cilium-cgroup\") pod \"cilium-prtz4\" (UID: \"b832315b-46d6-4143-b408-18687497d69b\") " pod="kube-system/cilium-prtz4" May 8 00:27:31.371403 kubelet[2717]: I0508 00:27:31.371119 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b832315b-46d6-4143-b408-18687497d69b-xtables-lock\") pod \"cilium-prtz4\" (UID: \"b832315b-46d6-4143-b408-18687497d69b\") " pod="kube-system/cilium-prtz4" May 8 00:27:31.371403 kubelet[2717]: I0508 00:27:31.371135 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b832315b-46d6-4143-b408-18687497d69b-clustermesh-secrets\") pod \"cilium-prtz4\" (UID: \"b832315b-46d6-4143-b408-18687497d69b\") " pod="kube-system/cilium-prtz4" May 8 00:27:31.371403 kubelet[2717]: I0508 00:27:31.371154 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b832315b-46d6-4143-b408-18687497d69b-cilium-ipsec-secrets\") pod \"cilium-prtz4\" (UID: \"b832315b-46d6-4143-b408-18687497d69b\") " pod="kube-system/cilium-prtz4" May 8 00:27:31.371587 kubelet[2717]: I0508 00:27:31.371172 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnkhm\" (UniqueName: \"kubernetes.io/projected/b832315b-46d6-4143-b408-18687497d69b-kube-api-access-pnkhm\") pod \"cilium-prtz4\" (UID: \"b832315b-46d6-4143-b408-18687497d69b\") " pod="kube-system/cilium-prtz4" May 8 00:27:31.371587 kubelet[2717]: I0508 00:27:31.371189 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b832315b-46d6-4143-b408-18687497d69b-host-proc-sys-net\") pod \"cilium-prtz4\" (UID: \"b832315b-46d6-4143-b408-18687497d69b\") " pod="kube-system/cilium-prtz4" May 8 00:27:31.371587 kubelet[2717]: I0508 00:27:31.371210 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b832315b-46d6-4143-b408-18687497d69b-hostproc\") pod \"cilium-prtz4\" (UID: \"b832315b-46d6-4143-b408-18687497d69b\") " pod="kube-system/cilium-prtz4" May 8 00:27:31.371587 kubelet[2717]: I0508 00:27:31.371233 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/b832315b-46d6-4143-b408-18687497d69b-lib-modules\") pod \"cilium-prtz4\" (UID: \"b832315b-46d6-4143-b408-18687497d69b\") " pod="kube-system/cilium-prtz4" May 8 00:27:31.371587 kubelet[2717]: I0508 00:27:31.371250 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b832315b-46d6-4143-b408-18687497d69b-cni-path\") pod \"cilium-prtz4\" (UID: \"b832315b-46d6-4143-b408-18687497d69b\") " pod="kube-system/cilium-prtz4" May 8 00:27:31.371587 kubelet[2717]: I0508 00:27:31.371265 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b832315b-46d6-4143-b408-18687497d69b-cilium-run\") pod \"cilium-prtz4\" (UID: \"b832315b-46d6-4143-b408-18687497d69b\") " pod="kube-system/cilium-prtz4" May 8 00:27:31.371706 kubelet[2717]: I0508 00:27:31.371281 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b832315b-46d6-4143-b408-18687497d69b-cilium-config-path\") pod \"cilium-prtz4\" (UID: \"b832315b-46d6-4143-b408-18687497d69b\") " pod="kube-system/cilium-prtz4" May 8 00:27:31.371706 kubelet[2717]: I0508 00:27:31.371295 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b832315b-46d6-4143-b408-18687497d69b-host-proc-sys-kernel\") pod \"cilium-prtz4\" (UID: \"b832315b-46d6-4143-b408-18687497d69b\") " pod="kube-system/cilium-prtz4" May 8 00:27:31.371706 kubelet[2717]: I0508 00:27:31.371309 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b832315b-46d6-4143-b408-18687497d69b-hubble-tls\") pod \"cilium-prtz4\" (UID: \"b832315b-46d6-4143-b408-18687497d69b\") " pod="kube-system/cilium-prtz4" May 8 00:27:31.391455 sshd[4534]: Accepted publickey for core from 10.0.0.1 port 41480 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:27:31.392684 sshd[4534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:27:31.396748 systemd-logind[1524]: New session 25 of user core. May 8 00:27:31.405202 systemd[1]: Started session-25.scope - Session 25 of User core. May 8 00:27:31.540737 kubelet[2717]: E0508 00:27:31.540688 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:27:31.541530 containerd[1543]: time="2025-05-08T00:27:31.541480351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-prtz4,Uid:b832315b-46d6-4143-b408-18687497d69b,Namespace:kube-system,Attempt:0,}" May 8 00:27:31.560352 containerd[1543]: time="2025-05-08T00:27:31.560280264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:27:31.560352 containerd[1543]: time="2025-05-08T00:27:31.560326863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:27:31.560352 containerd[1543]: time="2025-05-08T00:27:31.560337183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:27:31.560489 containerd[1543]: time="2025-05-08T00:27:31.560412381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:27:31.598066 containerd[1543]: time="2025-05-08T00:27:31.598028326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-prtz4,Uid:b832315b-46d6-4143-b408-18687497d69b,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf6f0b8df84a2dce7d57e586d87218fb41d960c04ca9b207aa949ee9daa8ce5e\"" May 8 00:27:31.598644 kubelet[2717]: E0508 00:27:31.598622 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:27:31.601295 containerd[1543]: time="2025-05-08T00:27:31.601263616Z" level=info msg="CreateContainer within sandbox \"cf6f0b8df84a2dce7d57e586d87218fb41d960c04ca9b207aa949ee9daa8ce5e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 00:27:31.609443 containerd[1543]: time="2025-05-08T00:27:31.609349121Z" level=info msg="CreateContainer within sandbox \"cf6f0b8df84a2dce7d57e586d87218fb41d960c04ca9b207aa949ee9daa8ce5e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a3e9f0ef7f264cdab10847e638be909e9a92e2119625598756661efcca1880fb\"" May 8 00:27:31.610003 containerd[1543]: time="2025-05-08T00:27:31.609956948Z" level=info msg="StartContainer for \"a3e9f0ef7f264cdab10847e638be909e9a92e2119625598756661efcca1880fb\"" May 8 00:27:31.651058 containerd[1543]: time="2025-05-08T00:27:31.651005459Z" level=info msg="StartContainer for \"a3e9f0ef7f264cdab10847e638be909e9a92e2119625598756661efcca1880fb\" returns successfully" May 8 00:27:31.683827 containerd[1543]: time="2025-05-08T00:27:31.683765509Z" level=info msg="shim disconnected" id=a3e9f0ef7f264cdab10847e638be909e9a92e2119625598756661efcca1880fb namespace=k8s.io May 8 00:27:31.683827 containerd[1543]: time="2025-05-08T00:27:31.683818148Z" level=warning msg="cleaning up after shim disconnected" id=a3e9f0ef7f264cdab10847e638be909e9a92e2119625598756661efcca1880fb namespace=k8s.io May 8 00:27:31.683827 containerd[1543]: time="2025-05-08T00:27:31.683826188Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:27:32.159373 kubelet[2717]: E0508 00:27:32.159334 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:27:32.162252 containerd[1543]: time="2025-05-08T00:27:32.162213605Z" level=info msg="CreateContainer within sandbox \"cf6f0b8df84a2dce7d57e586d87218fb41d960c04ca9b207aa949ee9daa8ce5e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 00:27:32.172432 containerd[1543]: time="2025-05-08T00:27:32.172307840Z" level=info msg="CreateContainer within sandbox \"cf6f0b8df84a2dce7d57e586d87218fb41d960c04ca9b207aa949ee9daa8ce5e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4bd8d6c72146f94d713fda1aa19b8546db9ce3bda8ba3ce6966dacc11fcb58e0\"" May 8 00:27:32.172973 containerd[1543]: time="2025-05-08T00:27:32.172941587Z" level=info msg="StartContainer for \"4bd8d6c72146f94d713fda1aa19b8546db9ce3bda8ba3ce6966dacc11fcb58e0\"" May 8 00:27:32.215285 containerd[1543]: time="2025-05-08T00:27:32.215245249Z" level=info msg="StartContainer for 
\"4bd8d6c72146f94d713fda1aa19b8546db9ce3bda8ba3ce6966dacc11fcb58e0\" returns successfully" May 8 00:27:32.244248 containerd[1543]: time="2025-05-08T00:27:32.244095744Z" level=info msg="shim disconnected" id=4bd8d6c72146f94d713fda1aa19b8546db9ce3bda8ba3ce6966dacc11fcb58e0 namespace=k8s.io May 8 00:27:32.244248 containerd[1543]: time="2025-05-08T00:27:32.244144543Z" level=warning msg="cleaning up after shim disconnected" id=4bd8d6c72146f94d713fda1aa19b8546db9ce3bda8ba3ce6966dacc11fcb58e0 namespace=k8s.io May 8 00:27:32.244248 containerd[1543]: time="2025-05-08T00:27:32.244152423Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:27:32.254977 containerd[1543]: time="2025-05-08T00:27:32.253963144Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:27:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 00:27:32.953053 kubelet[2717]: I0508 00:27:32.953007 2717 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-08T00:27:32Z","lastTransitionTime":"2025-05-08T00:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 8 00:27:33.162616 kubelet[2717]: E0508 00:27:33.162567 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:27:33.164444 containerd[1543]: time="2025-05-08T00:27:33.164328260Z" level=info msg="CreateContainer within sandbox \"cf6f0b8df84a2dce7d57e586d87218fb41d960c04ca9b207aa949ee9daa8ce5e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 00:27:33.178672 containerd[1543]: time="2025-05-08T00:27:33.178620470Z" level=info msg="CreateContainer within sandbox \"cf6f0b8df84a2dce7d57e586d87218fb41d960c04ca9b207aa949ee9daa8ce5e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"74cbcaa80b7b32918b5e6da1746428bfab3957d2f17b2a1260a8be9369105c7d\"" May 8 00:27:33.182024 containerd[1543]: time="2025-05-08T00:27:33.180678111Z" level=info msg="StartContainer for \"74cbcaa80b7b32918b5e6da1746428bfab3957d2f17b2a1260a8be9369105c7d\"" May 8 00:27:33.228113 containerd[1543]: time="2025-05-08T00:27:33.227681340Z" level=info msg="StartContainer for \"74cbcaa80b7b32918b5e6da1746428bfab3957d2f17b2a1260a8be9369105c7d\" returns successfully" May 8 00:27:33.247642 containerd[1543]: time="2025-05-08T00:27:33.247586443Z" level=info msg="shim disconnected" id=74cbcaa80b7b32918b5e6da1746428bfab3957d2f17b2a1260a8be9369105c7d namespace=k8s.io May 8 00:27:33.247642 containerd[1543]: time="2025-05-08T00:27:33.247634722Z" level=warning msg="cleaning up after shim disconnected" id=74cbcaa80b7b32918b5e6da1746428bfab3957d2f17b2a1260a8be9369105c7d namespace=k8s.io May 8 00:27:33.247642 containerd[1543]: time="2025-05-08T00:27:33.247643362Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:27:33.475932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74cbcaa80b7b32918b5e6da1746428bfab3957d2f17b2a1260a8be9369105c7d-rootfs.mount: Deactivated successfully. 
May 8 00:27:34.166687 kubelet[2717]: E0508 00:27:34.166513 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:27:34.177427 containerd[1543]: time="2025-05-08T00:27:34.177370175Z" level=info msg="CreateContainer within sandbox \"cf6f0b8df84a2dce7d57e586d87218fb41d960c04ca9b207aa949ee9daa8ce5e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 00:27:34.189852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2410711666.mount: Deactivated successfully. May 8 00:27:34.190648 containerd[1543]: time="2025-05-08T00:27:34.190607862Z" level=info msg="CreateContainer within sandbox \"cf6f0b8df84a2dce7d57e586d87218fb41d960c04ca9b207aa949ee9daa8ce5e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f94989f25174ff8f5d09972442acf22c738f30b3c5ea699fafef0f4b0e8ed812\"" May 8 00:27:34.191694 containerd[1543]: time="2025-05-08T00:27:34.191074453Z" level=info msg="StartContainer for \"f94989f25174ff8f5d09972442acf22c738f30b3c5ea699fafef0f4b0e8ed812\"" May 8 00:27:34.240920 containerd[1543]: time="2025-05-08T00:27:34.240865095Z" level=info msg="StartContainer for \"f94989f25174ff8f5d09972442acf22c738f30b3c5ea699fafef0f4b0e8ed812\" returns successfully" May 8 00:27:34.256865 containerd[1543]: time="2025-05-08T00:27:34.256818733Z" level=info msg="shim disconnected" id=f94989f25174ff8f5d09972442acf22c738f30b3c5ea699fafef0f4b0e8ed812 namespace=k8s.io May 8 00:27:34.256865 containerd[1543]: time="2025-05-08T00:27:34.256862892Z" level=warning msg="cleaning up after shim disconnected" id=f94989f25174ff8f5d09972442acf22c738f30b3c5ea699fafef0f4b0e8ed812 namespace=k8s.io May 8 00:27:34.256865 containerd[1543]: time="2025-05-08T00:27:34.256873092Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:27:34.476069 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f94989f25174ff8f5d09972442acf22c738f30b3c5ea699fafef0f4b0e8ed812-rootfs.mount: Deactivated successfully. 
May 8 00:27:34.969485 kubelet[2717]: E0508 00:27:34.969449 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:27:35.170993 kubelet[2717]: E0508 00:27:35.170949 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:27:35.173841 containerd[1543]: time="2025-05-08T00:27:35.173802964Z" level=info msg="CreateContainer within sandbox \"cf6f0b8df84a2dce7d57e586d87218fb41d960c04ca9b207aa949ee9daa8ce5e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 00:27:35.184530 containerd[1543]: time="2025-05-08T00:27:35.184481469Z" level=info msg="CreateContainer within sandbox \"cf6f0b8df84a2dce7d57e586d87218fb41d960c04ca9b207aa949ee9daa8ce5e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1d5a5b986c06507bada6067d2b00e8e095f789c076d105e912692e57f4541522\"" May 8 00:27:35.185253 containerd[1543]: time="2025-05-08T00:27:35.185194618Z" level=info msg="StartContainer for \"1d5a5b986c06507bada6067d2b00e8e095f789c076d105e912692e57f4541522\"" May 8 00:27:35.242607 containerd[1543]: time="2025-05-08T00:27:35.242389440Z" level=info msg="StartContainer for \"1d5a5b986c06507bada6067d2b00e8e095f789c076d105e912692e57f4541522\" returns successfully" May 8 00:27:35.496013 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 8 00:27:36.175482 kubelet[2717]: E0508 00:27:36.175440 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:27:36.190872 kubelet[2717]: I0508 00:27:36.190752 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-prtz4" podStartSLOduration=5.190737404 podStartE2EDuration="5.190737404s" podCreationTimestamp="2025-05-08 00:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:27:36.190113533 +0000 UTC m=+75.315965498" watchObservedRunningTime="2025-05-08 00:27:36.190737404 +0000 UTC m=+75.316589369" May 8 00:27:37.542587 kubelet[2717]: E0508 00:27:37.542497 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:27:37.775362 systemd[1]: run-containerd-runc-k8s.io-1d5a5b986c06507bada6067d2b00e8e095f789c076d105e912692e57f4541522-runc.NFLb4B.mount: Deactivated successfully. 
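The recurring dns.go "Nameserver limits exceeded" message reflects Kubernetes' cap of three nameservers per resolv.conf (a limit inherited from glibc's MAXNS): the kubelet keeps the first three and logs the applied line, exactly as seen above. A standalone sketch of that truncation policy, assuming /etc/resolv.conf as input; this mimics the behavior and is not kubelet code:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Kubernetes caps DNS config at three nameservers (glibc's MAXNS); extras
// are dropped and the applied line is logged, as in the dns.go entries above.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded; omitted: %v\n", servers[maxNameservers:])
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}
```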
May 8 00:27:38.288805 systemd-networkd[1232]: lxc_health: Link UP May 8 00:27:38.298118 systemd-networkd[1232]: lxc_health: Gained carrier May 8 00:27:39.546303 kubelet[2717]: E0508 00:27:39.545583 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:27:39.639162 systemd-networkd[1232]: lxc_health: Gained IPv6LL May 8 00:27:40.181660 kubelet[2717]: E0508 00:27:40.181450 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:27:46.276371 sshd[4534]: pam_unix(sshd:session): session closed for user core May 8 00:27:46.280108 systemd[1]: sshd@24-10.0.0.83:22-10.0.0.1:41480.service: Deactivated successfully. May 8 00:27:46.282888 systemd[1]: session-25.scope: Deactivated successfully. May 8 00:27:46.283074 systemd-logind[1524]: Session 25 logged out. Waiting for processes to exit. May 8 00:27:46.285239 systemd-logind[1524]: Removed session 25.