May 14 04:52:56.786548 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 14 04:52:56.786568 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed May 14 03:42:50 -00 2025
May 14 04:52:56.786577 kernel: KASLR enabled
May 14 04:52:56.786583 kernel: efi: EFI v2.7 by EDK II
May 14 04:52:56.786589 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
May 14 04:52:56.786594 kernel: random: crng init done
May 14 04:52:56.786600 kernel: secureboot: Secure boot disabled
May 14 04:52:56.786606 kernel: ACPI: Early table checksum verification disabled
May 14 04:52:56.786612 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
May 14 04:52:56.786619 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 14 04:52:56.786625 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 14 04:52:56.786631 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 04:52:56.786636 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 14 04:52:56.786642 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 04:52:56.786649 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 04:52:56.786656 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 04:52:56.786662 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 04:52:56.786668 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 14 04:52:56.786674 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 14 04:52:56.786680 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 14 04:52:56.786686 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 14 04:52:56.786692 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 14 04:52:56.786698 kernel: NODE_DATA(0) allocated [mem 0xdc965dc0-0xdc96cfff]
May 14 04:52:56.786704 kernel: Zone ranges:
May 14 04:52:56.786709 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 14 04:52:56.786716 kernel: DMA32 empty
May 14 04:52:56.786722 kernel: Normal empty
May 14 04:52:56.786728 kernel: Device empty
May 14 04:52:56.786734 kernel: Movable zone start for each node
May 14 04:52:56.786740 kernel: Early memory node ranges
May 14 04:52:56.786746 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
May 14 04:52:56.786752 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
May 14 04:52:56.786758 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
May 14 04:52:56.786764 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
May 14 04:52:56.786770 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
May 14 04:52:56.786776 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
May 14 04:52:56.786782 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
May 14 04:52:56.786789 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
May 14 04:52:56.786795 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
May 14 04:52:56.786801 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 14 04:52:56.786810 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 14 04:52:56.786816 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 14 04:52:56.786823 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 14 04:52:56.787206 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 14 04:52:56.787218 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 14 04:52:56.787225 kernel: psci: probing for conduit method from ACPI.
May 14 04:52:56.787231 kernel: psci: PSCIv1.1 detected in firmware.
May 14 04:52:56.787237 kernel: psci: Using standard PSCI v0.2 function IDs
May 14 04:52:56.787243 kernel: psci: Trusted OS migration not required
May 14 04:52:56.787250 kernel: psci: SMC Calling Convention v1.1
May 14 04:52:56.787257 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 14 04:52:56.787263 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 14 04:52:56.787270 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 14 04:52:56.787281 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 14 04:52:56.787288 kernel: Detected PIPT I-cache on CPU0
May 14 04:52:56.787294 kernel: CPU features: detected: GIC system register CPU interface
May 14 04:52:56.787301 kernel: CPU features: detected: Spectre-v4
May 14 04:52:56.787307 kernel: CPU features: detected: Spectre-BHB
May 14 04:52:56.787314 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 14 04:52:56.787320 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 14 04:52:56.787326 kernel: CPU features: detected: ARM erratum 1418040
May 14 04:52:56.787371 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 14 04:52:56.787378 kernel: alternatives: applying boot alternatives
May 14 04:52:56.787386 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=121c9a3653fd599e6c6b931638a08771d538e77e97aff08e06f2cb7bca392d8e
May 14 04:52:56.787395 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 04:52:56.787402 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 04:52:56.787408 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 04:52:56.787415 kernel: Fallback order for Node 0: 0
May 14 04:52:56.787421 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
May 14 04:52:56.787427 kernel: Policy zone: DMA
May 14 04:52:56.787434 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 04:52:56.787441 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
May 14 04:52:56.787447 kernel: software IO TLB: area num 4.
May 14 04:52:56.787454 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
May 14 04:52:56.787460 kernel: software IO TLB: mapped [mem 0x00000000d8c00000-0x00000000d9000000] (4MB)
May 14 04:52:56.787467 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 14 04:52:56.787474 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 04:52:56.787481 kernel: rcu: RCU event tracing is enabled.
May 14 04:52:56.787488 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 14 04:52:56.787495 kernel: Trampoline variant of Tasks RCU enabled.
May 14 04:52:56.787501 kernel: Tracing variant of Tasks RCU enabled.
May 14 04:52:56.787508 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 04:52:56.787514 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 14 04:52:56.787521 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 04:52:56.787527 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 04:52:56.787534 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 14 04:52:56.787540 kernel: GICv3: 256 SPIs implemented
May 14 04:52:56.787548 kernel: GICv3: 0 Extended SPIs implemented
May 14 04:52:56.787554 kernel: Root IRQ handler: gic_handle_irq
May 14 04:52:56.787561 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 14 04:52:56.787567 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
May 14 04:52:56.787574 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 14 04:52:56.787580 kernel: ITS [mem 0x08080000-0x0809ffff]
May 14 04:52:56.787587 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400e0000 (indirect, esz 8, psz 64K, shr 1)
May 14 04:52:56.787594 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400f0000 (flat, esz 8, psz 64K, shr 1)
May 14 04:52:56.787600 kernel: GICv3: using LPI property table @0x0000000040100000
May 14 04:52:56.787607 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000
May 14 04:52:56.787613 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 04:52:56.787620 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 04:52:56.787627 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 14 04:52:56.787634 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 14 04:52:56.787641 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 14 04:52:56.787647 kernel: arm-pv: using stolen time PV
May 14 04:52:56.787654 kernel: Console: colour dummy device 80x25
May 14 04:52:56.787660 kernel: ACPI: Core revision 20240827
May 14 04:52:56.787667 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 14 04:52:56.787674 kernel: pid_max: default: 32768 minimum: 301
May 14 04:52:56.787681 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 14 04:52:56.787689 kernel: landlock: Up and running.
May 14 04:52:56.787695 kernel: SELinux: Initializing.
May 14 04:52:56.787702 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 04:52:56.787709 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 04:52:56.787715 kernel: rcu: Hierarchical SRCU implementation.
May 14 04:52:56.787722 kernel: rcu: Max phase no-delay instances is 400.
May 14 04:52:56.787729 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 14 04:52:56.787736 kernel: Remapping and enabling EFI services.
May 14 04:52:56.787743 kernel: smp: Bringing up secondary CPUs ...
May 14 04:52:56.787750 kernel: Detected PIPT I-cache on CPU1
May 14 04:52:56.787762 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 14 04:52:56.787769 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000
May 14 04:52:56.787777 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 04:52:56.787784 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 14 04:52:56.787791 kernel: Detected PIPT I-cache on CPU2
May 14 04:52:56.787798 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 14 04:52:56.787805 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000
May 14 04:52:56.787814 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 04:52:56.787821 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 14 04:52:56.787828 kernel: Detected PIPT I-cache on CPU3
May 14 04:52:56.787841 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 14 04:52:56.787848 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000
May 14 04:52:56.787855 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 04:52:56.787862 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 14 04:52:56.787869 kernel: smp: Brought up 1 node, 4 CPUs
May 14 04:52:56.787876 kernel: SMP: Total of 4 processors activated.
May 14 04:52:56.787883 kernel: CPU: All CPU(s) started at EL1
May 14 04:52:56.787893 kernel: CPU features: detected: 32-bit EL0 Support
May 14 04:52:56.787900 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 14 04:52:56.787907 kernel: CPU features: detected: Common not Private translations
May 14 04:52:56.787914 kernel: CPU features: detected: CRC32 instructions
May 14 04:52:56.787921 kernel: CPU features: detected: Enhanced Virtualization Traps
May 14 04:52:56.787928 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 14 04:52:56.787935 kernel: CPU features: detected: LSE atomic instructions
May 14 04:52:56.787942 kernel: CPU features: detected: Privileged Access Never
May 14 04:52:56.787949 kernel: CPU features: detected: RAS Extension Support
May 14 04:52:56.787957 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 14 04:52:56.787964 kernel: alternatives: applying system-wide alternatives
May 14 04:52:56.787971 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
May 14 04:52:56.787978 kernel: Memory: 2440984K/2572288K available (11072K kernel code, 2276K rwdata, 8928K rodata, 39424K init, 1034K bss, 125536K reserved, 0K cma-reserved)
May 14 04:52:56.788040 kernel: devtmpfs: initialized
May 14 04:52:56.788047 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 04:52:56.788054 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 14 04:52:56.788061 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 14 04:52:56.788068 kernel: 0 pages in range for non-PLT usage
May 14 04:52:56.788077 kernel: 508544 pages in range for PLT usage
May 14 04:52:56.788084 kernel: pinctrl core: initialized pinctrl subsystem
May 14 04:52:56.788091 kernel: SMBIOS 3.0.0 present.
May 14 04:52:56.788098 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 14 04:52:56.788105 kernel: DMI: Memory slots populated: 1/1
May 14 04:52:56.788112 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 04:52:56.788152 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 14 04:52:56.788191 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 14 04:52:56.788200 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 14 04:52:56.788209 kernel: audit: initializing netlink subsys (disabled)
May 14 04:52:56.788216 kernel: audit: type=2000 audit(0.030:1): state=initialized audit_enabled=0 res=1
May 14 04:52:56.788223 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 04:52:56.788230 kernel: cpuidle: using governor menu
May 14 04:52:56.788237 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 14 04:52:56.788244 kernel: ASID allocator initialised with 32768 entries
May 14 04:52:56.788250 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 04:52:56.788257 kernel: Serial: AMBA PL011 UART driver
May 14 04:52:56.788264 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 04:52:56.788272 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 14 04:52:56.788279 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 14 04:52:56.788286 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 14 04:52:56.788293 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 04:52:56.788300 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 14 04:52:56.788307 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 14 04:52:56.788314 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 14 04:52:56.788321 kernel: ACPI: Added _OSI(Module Device)
May 14 04:52:56.788328 kernel: ACPI: Added _OSI(Processor Device)
May 14 04:52:56.788371 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 04:52:56.788378 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 04:52:56.788385 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 04:52:56.788392 kernel: ACPI: Interpreter enabled
May 14 04:52:56.788399 kernel: ACPI: Using GIC for interrupt routing
May 14 04:52:56.788406 kernel: ACPI: MCFG table detected, 1 entries
May 14 04:52:56.788412 kernel: ACPI: CPU0 has been hot-added
May 14 04:52:56.788419 kernel: ACPI: CPU1 has been hot-added
May 14 04:52:56.788426 kernel: ACPI: CPU2 has been hot-added
May 14 04:52:56.788433 kernel: ACPI: CPU3 has been hot-added
May 14 04:52:56.788441 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 14 04:52:56.788448 kernel: printk: legacy console [ttyAMA0] enabled
May 14 04:52:56.788455 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 04:52:56.788583 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 04:52:56.788646 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 14 04:52:56.788703 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 14 04:52:56.788759 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 14 04:52:56.788818 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 14 04:52:56.788827 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 14 04:52:56.788844 kernel: PCI host bridge to bus 0000:00
May 14 04:52:56.788916 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 14 04:52:56.789390 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 14 04:52:56.789471 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 14 04:52:56.789534 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 04:52:56.789642 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
May 14 04:52:56.789714 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 14 04:52:56.789798 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
May 14 04:52:56.789872 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
May 14 04:52:56.789935 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
May 14 04:52:56.789997 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
May 14 04:52:56.790058 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
May 14 04:52:56.790122 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
May 14 04:52:56.790191 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 14 04:52:56.790256 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 14 04:52:56.790312 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 14 04:52:56.790321 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 14 04:52:56.790328 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 14 04:52:56.790372 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 14 04:52:56.790382 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 14 04:52:56.790390 kernel: iommu: Default domain type: Translated
May 14 04:52:56.790397 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 14 04:52:56.790404 kernel: efivars: Registered efivars operations
May 14 04:52:56.790411 kernel: vgaarb: loaded
May 14 04:52:56.790418 kernel: clocksource: Switched to clocksource arch_sys_counter
May 14 04:52:56.790425 kernel: VFS: Disk quotas dquot_6.6.0
May 14 04:52:56.790432 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 04:52:56.790440 kernel: pnp: PnP ACPI init
May 14 04:52:56.790518 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 14 04:52:56.790529 kernel: pnp: PnP ACPI: found 1 devices
May 14 04:52:56.790536 kernel: NET: Registered PF_INET protocol family
May 14 04:52:56.790543 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 04:52:56.790550 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 04:52:56.790557 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 04:52:56.790565 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 04:52:56.790572 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 04:52:56.790581 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 04:52:56.790588 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 04:52:56.790595 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 04:52:56.790602 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 04:52:56.790609 kernel: PCI: CLS 0 bytes, default 64
May 14 04:52:56.790616 kernel: kvm [1]: HYP mode not available
May 14 04:52:56.790623 kernel: Initialise system trusted keyrings
May 14 04:52:56.790630 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 04:52:56.790637 kernel: Key type asymmetric registered
May 14 04:52:56.790645 kernel: Asymmetric key parser 'x509' registered
May 14 04:52:56.790713 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 14 04:52:56.790722 kernel: io scheduler mq-deadline registered
May 14 04:52:56.790730 kernel: io scheduler kyber registered
May 14 04:52:56.790737 kernel: io scheduler bfq registered
May 14 04:52:56.790744 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 14 04:52:56.790751 kernel: ACPI: button: Power Button [PWRB]
May 14 04:52:56.790759 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 14 04:52:56.790848 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 14 04:52:56.790864 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 04:52:56.790871 kernel: thunder_xcv, ver 1.0
May 14 04:52:56.790879 kernel: thunder_bgx, ver 1.0
May 14 04:52:56.790886 kernel: nicpf, ver 1.0
May 14 04:52:56.790892 kernel: nicvf, ver 1.0
May 14 04:52:56.790969 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 14 04:52:56.791027 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T04:52:56 UTC (1747198376)
May 14 04:52:56.791037 kernel: hid: raw HID events driver (C) Jiri Kosina
May 14 04:52:56.791044 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
May 14 04:52:56.791053 kernel: NET: Registered PF_INET6 protocol family
May 14 04:52:56.791060 kernel: watchdog: NMI not fully supported
May 14 04:52:56.791067 kernel: watchdog: Hard watchdog permanently disabled
May 14 04:52:56.791074 kernel: Segment Routing with IPv6
May 14 04:52:56.791081 kernel: In-situ OAM (IOAM) with IPv6
May 14 04:52:56.791088 kernel: NET: Registered PF_PACKET protocol family
May 14 04:52:56.791095 kernel: Key type dns_resolver registered
May 14 04:52:56.791102 kernel: registered taskstats version 1
May 14 04:52:56.791109 kernel: Loading compiled-in X.509 certificates
May 14 04:52:56.791117 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 9f54d711faad5edc118c062fcbac248335430a87'
May 14 04:52:56.791124 kernel: Demotion targets for Node 0: null
May 14 04:52:56.791131 kernel: Key type .fscrypt registered
May 14 04:52:56.791138 kernel: Key type fscrypt-provisioning registered
May 14 04:52:56.791145 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 04:52:56.791152 kernel: ima: Allocated hash algorithm: sha1
May 14 04:52:56.791160 kernel: ima: No architecture policies found
May 14 04:52:56.791180 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 14 04:52:56.791188 kernel: clk: Disabling unused clocks
May 14 04:52:56.791195 kernel: PM: genpd: Disabling unused power domains
May 14 04:52:56.791203 kernel: Warning: unable to open an initial console.
May 14 04:52:56.791210 kernel: Freeing unused kernel memory: 39424K
May 14 04:52:56.791217 kernel: Run /init as init process
May 14 04:52:56.791224 kernel: with arguments:
May 14 04:52:56.791231 kernel: /init
May 14 04:52:56.791238 kernel: with environment:
May 14 04:52:56.791245 kernel: HOME=/
May 14 04:52:56.791252 kernel: TERM=linux
May 14 04:52:56.791260 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 14 04:52:56.791268 systemd[1]: Successfully made /usr/ read-only.
May 14 04:52:56.791278 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 04:52:56.791286 systemd[1]: Detected virtualization kvm.
May 14 04:52:56.791293 systemd[1]: Detected architecture arm64.
May 14 04:52:56.791300 systemd[1]: Running in initrd.
May 14 04:52:56.791308 systemd[1]: No hostname configured, using default hostname.
May 14 04:52:56.791317 systemd[1]: Hostname set to <localhost>.
May 14 04:52:56.791324 systemd[1]: Initializing machine ID from VM UUID.
May 14 04:52:56.791365 systemd[1]: Queued start job for default target initrd.target.
May 14 04:52:56.791374 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 04:52:56.791382 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 04:52:56.791390 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 04:52:56.791398 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 04:52:56.791405 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 04:52:56.791416 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 04:52:56.791425 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 04:52:56.791432 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 04:52:56.791440 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 04:52:56.791448 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 04:52:56.791455 systemd[1]: Reached target paths.target - Path Units.
May 14 04:52:56.791463 systemd[1]: Reached target slices.target - Slice Units.
May 14 04:52:56.791471 systemd[1]: Reached target swap.target - Swaps.
May 14 04:52:56.791481 systemd[1]: Reached target timers.target - Timer Units.
May 14 04:52:56.791489 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 04:52:56.791497 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 04:52:56.791504 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 04:52:56.791512 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 04:52:56.791519 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 04:52:56.791527 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 04:52:56.791535 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 04:52:56.791543 systemd[1]: Reached target sockets.target - Socket Units.
May 14 04:52:56.791550 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 14 04:52:56.791558 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 04:52:56.791565 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 14 04:52:56.791573 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 14 04:52:56.791580 systemd[1]: Starting systemd-fsck-usr.service...
May 14 04:52:56.791588 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 04:52:56.791596 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 04:52:56.791604 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 04:52:56.791612 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 04:52:56.791620 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 04:52:56.791628 systemd[1]: Finished systemd-fsck-usr.service.
May 14 04:52:56.791636 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 04:52:56.791662 systemd-journald[244]: Collecting audit messages is disabled.
May 14 04:52:56.791681 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 04:52:56.791689 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 04:52:56.791699 systemd-journald[244]: Journal started
May 14 04:52:56.791717 systemd-journald[244]: Runtime Journal (/run/log/journal/b23157ff9d604fe9ae2efa402744d847) is 6M, max 48.5M, 42.4M free.
May 14 04:52:56.779582 systemd-modules-load[245]: Inserted module 'overlay'
May 14 04:52:56.794405 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 04:52:56.794422 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 04:52:56.797517 systemd-modules-load[245]: Inserted module 'br_netfilter'
May 14 04:52:56.798410 kernel: Bridge firewalling registered
May 14 04:52:56.803310 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 04:52:56.804761 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 04:52:56.808934 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 04:52:56.810436 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 04:52:56.820270 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 04:52:56.823299 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 04:52:56.827311 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 04:52:56.828749 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 04:52:56.830263 systemd-tmpfiles[278]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 14 04:52:56.833024 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 14 04:52:56.835306 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 04:52:56.842150 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 04:52:56.852131 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=121c9a3653fd599e6c6b931638a08771d538e77e97aff08e06f2cb7bca392d8e
May 14 04:52:56.870077 systemd-resolved[289]: Positive Trust Anchors:
May 14 04:52:56.870102 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 04:52:56.870136 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 04:52:56.874918 systemd-resolved[289]: Defaulting to hostname 'linux'.
May 14 04:52:56.875954 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 04:52:56.879263 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 04:52:56.933190 kernel: SCSI subsystem initialized
May 14 04:52:56.938187 kernel: Loading iSCSI transport class v2.0-870.
May 14 04:52:56.947207 kernel: iscsi: registered transport (tcp)
May 14 04:52:56.960285 kernel: iscsi: registered transport (qla4xxx)
May 14 04:52:56.960327 kernel: QLogic iSCSI HBA Driver
May 14 04:52:56.979703 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 04:52:56.996236 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 04:52:57.000406 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 04:52:57.049235 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 04:52:57.051303 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 04:52:57.118193 kernel: raid6: neonx8 gen() 10524 MB/s
May 14 04:52:57.135188 kernel: raid6: neonx4 gen() 15760 MB/s
May 14 04:52:57.152187 kernel: raid6: neonx2 gen() 13170 MB/s
May 14 04:52:57.169179 kernel: raid6: neonx1 gen() 10381 MB/s
May 14 04:52:57.186176 kernel: raid6: int64x8 gen() 6881 MB/s
May 14 04:52:57.203178 kernel: raid6: int64x4 gen() 7325 MB/s
May 14 04:52:57.220178 kernel: raid6: int64x2 gen() 6076 MB/s
May 14 04:52:57.237188 kernel: raid6: int64x1 gen() 5037 MB/s
May 14 04:52:57.237223 kernel: raid6: using algorithm neonx4 gen() 15760 MB/s
May 14 04:52:57.254187 kernel: raid6: .... xor() 12342 MB/s, rmw enabled
May 14 04:52:57.254207 kernel: raid6: using neon recovery algorithm
May 14 04:52:57.259182 kernel: xor: measuring software checksum speed
May 14 04:52:57.259195 kernel: 8regs : 21630 MB/sec
May 14 04:52:57.260649 kernel: 32regs : 19759 MB/sec
May 14 04:52:57.260671 kernel: arm64_neon : 28080 MB/sec
May 14 04:52:57.260688 kernel: xor: using function: arm64_neon (28080 MB/sec)
May 14 04:52:57.313182 kernel: Btrfs loaded, zoned=no, fsverity=no
May 14 04:52:57.319631 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 14 04:52:57.322047 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 04:52:57.352958 systemd-udevd[498]: Using default interface naming scheme 'v255'.
May 14 04:52:57.357030 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 04:52:57.359385 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 14 04:52:57.384620 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation
May 14 04:52:57.405370 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 04:52:57.407641 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 04:52:57.451212 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 04:52:57.453397 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 14 04:52:57.501940 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 14 04:52:57.511434 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 14 04:52:57.511525 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 14 04:52:57.511542 kernel: GPT:9289727 != 19775487
May 14 04:52:57.511551 kernel: GPT:Alternate GPT header not at the end of the disk.
May 14 04:52:57.511560 kernel: GPT:9289727 != 19775487
May 14 04:52:57.511568 kernel: GPT: Use GNU Parted to correct GPT errors.
May 14 04:52:57.511576 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 04:52:57.503282 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 04:52:57.503404 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 04:52:57.513045 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 04:52:57.515864 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 04:52:57.540301 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 14 04:52:57.542277 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 04:52:57.550230 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 14 04:52:57.558803 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 14 04:52:57.566175 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 04:52:57.571929 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 14 04:52:57.573036 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 14 04:52:57.575263 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 04:52:57.577614 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 04:52:57.579179 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 04:52:57.581745 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 14 04:52:57.583512 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 14 04:52:57.613386 disk-uuid[590]: Primary Header is updated.
May 14 04:52:57.613386 disk-uuid[590]: Secondary Entries is updated.
May 14 04:52:57.613386 disk-uuid[590]: Secondary Header is updated.
May 14 04:52:57.616520 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 14 04:52:57.618227 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 04:52:58.625330 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 04:52:58.628193 kernel: block device autoloading is deprecated and will be removed.
May 14 04:52:58.628222 disk-uuid[594]: The operation has completed successfully.
May 14 04:52:58.657017 systemd[1]: disk-uuid.service: Deactivated successfully.
May 14 04:52:58.657127 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 14 04:52:58.684800 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 14 04:52:58.700678 sh[612]: Success
May 14 04:52:58.715557 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 14 04:52:58.715601 kernel: device-mapper: uevent: version 1.0.3
May 14 04:52:58.716442 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 14 04:52:58.727189 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
May 14 04:52:58.750571 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 14 04:52:58.753082 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 14 04:52:58.773881 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 14 04:52:58.781014 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 14 04:52:58.781044 kernel: BTRFS: device fsid 73dd31f4-39c4-4cc0-95ea-0c124bed739c devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (624)
May 14 04:52:58.782731 kernel: BTRFS info (device dm-0): first mount of filesystem 73dd31f4-39c4-4cc0-95ea-0c124bed739c
May 14 04:52:58.782757 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 14 04:52:58.782767 kernel: BTRFS info (device dm-0): using free-space-tree
May 14 04:52:58.786318 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 14 04:52:58.787301 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 14 04:52:58.788402 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 14 04:52:58.789059 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 14 04:52:58.808455 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 14 04:52:58.821182 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (653)
May 14 04:52:58.822754 kernel: BTRFS info (device vda6): first mount of filesystem 9734c607-12cd-4e4b-b169-9d2d51a1b870
May 14 04:52:58.822784 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 14 04:52:58.822795 kernel: BTRFS info (device vda6): using free-space-tree
May 14 04:52:58.832081 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 14 04:52:58.834389 kernel: BTRFS info (device vda6): last unmount of filesystem 9734c607-12cd-4e4b-b169-9d2d51a1b870
May 14 04:52:58.834042 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 14 04:52:58.892451 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 04:52:58.897280 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 04:52:58.939553 systemd-networkd[796]: lo: Link UP
May 14 04:52:58.939562 systemd-networkd[796]: lo: Gained carrier
May 14 04:52:58.941256 systemd-networkd[796]: Enumeration completed
May 14 04:52:58.943065 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 04:52:58.943501 systemd-networkd[796]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 04:52:58.943505 systemd-networkd[796]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 04:52:58.944188 systemd-networkd[796]: eth0: Link UP
May 14 04:52:58.944191 systemd-networkd[796]: eth0: Gained carrier
May 14 04:52:58.944202 systemd-networkd[796]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 04:52:58.945964 systemd[1]: Reached target network.target - Network.
May 14 04:52:58.972212 systemd-networkd[796]: eth0: DHCPv4 address 10.0.0.69/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 04:52:58.979894 ignition[704]: Ignition 2.21.0
May 14 04:52:58.979906 ignition[704]: Stage: fetch-offline
May 14 04:52:58.979931 ignition[704]: no configs at "/usr/lib/ignition/base.d"
May 14 04:52:58.979939 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 04:52:58.980108 ignition[704]: parsed url from cmdline: ""
May 14 04:52:58.980111 ignition[704]: no config URL provided
May 14 04:52:58.980115 ignition[704]: reading system config file "/usr/lib/ignition/user.ign"
May 14 04:52:58.980121 ignition[704]: no config at "/usr/lib/ignition/user.ign"
May 14 04:52:58.980137 ignition[704]: op(1): [started] loading QEMU firmware config module
May 14 04:52:58.980141 ignition[704]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 14 04:52:58.991814 ignition[704]: op(1): [finished] loading QEMU firmware config module
May 14 04:52:59.030282 ignition[704]: parsing config with SHA512: 3e27c187c02edff6a5de1016457142a1e82a62aca49e8a13320be1b84ee9f1818006030cd620dc720d103eeab502ec8f6be51ec6586f4e41686a68be54744216
May 14 04:52:59.037512 unknown[704]: fetched base config from "system"
May 14 04:52:59.037524 unknown[704]: fetched user config from "qemu"
May 14 04:52:59.037906 ignition[704]: fetch-offline: fetch-offline passed
May 14 04:52:59.039642 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 04:52:59.037955 ignition[704]: Ignition finished successfully
May 14 04:52:59.041527 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 14 04:52:59.042269 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 14 04:52:59.065869 ignition[811]: Ignition 2.21.0
May 14 04:52:59.065884 ignition[811]: Stage: kargs
May 14 04:52:59.066006 ignition[811]: no configs at "/usr/lib/ignition/base.d"
May 14 04:52:59.066014 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 04:52:59.068574 ignition[811]: kargs: kargs passed
May 14 04:52:59.068632 ignition[811]: Ignition finished successfully
May 14 04:52:59.073069 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 14 04:52:59.075353 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 14 04:52:59.103480 ignition[819]: Ignition 2.21.0
May 14 04:52:59.103498 ignition[819]: Stage: disks
May 14 04:52:59.103670 ignition[819]: no configs at "/usr/lib/ignition/base.d"
May 14 04:52:59.103679 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 04:52:59.105656 ignition[819]: disks: disks passed
May 14 04:52:59.107740 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 14 04:52:59.105761 ignition[819]: Ignition finished successfully
May 14 04:52:59.109333 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 14 04:52:59.111101 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 14 04:52:59.112920 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 04:52:59.114832 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 04:52:59.116823 systemd[1]: Reached target basic.target - Basic System.
May 14 04:52:59.119277 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 14 04:52:59.142081 systemd-fsck[829]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 14 04:52:59.146338 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 14 04:52:59.150139 systemd[1]: Mounting sysroot.mount - /sysroot...
May 14 04:52:59.221191 kernel: EXT4-fs (vda9): mounted filesystem 008d778b-58b1-4ebe-9d06-c739d7d81b3b r/w with ordered data mode. Quota mode: none.
May 14 04:52:59.221726 systemd[1]: Mounted sysroot.mount - /sysroot.
May 14 04:52:59.222892 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 14 04:52:59.226719 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 04:52:59.229033 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 14 04:52:59.230061 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 14 04:52:59.230100 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 14 04:52:59.230122 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 04:52:59.250676 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 14 04:52:59.253218 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 14 04:52:59.256108 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (837)
May 14 04:52:59.258470 kernel: BTRFS info (device vda6): first mount of filesystem 9734c607-12cd-4e4b-b169-9d2d51a1b870
May 14 04:52:59.258502 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 14 04:52:59.258550 kernel: BTRFS info (device vda6): using free-space-tree
May 14 04:52:59.263766 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 04:52:59.303028 initrd-setup-root[861]: cut: /sysroot/etc/passwd: No such file or directory
May 14 04:52:59.305961 initrd-setup-root[868]: cut: /sysroot/etc/group: No such file or directory
May 14 04:52:59.308833 initrd-setup-root[875]: cut: /sysroot/etc/shadow: No such file or directory
May 14 04:52:59.312043 initrd-setup-root[882]: cut: /sysroot/etc/gshadow: No such file or directory
May 14 04:52:59.386221 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 14 04:52:59.388143 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 14 04:52:59.389603 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 14 04:52:59.408389 kernel: BTRFS info (device vda6): last unmount of filesystem 9734c607-12cd-4e4b-b169-9d2d51a1b870
May 14 04:52:59.419402 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 14 04:52:59.424756 ignition[950]: INFO : Ignition 2.21.0
May 14 04:52:59.424756 ignition[950]: INFO : Stage: mount
May 14 04:52:59.426417 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 04:52:59.426417 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 04:52:59.429363 ignition[950]: INFO : mount: mount passed
May 14 04:52:59.429363 ignition[950]: INFO : Ignition finished successfully
May 14 04:52:59.428811 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 14 04:52:59.431050 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 14 04:52:59.780324 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 14 04:52:59.781907 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 04:52:59.804494 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (963)
May 14 04:52:59.804526 kernel: BTRFS info (device vda6): first mount of filesystem 9734c607-12cd-4e4b-b169-9d2d51a1b870
May 14 04:52:59.804536 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 14 04:52:59.805200 kernel: BTRFS info (device vda6): using free-space-tree
May 14 04:52:59.808219 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 04:52:59.848580 ignition[980]: INFO : Ignition 2.21.0 May 14 04:52:59.848580 ignition[980]: INFO : Stage: files May 14 04:52:59.850860 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 04:52:59.850860 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 04:52:59.850860 ignition[980]: DEBUG : files: compiled without relabeling support, skipping May 14 04:52:59.854437 ignition[980]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 14 04:52:59.854437 ignition[980]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 14 04:52:59.854437 ignition[980]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 14 04:52:59.854437 ignition[980]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 14 04:52:59.854437 ignition[980]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 14 04:52:59.854213 unknown[980]: wrote ssh authorized keys file for user: core May 14 04:52:59.862142 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 14 04:52:59.862142 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 May 14 04:53:00.286269 systemd-networkd[796]: eth0: Gained IPv6LL May 14 04:53:00.970482 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 14 04:53:04.829881 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 14 04:53:04.829881 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 14 04:53:04.833190 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 14 04:53:05.198101 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 14 04:53:05.335809 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 14 04:53:05.338073 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 14 04:53:05.338073 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 14 04:53:05.338073 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 14 04:53:05.338073 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 14 04:53:05.338073 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 04:53:05.338073 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 04:53:05.338073 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 04:53:05.338073 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" May 14 04:53:05.338073 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 04:53:05.338073 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 04:53:05.338073 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 14 04:53:05.355078 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 14 04:53:05.355078 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 14 04:53:05.355078 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 May 14 04:53:05.602863 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 14 04:53:05.922709 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 14 04:53:05.922709 ignition[980]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 14 04:53:05.926207 ignition[980]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 04:53:05.928499 ignition[980]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 04:53:05.928499 ignition[980]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 14 04:53:05.928499 ignition[980]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 14 04:53:05.933411 ignition[980]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 04:53:05.933411 ignition[980]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 04:53:05.933411 ignition[980]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 14 04:53:05.933411 ignition[980]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 14 04:53:05.948778 ignition[980]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 14 04:53:05.953682 ignition[980]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 14 04:53:05.956320 ignition[980]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 14 04:53:05.956320 ignition[980]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 14 04:53:05.956320 ignition[980]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 14 04:53:05.956320 ignition[980]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 04:53:05.956320 
ignition[980]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 04:53:05.956320 ignition[980]: INFO : files: files passed May 14 04:53:05.956320 ignition[980]: INFO : Ignition finished successfully May 14 04:53:05.956923 systemd[1]: Finished ignition-files.service - Ignition (files). May 14 04:53:05.959854 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 14 04:53:05.964349 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 14 04:53:05.986961 initrd-setup-root-after-ignition[1009]: grep: /sysroot/oem/oem-release: No such file or directory May 14 04:53:05.985468 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 04:53:05.985568 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 14 04:53:05.990667 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 04:53:05.990667 initrd-setup-root-after-ignition[1011]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 14 04:53:05.994763 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 04:53:05.994218 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 04:53:05.996437 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 14 04:53:06.001301 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 14 04:53:06.035267 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 04:53:06.035395 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 14 04:53:06.037505 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 14 04:53:06.039272 systemd[1]: Reached target initrd.target - Initrd Default Target. May 14 04:53:06.041128 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 14 04:53:06.043607 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 14 04:53:06.060228 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 04:53:06.062511 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 14 04:53:06.086811 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 14 04:53:06.087985 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 04:53:06.089967 systemd[1]: Stopped target timers.target - Timer Units. May 14 04:53:06.091661 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 04:53:06.091779 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 04:53:06.094069 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 14 04:53:06.096127 systemd[1]: Stopped target basic.target - Basic System. May 14 04:53:06.097790 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 14 04:53:06.099465 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 14 04:53:06.101278 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 14 04:53:06.103244 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
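The Ignition "files" stage above (the "core" user and its SSH key, the helm and cilium downloads, the Kubernetes sysext link, and the unit presets) is what a provisioning config of roughly the following shape requests. This Butane sketch is reconstructed from the log alone; it would be transpiled to the Ignition 2.21.0 JSON actually consumed at boot, and the SSH key and the prepare-helm.service body are placeholders, since the log does not show them.

  # Hypothetical Butane source (transpile with: butane config.bu > config.ign)
  variant: flatcar
  version: 1.0.0
  passwd:
    users:
      - name: core
        ssh_authorized_keys:
          - "ssh-ed25519 AAAA... placeholder"   # real key not shown in the log
  storage:
    files:
      - path: /opt/helm-v3.17.0-linux-arm64.tar.gz
        contents:
          source: https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz
    links:
      - path: /etc/extensions/kubernetes.raw
        target: /opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw
  systemd:
    units:
      - name: prepare-helm.service
        enabled: true            # matches "setting preset to enabled"
        contents: |
          [Unit]
          Description=Unpack helm to /opt/bin
          [Install]
          WantedBy=multi-user.target
      - name: coreos-metadata.service
        enabled: false           # matches "setting preset to disabled"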
May 14 04:53:06.104338 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 14 04:53:06.105375 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 14 04:53:06.106585 systemd[1]: Stopped target sysinit.target - System Initialization. May 14 04:53:06.108300 systemd[1]: Stopped target local-fs.target - Local File Systems. May 14 04:53:06.110118 systemd[1]: Stopped target swap.target - Swaps. May 14 04:53:06.117234 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 04:53:06.117348 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 14 04:53:06.119985 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 14 04:53:06.122109 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 04:53:06.123959 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 14 04:53:06.124060 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 04:53:06.125682 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 04:53:06.125798 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 14 04:53:06.132838 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 04:53:06.132951 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 14 04:53:06.134838 systemd[1]: Stopped target paths.target - Path Units. May 14 04:53:06.136457 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 04:53:06.137233 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 04:53:06.139587 systemd[1]: Stopped target slices.target - Slice Units. May 14 04:53:06.140971 systemd[1]: Stopped target sockets.target - Socket Units. May 14 04:53:06.142571 systemd[1]: iscsid.socket: Deactivated successfully. May 14 04:53:06.142652 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 14 04:53:06.144121 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 04:53:06.144214 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 04:53:06.145962 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 04:53:06.146072 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 04:53:06.147733 systemd[1]: ignition-files.service: Deactivated successfully. May 14 04:53:06.147843 systemd[1]: Stopped ignition-files.service - Ignition (files). May 14 04:53:06.150126 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 14 04:53:06.151179 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 04:53:06.151301 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 14 04:53:06.159478 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 14 04:53:06.160903 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 04:53:06.161015 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 14 04:53:06.162878 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 04:53:06.162980 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 14 04:53:06.168925 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 04:53:06.169001 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
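The long run of "Stopped target ..." lines above is the normal initrd teardown that initrd-cleanup.service drives before the root switch. On a booted host the same ordering can be inspected after the fact, for example:

  systemctl list-dependencies --reverse initrd-switch-root.target
  systemctl cat initrd-cleanup.service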
May 14 04:53:06.174494 ignition[1036]: INFO : Ignition 2.21.0 May 14 04:53:06.174494 ignition[1036]: INFO : Stage: umount May 14 04:53:06.174494 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 04:53:06.174494 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 04:53:06.172712 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 14 04:53:06.178851 ignition[1036]: INFO : umount: umount passed May 14 04:53:06.178851 ignition[1036]: INFO : Ignition finished successfully May 14 04:53:06.176862 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 04:53:06.176934 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 14 04:53:06.179869 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 04:53:06.179944 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 14 04:53:06.181751 systemd[1]: Stopped target network.target - Network. May 14 04:53:06.182713 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 04:53:06.182768 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 14 04:53:06.184449 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 04:53:06.184489 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 14 04:53:06.186107 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 04:53:06.186152 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 14 04:53:06.187735 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 14 04:53:06.187772 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 14 04:53:06.189230 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 04:53:06.189267 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 14 04:53:06.190933 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 14 04:53:06.192589 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 14 04:53:06.199347 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 04:53:06.200547 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 14 04:53:06.203273 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 14 04:53:06.203483 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 14 04:53:06.203515 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 04:53:06.206759 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 14 04:53:06.206986 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 04:53:06.208220 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 14 04:53:06.210706 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 14 04:53:06.211053 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 14 04:53:06.212118 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 04:53:06.212177 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 14 04:53:06.214875 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 14 04:53:06.215814 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 04:53:06.215864 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
May 14 04:53:06.217903 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 04:53:06.217944 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 04:53:06.220872 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 04:53:06.220910 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 14 04:53:06.222845 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 04:53:06.225352 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 04:53:06.232826 systemd[1]: systemd-udevd.service: Deactivated successfully. May 14 04:53:06.234292 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 04:53:06.236364 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 04:53:06.236399 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 14 04:53:06.238095 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 14 04:53:06.238122 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 14 04:53:06.239862 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 04:53:06.239904 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 14 04:53:06.242508 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 04:53:06.242549 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 14 04:53:06.245014 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 04:53:06.245057 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 04:53:06.248522 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 14 04:53:06.249738 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 14 04:53:06.249801 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 14 04:53:06.252628 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 14 04:53:06.252668 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 04:53:06.255697 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 04:53:06.255747 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 04:53:06.259504 systemd[1]: network-cleanup.service: Deactivated successfully. May 14 04:53:06.259592 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 14 04:53:06.263628 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 14 04:53:06.263716 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 14 04:53:06.265540 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 14 04:53:06.267410 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 14 04:53:06.285923 systemd[1]: Switching root. May 14 04:53:06.321263 systemd-journald[244]: Journal stopped May 14 04:53:07.105510 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). 
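At "Switching root" PID 1 leaves the initrd, and journald is stopped (the SIGTERM above) so it can be restarted from the real root, which is why the journal pauses here. Once the runtime journal has been flushed to disk later in boot, the initrd entries above remain readable:

  journalctl -b 0 -o short-precise   # current boot, initrd included, with the microsecond timestamps used in this log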
May 14 04:53:07.105563 kernel: SELinux: policy capability network_peer_controls=1 May 14 04:53:07.105582 kernel: SELinux: policy capability open_perms=1 May 14 04:53:07.105591 kernel: SELinux: policy capability extended_socket_class=1 May 14 04:53:07.105602 kernel: SELinux: policy capability always_check_network=0 May 14 04:53:07.105616 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 04:53:07.105625 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 04:53:07.105639 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 04:53:07.105648 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 04:53:07.105657 kernel: SELinux: policy capability userspace_initial_context=0 May 14 04:53:07.105666 kernel: audit: type=1403 audit(1747198386.536:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 04:53:07.105680 systemd[1]: Successfully loaded SELinux policy in 28.953ms. May 14 04:53:07.105697 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.091ms. May 14 04:53:07.105709 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 04:53:07.105720 systemd[1]: Detected virtualization kvm. May 14 04:53:07.105730 systemd[1]: Detected architecture arm64. May 14 04:53:07.105740 systemd[1]: Detected first boot. May 14 04:53:07.105750 systemd[1]: Initializing machine ID from VM UUID. May 14 04:53:07.105760 kernel: NET: Registered PF_VSOCK protocol family May 14 04:53:07.105770 zram_generator::config[1081]: No configuration found. May 14 04:53:07.105796 systemd[1]: Populated /etc with preset unit settings. May 14 04:53:07.105809 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 14 04:53:07.105819 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 14 04:53:07.105829 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 14 04:53:07.105838 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 14 04:53:07.105848 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 14 04:53:07.105869 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 14 04:53:07.105880 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 14 04:53:07.105893 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 14 04:53:07.105903 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 14 04:53:07.105915 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 14 04:53:07.105925 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 14 04:53:07.105935 systemd[1]: Created slice user.slice - User and Session Slice. May 14 04:53:07.105945 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 04:53:07.105956 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 04:53:07.105966 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
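The facts systemd 256.8 logs at startup here (its compile-time feature string, the KVM guest, the arm64 architecture) can be re-queried on the running machine:

  systemctl --version    # version plus the same +PAM +AUDIT +SELINUX ... feature flags
  systemd-detect-virt    # prints "kvm" here, matching "Detected virtualization kvm"
  uname -m               # prints "aarch64", matching "Detected architecture arm64"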
May 14 04:53:07.105976 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 14 04:53:07.105986 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 14 04:53:07.105997 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 04:53:07.106008 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 14 04:53:07.106019 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 04:53:07.106029 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 04:53:07.106039 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 14 04:53:07.106049 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 14 04:53:07.106059 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 14 04:53:07.106070 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 14 04:53:07.106081 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 04:53:07.106091 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 04:53:07.106101 systemd[1]: Reached target slices.target - Slice Units. May 14 04:53:07.106111 systemd[1]: Reached target swap.target - Swaps. May 14 04:53:07.106121 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 14 04:53:07.106131 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 14 04:53:07.106141 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 14 04:53:07.106150 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 04:53:07.106214 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 04:53:07.106228 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 04:53:07.106241 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 14 04:53:07.106262 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 14 04:53:07.106272 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 14 04:53:07.106282 systemd[1]: Mounting media.mount - External Media Directory... May 14 04:53:07.106292 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 14 04:53:07.106302 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 14 04:53:07.106312 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 14 04:53:07.106323 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 04:53:07.106334 systemd[1]: Reached target machines.target - Containers. May 14 04:53:07.106344 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 14 04:53:07.106354 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 04:53:07.106365 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 04:53:07.106376 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 14 04:53:07.106386 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
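Two automount points were set up above (boot.automount and the binfmt_misc automount) alongside the listening sockets; both kinds of unit can be confirmed at runtime:

  systemctl list-automounts   # boot.automount, proc-sys-fs-binfmt_misc.automount
  systemctl list-sockets      # the coredump, networkd, udevd, userdbd sockets logged above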
May 14 04:53:07.106396 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 04:53:07.106406 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 04:53:07.106415 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 14 04:53:07.106427 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 04:53:07.106438 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 04:53:07.106448 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 14 04:53:07.106458 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 14 04:53:07.106468 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 14 04:53:07.106477 systemd[1]: Stopped systemd-fsck-usr.service. May 14 04:53:07.106492 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 04:53:07.106502 kernel: loop: module loaded May 14 04:53:07.106513 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 04:53:07.106523 kernel: fuse: init (API version 7.41) May 14 04:53:07.106532 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 04:53:07.106542 kernel: ACPI: bus type drm_connector registered May 14 04:53:07.106551 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 04:53:07.106561 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 14 04:53:07.106572 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 14 04:53:07.106582 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 04:53:07.106593 systemd[1]: verity-setup.service: Deactivated successfully. May 14 04:53:07.106603 systemd[1]: Stopped verity-setup.service. May 14 04:53:07.106613 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 14 04:53:07.106623 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 14 04:53:07.106632 systemd[1]: Mounted media.mount - External Media Directory. May 14 04:53:07.106642 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 14 04:53:07.106676 systemd-journald[1153]: Collecting audit messages is disabled. May 14 04:53:07.106697 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 14 04:53:07.106709 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 14 04:53:07.106719 systemd-journald[1153]: Journal started May 14 04:53:07.106740 systemd-journald[1153]: Runtime Journal (/run/log/journal/b23157ff9d604fe9ae2efa402744d847) is 6M, max 48.5M, 42.4M free. May 14 04:53:06.899839 systemd[1]: Queued start job for default target multi-user.target. May 14 04:53:06.922917 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 14 04:53:06.923271 systemd[1]: systemd-journald.service: Deactivated successfully. May 14 04:53:07.109184 systemd[1]: Started systemd-journald.service - Journal Service. May 14 04:53:07.109916 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
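journald sized its runtime journal at 6M with a 48.5M cap, derived from the size of /run. The caps are configurable through journald.conf; an illustrative drop-in (the values are examples, not this host's configuration):

  # /etc/systemd/journald.conf.d/limits.conf
  [Journal]
  RuntimeMaxUse=48M     # cap for the volatile journal under /run/log/journal
  SystemMaxUse=195M     # cap for the persistent journal under /var/log/journal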
May 14 04:53:07.113192 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 04:53:07.114652 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 14 04:53:07.114823 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 14 04:53:07.117478 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 04:53:07.117639 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 04:53:07.118968 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 04:53:07.119107 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 04:53:07.120551 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 04:53:07.120710 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 04:53:07.122112 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 14 04:53:07.122299 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 14 04:53:07.123687 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 04:53:07.123844 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 04:53:07.125328 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 04:53:07.126808 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 14 04:53:07.128438 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 14 04:53:07.129898 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 14 04:53:07.142398 systemd[1]: Reached target network-pre.target - Preparation for Network. May 14 04:53:07.145144 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 14 04:53:07.147073 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 14 04:53:07.148277 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 14 04:53:07.148313 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 04:53:07.150338 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 14 04:53:07.159886 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 14 04:53:07.161219 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 04:53:07.162441 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 14 04:53:07.164260 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 14 04:53:07.165502 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 04:53:07.166397 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 14 04:53:07.167475 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 04:53:07.168408 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 04:53:07.170624 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 14 04:53:07.175125 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
May 14 04:53:07.177956 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 04:53:07.182114 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 14 04:53:07.184186 kernel: loop0: detected capacity change from 0 to 107312 May 14 04:53:07.184185 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 14 04:53:07.185869 systemd-journald[1153]: Time spent on flushing to /var/log/journal/b23157ff9d604fe9ae2efa402744d847 is 19.746ms for 891 entries. May 14 04:53:07.185869 systemd-journald[1153]: System Journal (/var/log/journal/b23157ff9d604fe9ae2efa402744d847) is 8M, max 195.6M, 187.6M free. May 14 04:53:07.213244 systemd-journald[1153]: Received client request to flush runtime journal. May 14 04:53:07.213297 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 14 04:53:07.187893 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 14 04:53:07.194301 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 14 04:53:07.198304 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 14 04:53:07.209673 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 04:53:07.214479 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 14 04:53:07.224293 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 14 04:53:07.228187 kernel: loop1: detected capacity change from 0 to 201592 May 14 04:53:07.236504 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 14 04:53:07.239665 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 04:53:07.252190 kernel: loop2: detected capacity change from 0 to 138376 May 14 04:53:07.268674 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. May 14 04:53:07.268690 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. May 14 04:53:07.272657 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 04:53:07.278191 kernel: loop3: detected capacity change from 0 to 107312 May 14 04:53:07.284177 kernel: loop4: detected capacity change from 0 to 201592 May 14 04:53:07.291188 kernel: loop5: detected capacity change from 0 to 138376 May 14 04:53:07.299093 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 14 04:53:07.299465 (sd-merge)[1223]: Merged extensions into '/usr'. May 14 04:53:07.303146 systemd[1]: Reload requested from client PID 1198 ('systemd-sysext') (unit systemd-sysext.service)... May 14 04:53:07.303159 systemd[1]: Reloading... May 14 04:53:07.353193 zram_generator::config[1250]: No configuration found. May 14 04:53:07.429302 ldconfig[1193]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 14 04:53:07.432814 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 04:53:07.494644 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 14 04:53:07.494714 systemd[1]: Reloading finished in 191 ms. May 14 04:53:07.522536 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
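The (sd-merge) lines are systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr, followed by the daemon reload so units from the merged images become visible. The same merge can be listed and redone by hand:

  systemd-sysext list      # discovered extension images and their merge state
  systemd-sysext refresh   # unmerge and re-merge after changing /etc/extensions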
May 14 04:53:07.525541 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 14 04:53:07.539342 systemd[1]: Starting ensure-sysext.service... May 14 04:53:07.540935 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 04:53:07.551756 systemd[1]: Reload requested from client PID 1283 ('systemctl') (unit ensure-sysext.service)... May 14 04:53:07.551784 systemd[1]: Reloading... May 14 04:53:07.561412 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 14 04:53:07.561789 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 14 04:53:07.562020 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 14 04:53:07.562351 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 14 04:53:07.563022 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 14 04:53:07.563324 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. May 14 04:53:07.563436 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. May 14 04:53:07.566021 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot. May 14 04:53:07.566112 systemd-tmpfiles[1284]: Skipping /boot May 14 04:53:07.575015 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot. May 14 04:53:07.575028 systemd-tmpfiles[1284]: Skipping /boot May 14 04:53:07.602196 zram_generator::config[1311]: No configuration found. May 14 04:53:07.665120 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 04:53:07.726287 systemd[1]: Reloading finished in 174 ms. May 14 04:53:07.748595 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 14 04:53:07.754498 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 04:53:07.764289 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 04:53:07.766421 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 14 04:53:07.768548 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 14 04:53:07.771189 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 04:53:07.775893 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 04:53:07.779470 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 14 04:53:07.793794 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 14 04:53:07.800054 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 14 04:53:07.802912 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 04:53:07.809312 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 04:53:07.813408 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
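"Duplicate line for path" means two tmpfiles.d fragments declare the same path and the later one is ignored; these are warnings, not failures. As an illustration, a journal directory line in tmpfiles.d syntax looks like the following, and a second fragment repeating the path would trigger exactly this message (the line shown is illustrative, not quoted from the image):

  # tmpfiles.d format: Type Path Mode User Group Age Argument
  d /var/log/journal 2755 root systemd-journal - -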
May 14 04:53:07.817343 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 04:53:07.818521 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 04:53:07.818630 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 04:53:07.818792 systemd-udevd[1352]: Using default interface naming scheme 'v255'. May 14 04:53:07.820733 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 14 04:53:07.823089 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 14 04:53:07.825021 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 04:53:07.825225 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 04:53:07.827282 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 04:53:07.827474 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 04:53:07.831754 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 14 04:53:07.840431 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 14 04:53:07.842394 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 04:53:07.844938 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 04:53:07.845095 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 04:53:07.849715 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 04:53:07.851832 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 04:53:07.855727 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 04:53:07.857051 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 04:53:07.857179 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 04:53:07.860892 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 04:53:07.861914 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 04:53:07.868102 augenrules[1410]: No rules May 14 04:53:07.877663 systemd[1]: Finished ensure-sysext.service. May 14 04:53:07.878670 systemd[1]: audit-rules.service: Deactivated successfully. May 14 04:53:07.878866 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 04:53:07.883477 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 14 04:53:07.887899 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 04:53:07.891769 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 04:53:07.896231 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
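audit-rules.service ran augenrules, which found no *.rules fragments to compile ("No rules"), so the kernel keeps its default audit configuration. Had the image shipped one, it would live under /etc/audit/rules.d/; a hypothetical example:

  # /etc/audit/rules.d/10-example.rules  (hypothetical; this image ships none)
  -w /etc/flatcar/update.conf -p wa -k update-conf   # audit writes and attribute changes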
May 14 04:53:07.898917 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 04:53:07.898964 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 04:53:07.900961 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 14 04:53:07.902059 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 04:53:07.902501 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 04:53:07.902662 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 04:53:07.906717 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 04:53:07.908210 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 04:53:07.911108 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 04:53:07.911377 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 04:53:07.912938 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 04:53:07.913077 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 04:53:07.918745 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 04:53:07.918841 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 04:53:07.943966 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 14 04:53:07.980544 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 14 04:53:07.981660 systemd[1]: Reached target time-set.target - System Time Set. May 14 04:53:07.997221 systemd-networkd[1409]: lo: Link UP May 14 04:53:07.997229 systemd-networkd[1409]: lo: Gained carrier May 14 04:53:07.998441 systemd-networkd[1409]: Enumeration completed May 14 04:53:07.999860 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 04:53:08.001686 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 04:53:08.001699 systemd-networkd[1409]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 04:53:08.002374 systemd-networkd[1409]: eth0: Link UP May 14 04:53:08.002479 systemd-networkd[1409]: eth0: Gained carrier May 14 04:53:08.002497 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 04:53:08.005116 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 14 04:53:08.007719 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 14 04:53:08.012850 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 14 04:53:08.015393 systemd-resolved[1350]: Positive Trust Anchors: May 14 04:53:08.015408 systemd-resolved[1350]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 04:53:08.015440 systemd-resolved[1350]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 04:53:08.018017 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 14 04:53:08.020239 systemd-networkd[1409]: eth0: DHCPv4 address 10.0.0.69/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 04:53:08.022515 systemd-timesyncd[1430]: Network configuration changed, trying to establish connection. May 14 04:53:08.025010 systemd-resolved[1350]: Defaulting to hostname 'linux'. May 14 04:53:08.032287 systemd-timesyncd[1430]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 14 04:53:08.032339 systemd-timesyncd[1430]: Initial clock synchronization to Wed 2025-05-14 04:53:08.353934 UTC. May 14 04:53:08.034917 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 14 04:53:08.038634 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 04:53:08.040882 systemd[1]: Reached target network.target - Network. May 14 04:53:08.041808 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 04:53:08.042962 systemd[1]: Reached target sysinit.target - System Initialization. May 14 04:53:08.044136 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 14 04:53:08.045353 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 14 04:53:08.046685 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 04:53:08.047839 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 14 04:53:08.049069 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 14 04:53:08.050326 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 04:53:08.050365 systemd[1]: Reached target paths.target - Path Units. May 14 04:53:08.053201 systemd[1]: Reached target timers.target - Timer Units. May 14 04:53:08.054926 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 04:53:08.057126 systemd[1]: Starting docker.socket - Docker Socket for the API... May 14 04:53:08.063413 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 04:53:08.065361 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 14 04:53:08.067299 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 14 04:53:08.077986 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 14 04:53:08.080642 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
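eth0 was matched by the shipped catch-all /usr/lib/systemd/network/zz-default.network (hence the "potentially unpredictable interface name" note) and obtained 10.0.0.69/16 over DHCP, while the ". IN DS 20326 8 2 ..." record listed by systemd-resolved is the standard built-in DNSSEC root trust anchor. A functionally similar catch-all network unit, simplified rather than quoted from the shipped file:

  # zz-default.network, simplified sketch
  [Match]
  Name=*
  [Network]
  DHCP=yes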
May 14 04:53:08.084193 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 14 04:53:08.086417 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 14 04:53:08.089517 systemd[1]: Reached target sockets.target - Socket Units. May 14 04:53:08.090667 systemd[1]: Reached target basic.target - Basic System. May 14 04:53:08.091659 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 14 04:53:08.091745 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 04:53:08.092751 systemd[1]: Starting containerd.service - containerd container runtime... May 14 04:53:08.096278 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 14 04:53:08.099343 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 14 04:53:08.113013 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 14 04:53:08.114976 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 14 04:53:08.115960 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 04:53:08.117046 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 14 04:53:08.121340 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 14 04:53:08.123637 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 14 04:53:08.124805 jq[1469]: false May 14 04:53:08.125639 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 14 04:53:08.138469 systemd[1]: Starting systemd-logind.service - User Login Management... May 14 04:53:08.140276 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 04:53:08.140416 extend-filesystems[1470]: Found loop3 May 14 04:53:08.140416 extend-filesystems[1470]: Found loop4 May 14 04:53:08.140416 extend-filesystems[1470]: Found loop5 May 14 04:53:08.145734 extend-filesystems[1470]: Found vda May 14 04:53:08.145734 extend-filesystems[1470]: Found vda1 May 14 04:53:08.145734 extend-filesystems[1470]: Found vda2 May 14 04:53:08.145734 extend-filesystems[1470]: Found vda3 May 14 04:53:08.145734 extend-filesystems[1470]: Found usr May 14 04:53:08.145734 extend-filesystems[1470]: Found vda4 May 14 04:53:08.145734 extend-filesystems[1470]: Found vda6 May 14 04:53:08.145734 extend-filesystems[1470]: Found vda7 May 14 04:53:08.145734 extend-filesystems[1470]: Found vda9 May 14 04:53:08.145734 extend-filesystems[1470]: Checking size of /dev/vda9 May 14 04:53:08.140730 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 04:53:08.142227 systemd[1]: Starting update-engine.service - Update Engine... May 14 04:53:08.147626 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 04:53:08.163686 jq[1487]: true May 14 04:53:08.158201 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 14 04:53:08.160770 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
May 14 04:53:08.161063 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 04:53:08.161549 systemd[1]: motdgen.service: Deactivated successfully. May 14 04:53:08.161706 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 14 04:53:08.164400 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 04:53:08.165287 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 14 04:53:08.179320 extend-filesystems[1470]: Resized partition /dev/vda9 May 14 04:53:08.182062 extend-filesystems[1501]: resize2fs 1.47.2 (1-Jan-2025) May 14 04:53:08.184639 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 04:53:08.190929 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 14 04:53:08.205663 jq[1491]: true May 14 04:53:08.207878 (ntainerd)[1504]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 04:53:08.214200 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 14 04:53:08.220593 update_engine[1483]: I20250514 04:53:08.219269 1483 main.cc:92] Flatcar Update Engine starting May 14 04:53:08.226787 extend-filesystems[1501]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 14 04:53:08.226787 extend-filesystems[1501]: old_desc_blocks = 1, new_desc_blocks = 1 May 14 04:53:08.226787 extend-filesystems[1501]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 14 04:53:08.231769 extend-filesystems[1470]: Resized filesystem in /dev/vda9 May 14 04:53:08.228653 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 04:53:08.234275 dbus-daemon[1467]: [system] SELinux support is enabled May 14 04:53:08.228859 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 04:53:08.234657 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 14 04:53:08.236216 tar[1490]: linux-arm64/LICENSE May 14 04:53:08.236216 tar[1490]: linux-arm64/helm May 14 04:53:08.238655 update_engine[1483]: I20250514 04:53:08.238613 1483 update_check_scheduler.cc:74] Next update check in 7m12s May 14 04:53:08.241132 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 04:53:08.242184 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 14 04:53:08.243473 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 04:53:08.243491 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 14 04:53:08.246312 systemd[1]: Started update-engine.service - Update Engine. May 14 04:53:08.250426 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 14 04:53:08.254953 systemd-logind[1481]: Watching system buttons on /dev/input/event0 (Power Button) May 14 04:53:08.255144 systemd-logind[1481]: New seat seat0. May 14 04:53:08.255688 systemd[1]: Started systemd-logind.service - User Login Management. May 14 04:53:08.288926 bash[1529]: Updated "/home/core/.ssh/authorized_keys" May 14 04:53:08.291283 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
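extend-filesystems grew the root ext4 filesystem on /dev/vda9 online from 553472 to 1864699 4 KiB blocks, i.e. from roughly 2.1 GiB to roughly 7.1 GiB, after which update-engine scheduled its first check. The resize step the unit automates is equivalent to:

  sudo resize2fs /dev/vda9    # online-grow a mounted ext4 filesystem to fill its partition
  # 1864699 blocks x 4096 bytes/block ≈ 7.1 GiB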
May 14 04:53:08.293992 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 14 04:53:08.328612 locksmithd[1515]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 04:53:08.338287 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 04:53:08.435993 containerd[1504]: time="2025-05-14T04:53:08Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 14 04:53:08.436636 containerd[1504]: time="2025-05-14T04:53:08.436489960Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 14 04:53:08.449976 containerd[1504]: time="2025-05-14T04:53:08.449941960Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="20.4µs" May 14 04:53:08.450066 containerd[1504]: time="2025-05-14T04:53:08.450051240Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 14 04:53:08.450131 containerd[1504]: time="2025-05-14T04:53:08.450117840Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 14 04:53:08.450356 containerd[1504]: time="2025-05-14T04:53:08.450336520Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 14 04:53:08.450436 containerd[1504]: time="2025-05-14T04:53:08.450421040Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 14 04:53:08.450499 containerd[1504]: time="2025-05-14T04:53:08.450486800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 04:53:08.450603 containerd[1504]: time="2025-05-14T04:53:08.450583320Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 04:53:08.450654 containerd[1504]: time="2025-05-14T04:53:08.450641880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 04:53:08.450929 containerd[1504]: time="2025-05-14T04:53:08.450905040Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 04:53:08.451001 containerd[1504]: time="2025-05-14T04:53:08.450987080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 04:53:08.451048 containerd[1504]: time="2025-05-14T04:53:08.451036640Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 04:53:08.451093 containerd[1504]: time="2025-05-14T04:53:08.451081160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 14 04:53:08.451251 containerd[1504]: time="2025-05-14T04:53:08.451234680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 14 04:53:08.451504 containerd[1504]: time="2025-05-14T04:53:08.451481840Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 04:53:08.451586 containerd[1504]: time="2025-05-14T04:53:08.451571640Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 04:53:08.451634 containerd[1504]: time="2025-05-14T04:53:08.451621160Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 14 04:53:08.451723 containerd[1504]: time="2025-05-14T04:53:08.451709480Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 14 04:53:08.452037 containerd[1504]: time="2025-05-14T04:53:08.452019920Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 14 04:53:08.452154 containerd[1504]: time="2025-05-14T04:53:08.452137440Z" level=info msg="metadata content store policy set" policy=shared May 14 04:53:08.455155 containerd[1504]: time="2025-05-14T04:53:08.455130320Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 14 04:53:08.455278 containerd[1504]: time="2025-05-14T04:53:08.455260920Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 14 04:53:08.455359 containerd[1504]: time="2025-05-14T04:53:08.455346120Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 14 04:53:08.455436 containerd[1504]: time="2025-05-14T04:53:08.455420720Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 14 04:53:08.455486 containerd[1504]: time="2025-05-14T04:53:08.455474800Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 14 04:53:08.455536 containerd[1504]: time="2025-05-14T04:53:08.455524040Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 14 04:53:08.455587 containerd[1504]: time="2025-05-14T04:53:08.455574960Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 14 04:53:08.455639 containerd[1504]: time="2025-05-14T04:53:08.455626080Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 14 04:53:08.455696 containerd[1504]: time="2025-05-14T04:53:08.455684280Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 14 04:53:08.455746 containerd[1504]: time="2025-05-14T04:53:08.455734240Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 14 04:53:08.455821 containerd[1504]: time="2025-05-14T04:53:08.455806800Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 14 04:53:08.455872 containerd[1504]: time="2025-05-14T04:53:08.455860400Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 14 04:53:08.456021 containerd[1504]: time="2025-05-14T04:53:08.456002040Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 14 04:53:08.456090 containerd[1504]: time="2025-05-14T04:53:08.456075920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers 
type=io.containerd.grpc.v1 May 14 04:53:08.456144 containerd[1504]: time="2025-05-14T04:53:08.456132360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 14 04:53:08.456223 containerd[1504]: time="2025-05-14T04:53:08.456209560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 14 04:53:08.456290 containerd[1504]: time="2025-05-14T04:53:08.456275800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 14 04:53:08.456341 containerd[1504]: time="2025-05-14T04:53:08.456327880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 14 04:53:08.456394 containerd[1504]: time="2025-05-14T04:53:08.456382720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 14 04:53:08.456450 containerd[1504]: time="2025-05-14T04:53:08.456437760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 14 04:53:08.456506 containerd[1504]: time="2025-05-14T04:53:08.456493840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 14 04:53:08.456554 containerd[1504]: time="2025-05-14T04:53:08.456543040Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 14 04:53:08.456602 containerd[1504]: time="2025-05-14T04:53:08.456591120Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 14 04:53:08.456852 containerd[1504]: time="2025-05-14T04:53:08.456835680Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 14 04:53:08.456908 containerd[1504]: time="2025-05-14T04:53:08.456897560Z" level=info msg="Start snapshots syncer" May 14 04:53:08.456991 containerd[1504]: time="2025-05-14T04:53:08.456971160Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 14 04:53:08.457286 containerd[1504]: time="2025-05-14T04:53:08.457250960Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 14 04:53:08.457561 containerd[1504]: time="2025-05-14T04:53:08.457539680Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 14 04:53:08.457717 containerd[1504]: time="2025-05-14T04:53:08.457700560Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 14 04:53:08.457911 containerd[1504]: time="2025-05-14T04:53:08.457891200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 04:53:08.457995 containerd[1504]: time="2025-05-14T04:53:08.457980240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 04:53:08.458045 containerd[1504]: time="2025-05-14T04:53:08.458033640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 04:53:08.458106 containerd[1504]: time="2025-05-14T04:53:08.458092240Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 04:53:08.458158 containerd[1504]: time="2025-05-14T04:53:08.458146320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 04:53:08.458225 containerd[1504]: time="2025-05-14T04:53:08.458212440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 04:53:08.458273 containerd[1504]: time="2025-05-14T04:53:08.458261200Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 04:53:08.458340 containerd[1504]: time="2025-05-14T04:53:08.458325680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 04:53:08.458390 containerd[1504]: 
time="2025-05-14T04:53:08.458378040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 04:53:08.458438 containerd[1504]: time="2025-05-14T04:53:08.458426080Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 14 04:53:08.458556 containerd[1504]: time="2025-05-14T04:53:08.458538360Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 04:53:08.458614 containerd[1504]: time="2025-05-14T04:53:08.458600280Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 04:53:08.458660 containerd[1504]: time="2025-05-14T04:53:08.458647720Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 04:53:08.458707 containerd[1504]: time="2025-05-14T04:53:08.458694960Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 04:53:08.458750 containerd[1504]: time="2025-05-14T04:53:08.458739400Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 04:53:08.458821 containerd[1504]: time="2025-05-14T04:53:08.458807800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 04:53:08.458880 containerd[1504]: time="2025-05-14T04:53:08.458867520Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 04:53:08.458996 containerd[1504]: time="2025-05-14T04:53:08.458985200Z" level=info msg="runtime interface created" May 14 04:53:08.459188 containerd[1504]: time="2025-05-14T04:53:08.459026600Z" level=info msg="created NRI interface" May 14 04:53:08.459188 containerd[1504]: time="2025-05-14T04:53:08.459040520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 04:53:08.459188 containerd[1504]: time="2025-05-14T04:53:08.459054160Z" level=info msg="Connect containerd service" May 14 04:53:08.459188 containerd[1504]: time="2025-05-14T04:53:08.459083200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 04:53:08.459874 containerd[1504]: time="2025-05-14T04:53:08.459844840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 04:53:08.569056 containerd[1504]: time="2025-05-14T04:53:08.568939040Z" level=info msg="Start subscribing containerd event" May 14 04:53:08.569505 containerd[1504]: time="2025-05-14T04:53:08.569185280Z" level=info msg="Start recovering state" May 14 04:53:08.569505 containerd[1504]: time="2025-05-14T04:53:08.569275280Z" level=info msg="Start event monitor" May 14 04:53:08.569505 containerd[1504]: time="2025-05-14T04:53:08.569288960Z" level=info msg="Start cni network conf syncer for default" May 14 04:53:08.569505 containerd[1504]: time="2025-05-14T04:53:08.569301120Z" level=info msg="Start streaming server" May 14 04:53:08.569505 containerd[1504]: time="2025-05-14T04:53:08.569309640Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 04:53:08.569505 containerd[1504]: 
time="2025-05-14T04:53:08.569317040Z" level=info msg="runtime interface starting up..." May 14 04:53:08.569505 containerd[1504]: time="2025-05-14T04:53:08.569326080Z" level=info msg="starting plugins..." May 14 04:53:08.569505 containerd[1504]: time="2025-05-14T04:53:08.569341520Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 04:53:08.570038 containerd[1504]: time="2025-05-14T04:53:08.570016160Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 04:53:08.570146 containerd[1504]: time="2025-05-14T04:53:08.570132880Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 04:53:08.570273 containerd[1504]: time="2025-05-14T04:53:08.570261320Z" level=info msg="containerd successfully booted in 0.134836s" May 14 04:53:08.570362 systemd[1]: Started containerd.service - containerd container runtime. May 14 04:53:08.670716 tar[1490]: linux-arm64/README.md May 14 04:53:08.686212 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 04:53:08.773808 sshd_keygen[1488]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 04:53:08.791398 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 04:53:08.794243 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 04:53:08.820126 systemd[1]: issuegen.service: Deactivated successfully. May 14 04:53:08.822205 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 04:53:08.824548 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 04:53:08.848266 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 04:53:08.850722 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 04:53:08.852683 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 14 04:53:08.853889 systemd[1]: Reached target getty.target - Login Prompts. May 14 04:53:09.086714 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 04:53:09.088821 systemd[1]: Started sshd@0-10.0.0.69:22-10.0.0.1:46636.service - OpenSSH per-connection server daemon (10.0.0.1:46636). May 14 04:53:09.161634 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 46636 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:53:09.163236 sshd-session[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:53:09.169158 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 04:53:09.170957 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 04:53:09.177378 systemd-logind[1481]: New session 1 of user core. May 14 04:53:09.197566 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 04:53:09.202191 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 04:53:09.221878 (systemd)[1585]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 04:53:09.223941 systemd-logind[1481]: New session c1 of user core. May 14 04:53:09.337688 systemd[1585]: Queued start job for default target default.target. May 14 04:53:09.355990 systemd[1585]: Created slice app.slice - User Application Slice. May 14 04:53:09.356021 systemd[1585]: Reached target paths.target - Paths. May 14 04:53:09.356054 systemd[1585]: Reached target timers.target - Timers. May 14 04:53:09.357124 systemd[1585]: Starting dbus.socket - D-Bus User Message Bus Socket... 
May 14 04:53:09.365371 systemd[1585]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 04:53:09.365426 systemd[1585]: Reached target sockets.target - Sockets. May 14 04:53:09.365461 systemd[1585]: Reached target basic.target - Basic System. May 14 04:53:09.365488 systemd[1585]: Reached target default.target - Main User Target. May 14 04:53:09.365512 systemd[1585]: Startup finished in 136ms. May 14 04:53:09.365684 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 04:53:09.367923 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 04:53:09.432652 systemd[1]: Started sshd@1-10.0.0.69:22-10.0.0.1:46650.service - OpenSSH per-connection server daemon (10.0.0.1:46650). May 14 04:53:09.484950 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 46650 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:53:09.486158 sshd-session[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:53:09.490262 systemd-logind[1481]: New session 2 of user core. May 14 04:53:09.496339 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 04:53:09.548110 sshd[1598]: Connection closed by 10.0.0.1 port 46650 May 14 04:53:09.548509 sshd-session[1596]: pam_unix(sshd:session): session closed for user core May 14 04:53:09.563171 systemd[1]: sshd@1-10.0.0.69:22-10.0.0.1:46650.service: Deactivated successfully. May 14 04:53:09.565759 systemd[1]: session-2.scope: Deactivated successfully. May 14 04:53:09.566591 systemd-logind[1481]: Session 2 logged out. Waiting for processes to exit. May 14 04:53:09.569818 systemd[1]: Started sshd@2-10.0.0.69:22-10.0.0.1:46654.service - OpenSSH per-connection server daemon (10.0.0.1:46654). May 14 04:53:09.571787 systemd-logind[1481]: Removed session 2. May 14 04:53:09.621302 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 46654 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:53:09.622276 sshd-session[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:53:09.626617 systemd-logind[1481]: New session 3 of user core. May 14 04:53:09.631385 systemd-networkd[1409]: eth0: Gained IPv6LL May 14 04:53:09.632377 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 04:53:09.634647 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 04:53:09.636808 systemd[1]: Reached target network-online.target - Network is Online. May 14 04:53:09.639599 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 14 04:53:09.641744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 04:53:09.658074 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 04:53:09.678862 systemd[1]: coreos-metadata.service: Deactivated successfully. May 14 04:53:09.679061 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 14 04:53:09.680678 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 04:53:09.683990 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 04:53:09.712687 sshd[1614]: Connection closed by 10.0.0.1 port 46654 May 14 04:53:09.713353 sshd-session[1604]: pam_unix(sshd:session): session closed for user core May 14 04:53:09.717148 systemd[1]: sshd@2-10.0.0.69:22-10.0.0.1:46654.service: Deactivated successfully. 
May 14 04:53:09.718586 systemd[1]: session-3.scope: Deactivated successfully. May 14 04:53:09.719216 systemd-logind[1481]: Session 3 logged out. Waiting for processes to exit. May 14 04:53:09.720850 systemd-logind[1481]: Removed session 3. May 14 04:53:10.180676 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 04:53:10.182328 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 04:53:10.183933 (kubelet)[1634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 04:53:10.184115 systemd[1]: Startup finished in 2.075s (kernel) + 9.931s (initrd) + 3.683s (userspace) = 15.689s. May 14 04:53:10.590278 kubelet[1634]: E0514 04:53:10.588698 1634 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 04:53:10.591146 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 04:53:10.591299 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 04:53:10.591641 systemd[1]: kubelet.service: Consumed 764ms CPU time, 247M memory peak. May 14 04:53:19.937108 systemd[1]: Started sshd@3-10.0.0.69:22-10.0.0.1:56428.service - OpenSSH per-connection server daemon (10.0.0.1:56428). May 14 04:53:19.996075 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 56428 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:53:19.997270 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:53:20.001457 systemd-logind[1481]: New session 4 of user core. May 14 04:53:20.012333 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 04:53:20.063914 sshd[1650]: Connection closed by 10.0.0.1 port 56428 May 14 04:53:20.064294 sshd-session[1648]: pam_unix(sshd:session): session closed for user core May 14 04:53:20.073074 systemd[1]: sshd@3-10.0.0.69:22-10.0.0.1:56428.service: Deactivated successfully. May 14 04:53:20.076759 systemd[1]: session-4.scope: Deactivated successfully. May 14 04:53:20.077706 systemd-logind[1481]: Session 4 logged out. Waiting for processes to exit. May 14 04:53:20.081144 systemd-logind[1481]: Removed session 4. May 14 04:53:20.082101 systemd[1]: Started sshd@4-10.0.0.69:22-10.0.0.1:56436.service - OpenSSH per-connection server daemon (10.0.0.1:56436). May 14 04:53:20.139755 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 56436 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:53:20.140864 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:53:20.144265 systemd-logind[1481]: New session 5 of user core. May 14 04:53:20.153364 systemd[1]: Started session-5.scope - Session 5 of User core. May 14 04:53:20.200358 sshd[1658]: Connection closed by 10.0.0.1 port 56436 May 14 04:53:20.200194 sshd-session[1656]: pam_unix(sshd:session): session closed for user core May 14 04:53:20.208869 systemd[1]: sshd@4-10.0.0.69:22-10.0.0.1:56436.service: Deactivated successfully. May 14 04:53:20.210109 systemd[1]: session-5.scope: Deactivated successfully. May 14 04:53:20.211825 systemd-logind[1481]: Session 5 logged out. Waiting for processes to exit. 
May 14 04:53:20.214574 systemd[1]: Started sshd@5-10.0.0.69:22-10.0.0.1:56450.service - OpenSSH per-connection server daemon (10.0.0.1:56450). May 14 04:53:20.215140 systemd-logind[1481]: Removed session 5. May 14 04:53:20.267043 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 56450 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:53:20.268238 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:53:20.272388 systemd-logind[1481]: New session 6 of user core. May 14 04:53:20.285316 systemd[1]: Started session-6.scope - Session 6 of User core. May 14 04:53:20.336361 sshd[1666]: Connection closed by 10.0.0.1 port 56450 May 14 04:53:20.336657 sshd-session[1664]: pam_unix(sshd:session): session closed for user core May 14 04:53:20.347052 systemd[1]: sshd@5-10.0.0.69:22-10.0.0.1:56450.service: Deactivated successfully. May 14 04:53:20.348471 systemd[1]: session-6.scope: Deactivated successfully. May 14 04:53:20.350341 systemd-logind[1481]: Session 6 logged out. Waiting for processes to exit. May 14 04:53:20.352502 systemd[1]: Started sshd@6-10.0.0.69:22-10.0.0.1:56456.service - OpenSSH per-connection server daemon (10.0.0.1:56456). May 14 04:53:20.353190 systemd-logind[1481]: Removed session 6. May 14 04:53:20.401358 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 56456 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:53:20.402450 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:53:20.406240 systemd-logind[1481]: New session 7 of user core. May 14 04:53:20.421306 systemd[1]: Started session-7.scope - Session 7 of User core. May 14 04:53:20.481644 sudo[1675]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 04:53:20.481897 sudo[1675]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 04:53:20.496882 sudo[1675]: pam_unix(sudo:session): session closed for user root May 14 04:53:20.498696 sshd[1674]: Connection closed by 10.0.0.1 port 56456 May 14 04:53:20.498593 sshd-session[1672]: pam_unix(sshd:session): session closed for user core May 14 04:53:20.515300 systemd[1]: sshd@6-10.0.0.69:22-10.0.0.1:56456.service: Deactivated successfully. May 14 04:53:20.517530 systemd[1]: session-7.scope: Deactivated successfully. May 14 04:53:20.519800 systemd-logind[1481]: Session 7 logged out. Waiting for processes to exit. May 14 04:53:20.522455 systemd[1]: Started sshd@7-10.0.0.69:22-10.0.0.1:56464.service - OpenSSH per-connection server daemon (10.0.0.1:56464). May 14 04:53:20.523374 systemd-logind[1481]: Removed session 7. May 14 04:53:20.581936 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 56464 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:53:20.583092 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:53:20.587228 systemd-logind[1481]: New session 8 of user core. May 14 04:53:20.597407 systemd[1]: Started session-8.scope - Session 8 of User core. May 14 04:53:20.598116 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 04:53:20.599344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 14 04:53:20.651619 sudo[1688]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 04:53:20.651898 sudo[1688]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 04:53:20.690742 sudo[1688]: pam_unix(sudo:session): session closed for user root May 14 04:53:20.695815 sudo[1687]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 04:53:20.696082 sudo[1687]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 04:53:20.705000 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 04:53:20.750397 augenrules[1714]: No rules May 14 04:53:20.751659 systemd[1]: audit-rules.service: Deactivated successfully. May 14 04:53:20.755336 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 04:53:20.756712 sudo[1687]: pam_unix(sudo:session): session closed for user root May 14 04:53:20.757325 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 04:53:20.757929 sshd[1684]: Connection closed by 10.0.0.1 port 56464 May 14 04:53:20.759227 sshd-session[1681]: pam_unix(sshd:session): session closed for user core May 14 04:53:20.761123 (kubelet)[1719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 04:53:20.763365 systemd[1]: sshd@7-10.0.0.69:22-10.0.0.1:56464.service: Deactivated successfully. May 14 04:53:20.764582 systemd[1]: session-8.scope: Deactivated successfully. May 14 04:53:20.765831 systemd-logind[1481]: Session 8 logged out. Waiting for processes to exit. May 14 04:53:20.767417 systemd[1]: Started sshd@8-10.0.0.69:22-10.0.0.1:56468.service - OpenSSH per-connection server daemon (10.0.0.1:56468). May 14 04:53:20.768560 systemd-logind[1481]: Removed session 8. May 14 04:53:20.801395 kubelet[1719]: E0514 04:53:20.801340 1719 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 04:53:20.804496 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 04:53:20.804624 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 04:53:20.806247 systemd[1]: kubelet.service: Consumed 134ms CPU time, 102.1M memory peak. May 14 04:53:20.818823 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 56468 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:53:20.820032 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:53:20.823705 systemd-logind[1481]: New session 9 of user core. May 14 04:53:20.833347 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 04:53:20.882491 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 04:53:20.882729 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 04:53:21.247374 systemd[1]: Starting docker.service - Docker Application Container Engine... 
May 14 04:53:21.270445 (dockerd)[1758]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 04:53:21.547400 dockerd[1758]: time="2025-05-14T04:53:21.547267054Z" level=info msg="Starting up" May 14 04:53:21.548476 dockerd[1758]: time="2025-05-14T04:53:21.548441127Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 14 04:53:21.623611 dockerd[1758]: time="2025-05-14T04:53:21.623565671Z" level=info msg="Loading containers: start." May 14 04:53:21.634197 kernel: Initializing XFRM netlink socket May 14 04:53:21.837141 systemd-networkd[1409]: docker0: Link UP May 14 04:53:21.840281 dockerd[1758]: time="2025-05-14T04:53:21.840236739Z" level=info msg="Loading containers: done." May 14 04:53:21.856315 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3413301557-merged.mount: Deactivated successfully. May 14 04:53:21.858841 dockerd[1758]: time="2025-05-14T04:53:21.858512101Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 04:53:21.858841 dockerd[1758]: time="2025-05-14T04:53:21.858590814Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 14 04:53:21.858841 dockerd[1758]: time="2025-05-14T04:53:21.858685252Z" level=info msg="Initializing buildkit" May 14 04:53:21.878929 dockerd[1758]: time="2025-05-14T04:53:21.878892618Z" level=info msg="Completed buildkit initialization" May 14 04:53:21.885871 dockerd[1758]: time="2025-05-14T04:53:21.885831048Z" level=info msg="Daemon has completed initialization" May 14 04:53:21.886080 dockerd[1758]: time="2025-05-14T04:53:21.886030491Z" level=info msg="API listen on /run/docker.sock" May 14 04:53:21.886090 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 04:53:22.702032 containerd[1504]: time="2025-05-14T04:53:22.701995444Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 14 04:53:23.325546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3915958820.mount: Deactivated successfully. 
May 14 04:53:24.485998 containerd[1504]: time="2025-05-14T04:53:24.485912153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:53:24.486589 containerd[1504]: time="2025-05-14T04:53:24.486559527Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233120" May 14 04:53:24.487543 containerd[1504]: time="2025-05-14T04:53:24.487496906Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:53:24.489853 containerd[1504]: time="2025-05-14T04:53:24.489807859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:53:24.490971 containerd[1504]: time="2025-05-14T04:53:24.490939848Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 1.788879964s" May 14 04:53:24.491041 containerd[1504]: time="2025-05-14T04:53:24.490978457Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" May 14 04:53:24.491715 containerd[1504]: time="2025-05-14T04:53:24.491647347Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 14 04:53:25.849576 containerd[1504]: time="2025-05-14T04:53:25.849347675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:53:25.850358 containerd[1504]: time="2025-05-14T04:53:25.850135759Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529573" May 14 04:53:25.851030 containerd[1504]: time="2025-05-14T04:53:25.851002536Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:53:25.853444 containerd[1504]: time="2025-05-14T04:53:25.853419116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:53:25.854488 containerd[1504]: time="2025-05-14T04:53:25.854425351Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.362748611s" May 14 04:53:25.854488 containerd[1504]: time="2025-05-14T04:53:25.854454770Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" May 14 04:53:25.854940 
containerd[1504]: time="2025-05-14T04:53:25.854921535Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 14 04:53:27.043567 containerd[1504]: time="2025-05-14T04:53:27.043360680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:53:27.044394 containerd[1504]: time="2025-05-14T04:53:27.044091086Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482175" May 14 04:53:27.045040 containerd[1504]: time="2025-05-14T04:53:27.045011460Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:53:27.047599 containerd[1504]: time="2025-05-14T04:53:27.047565792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:53:27.048580 containerd[1504]: time="2025-05-14T04:53:27.048546504Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.193597005s" May 14 04:53:27.048632 containerd[1504]: time="2025-05-14T04:53:27.048583679Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" May 14 04:53:27.049050 containerd[1504]: time="2025-05-14T04:53:27.049015804Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 14 04:53:28.020190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1692053761.mount: Deactivated successfully. 
May 14 04:53:28.406570 containerd[1504]: time="2025-05-14T04:53:28.406525137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:53:28.407386 containerd[1504]: time="2025-05-14T04:53:28.407182180Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370353" May 14 04:53:28.407976 containerd[1504]: time="2025-05-14T04:53:28.407940304Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:53:28.409786 containerd[1504]: time="2025-05-14T04:53:28.409749641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:53:28.410526 containerd[1504]: time="2025-05-14T04:53:28.410456964Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.361410372s" May 14 04:53:28.410526 containerd[1504]: time="2025-05-14T04:53:28.410486457Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 14 04:53:28.411181 containerd[1504]: time="2025-05-14T04:53:28.411001009Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 14 04:53:28.976685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1415070075.mount: Deactivated successfully. 
May 14 04:53:29.877752 containerd[1504]: time="2025-05-14T04:53:29.877703182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:53:29.878342 containerd[1504]: time="2025-05-14T04:53:29.878313556Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" May 14 04:53:29.879233 containerd[1504]: time="2025-05-14T04:53:29.879207558Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:53:29.882098 containerd[1504]: time="2025-05-14T04:53:29.882069140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:53:29.883326 containerd[1504]: time="2025-05-14T04:53:29.883234655Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.472202111s" May 14 04:53:29.883326 containerd[1504]: time="2025-05-14T04:53:29.883274004Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 14 04:53:29.884036 containerd[1504]: time="2025-05-14T04:53:29.883821123Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 14 04:53:30.342274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2560362253.mount: Deactivated successfully. 
May 14 04:53:30.345426 containerd[1504]: time="2025-05-14T04:53:30.345390567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 04:53:30.346025 containerd[1504]: time="2025-05-14T04:53:30.345987858Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 14 04:53:30.346590 containerd[1504]: time="2025-05-14T04:53:30.346566905Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 04:53:30.348412 containerd[1504]: time="2025-05-14T04:53:30.348373373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 04:53:30.349224 containerd[1504]: time="2025-05-14T04:53:30.349186709Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 465.336709ms" May 14 04:53:30.349273 containerd[1504]: time="2025-05-14T04:53:30.349221233Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 14 04:53:30.349797 containerd[1504]: time="2025-05-14T04:53:30.349769244Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 14 04:53:30.842784 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 04:53:30.844097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 04:53:30.850529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount51595499.mount: Deactivated successfully. May 14 04:53:30.961379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 04:53:30.964240 (kubelet)[2119]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 04:53:31.062297 kubelet[2119]: E0514 04:53:31.062227 2119 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 04:53:31.064851 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 04:53:31.064976 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 04:53:31.066274 systemd[1]: kubelet.service: Consumed 134ms CPU time, 103.3M memory peak. 
May 14 04:53:32.931638 containerd[1504]: time="2025-05-14T04:53:32.931589650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:53:32.932215 containerd[1504]: time="2025-05-14T04:53:32.932182513Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" May 14 04:53:32.932801 containerd[1504]: time="2025-05-14T04:53:32.932755980Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:53:32.935678 containerd[1504]: time="2025-05-14T04:53:32.935630330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:53:32.936683 containerd[1504]: time="2025-05-14T04:53:32.936641933Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.586846709s" May 14 04:53:32.936683 containerd[1504]: time="2025-05-14T04:53:32.936675755Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 14 04:53:39.412239 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 04:53:39.412369 systemd[1]: kubelet.service: Consumed 134ms CPU time, 103.3M memory peak. May 14 04:53:39.414196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 04:53:39.432504 systemd[1]: Reload requested from client PID 2200 ('systemctl') (unit session-9.scope)... May 14 04:53:39.432519 systemd[1]: Reloading... May 14 04:53:39.497184 zram_generator::config[2240]: No configuration found. May 14 04:53:39.597656 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 04:53:39.679972 systemd[1]: Reloading finished in 247 ms. May 14 04:53:39.717620 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 04:53:39.720019 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 04:53:39.720881 systemd[1]: kubelet.service: Deactivated successfully. May 14 04:53:39.722218 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 04:53:39.722256 systemd[1]: kubelet.service: Consumed 84ms CPU time, 90.3M memory peak. May 14 04:53:39.724363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 04:53:39.831855 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 04:53:39.834967 (kubelet)[2290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 04:53:39.869610 kubelet[2290]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 04:53:39.869610 kubelet[2290]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 14 04:53:39.869610 kubelet[2290]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 04:53:39.869932 kubelet[2290]: I0514 04:53:39.869660 2290 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 04:53:40.648986 kubelet[2290]: I0514 04:53:40.648942 2290 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 14 04:53:40.648986 kubelet[2290]: I0514 04:53:40.648973 2290 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 04:53:40.649273 kubelet[2290]: I0514 04:53:40.649249 2290 server.go:954] "Client rotation is on, will bootstrap in background" May 14 04:53:40.684865 kubelet[2290]: E0514 04:53:40.684804 2290 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.69:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" May 14 04:53:40.685248 kubelet[2290]: I0514 04:53:40.685217 2290 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 04:53:40.695356 kubelet[2290]: I0514 04:53:40.695310 2290 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 04:53:40.698447 kubelet[2290]: I0514 04:53:40.698424 2290 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 04:53:40.699563 kubelet[2290]: I0514 04:53:40.699520 2290 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 04:53:40.699730 kubelet[2290]: I0514 04:53:40.699560 2290 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 04:53:40.699821 kubelet[2290]: I0514 04:53:40.699803 2290 topology_manager.go:138] "Creating topology manager with none policy" May 14 04:53:40.699821 kubelet[2290]: I0514 04:53:40.699813 2290 container_manager_linux.go:304] "Creating device plugin manager" May 14 04:53:40.700010 kubelet[2290]: I0514 04:53:40.699987 2290 state_mem.go:36] "Initialized new in-memory state store" May 14 04:53:40.704056 kubelet[2290]: I0514 04:53:40.704028 2290 kubelet.go:446] "Attempting to sync node with API server" May 14 04:53:40.704056 kubelet[2290]: I0514 04:53:40.704054 2290 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 04:53:40.704211 kubelet[2290]: I0514 04:53:40.704144 2290 kubelet.go:352] "Adding apiserver pod source" May 14 04:53:40.704211 kubelet[2290]: I0514 04:53:40.704173 2290 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 04:53:40.709048 kubelet[2290]: W0514 04:53:40.709004 2290 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.69:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused May 14 04:53:40.709192 kubelet[2290]: E0514 04:53:40.709147 2290 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.69:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" May 14 04:53:40.709689 kubelet[2290]: W0514 04:53:40.709655 2290 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused May 14 04:53:40.709743 kubelet[2290]: E0514 04:53:40.709701 2290 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" May 14 04:53:40.712660 kubelet[2290]: I0514 04:53:40.712635 2290 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 04:53:40.713238 kubelet[2290]: I0514 04:53:40.713225 2290 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 04:53:40.713346 kubelet[2290]: W0514 04:53:40.713335 2290 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 04:53:40.714211 kubelet[2290]: I0514 04:53:40.714197 2290 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 14 04:53:40.714251 kubelet[2290]: I0514 04:53:40.714227 2290 server.go:1287] "Started kubelet" May 14 04:53:40.714820 kubelet[2290]: I0514 04:53:40.714744 2290 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 14 04:53:40.715405 kubelet[2290]: I0514 04:53:40.715382 2290 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 04:53:40.716241 kubelet[2290]: I0514 04:53:40.715596 2290 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 04:53:40.716241 kubelet[2290]: I0514 04:53:40.715651 2290 server.go:490] "Adding debug handlers to kubelet server" May 14 04:53:40.716241 kubelet[2290]: I0514 04:53:40.715856 2290 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 04:53:40.716832 kubelet[2290]: I0514 04:53:40.716799 2290 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 04:53:40.717756 kubelet[2290]: E0514 04:53:40.717732 2290 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 04:53:40.717828 kubelet[2290]: I0514 04:53:40.717762 2290 volume_manager.go:297] "Starting Kubelet Volume Manager" May 14 04:53:40.718217 kubelet[2290]: E0514 04:53:40.717954 2290 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.69:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.69:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f4bb4e2cb4568 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 04:53:40.714210664 +0000 UTC m=+0.876273911,LastTimestamp:2025-05-14 04:53:40.714210664 +0000 UTC m=+0.876273911,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 04:53:40.718293 kubelet[2290]: 
I0514 04:53:40.718283 2290 factory.go:221] Registration of the systemd container factory successfully May 14 04:53:40.718361 kubelet[2290]: I0514 04:53:40.718325 2290 reconciler.go:26] "Reconciler: start to sync state" May 14 04:53:40.718361 kubelet[2290]: I0514 04:53:40.718358 2290 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 04:53:40.718470 kubelet[2290]: I0514 04:53:40.718285 2290 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 04:53:40.718535 kubelet[2290]: E0514 04:53:40.718499 2290 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="200ms" May 14 04:53:40.719305 kubelet[2290]: W0514 04:53:40.719111 2290 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused May 14 04:53:40.719305 kubelet[2290]: E0514 04:53:40.719212 2290 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" May 14 04:53:40.719498 kubelet[2290]: I0514 04:53:40.719476 2290 factory.go:221] Registration of the containerd container factory successfully May 14 04:53:40.720198 kubelet[2290]: E0514 04:53:40.719626 2290 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 04:53:40.731222 kubelet[2290]: I0514 04:53:40.731204 2290 cpu_manager.go:221] "Starting CPU manager" policy="none" May 14 04:53:40.731222 kubelet[2290]: I0514 04:53:40.731216 2290 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 14 04:53:40.731319 kubelet[2290]: I0514 04:53:40.731233 2290 state_mem.go:36] "Initialized new in-memory state store" May 14 04:53:40.732589 kubelet[2290]: I0514 04:53:40.732534 2290 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 04:53:40.733523 kubelet[2290]: I0514 04:53:40.733486 2290 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 04:53:40.733523 kubelet[2290]: I0514 04:53:40.733509 2290 status_manager.go:227] "Starting to sync pod status with apiserver" May 14 04:53:40.733523 kubelet[2290]: I0514 04:53:40.733526 2290 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 14 04:53:40.733623 kubelet[2290]: I0514 04:53:40.733533 2290 kubelet.go:2388] "Starting kubelet main sync loop" May 14 04:53:40.733623 kubelet[2290]: E0514 04:53:40.733571 2290 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 04:53:40.733940 kubelet[2290]: W0514 04:53:40.733917 2290 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused May 14 04:53:40.733980 kubelet[2290]: E0514 04:53:40.733948 2290 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" May 14 04:53:40.800590 kubelet[2290]: I0514 04:53:40.800553 2290 policy_none.go:49] "None policy: Start" May 14 04:53:40.800590 kubelet[2290]: I0514 04:53:40.800583 2290 memory_manager.go:186] "Starting memorymanager" policy="None" May 14 04:53:40.800590 kubelet[2290]: I0514 04:53:40.800598 2290 state_mem.go:35] "Initializing new in-memory state store" May 14 04:53:40.805776 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 04:53:40.818223 kubelet[2290]: E0514 04:53:40.818195 2290 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 04:53:40.818677 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 14 04:53:40.834467 kubelet[2290]: E0514 04:53:40.834449 2290 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 04:53:40.839392 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 04:53:40.840645 kubelet[2290]: I0514 04:53:40.840405 2290 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 04:53:40.840830 kubelet[2290]: I0514 04:53:40.840811 2290 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 04:53:40.840913 kubelet[2290]: I0514 04:53:40.840832 2290 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 04:53:40.841145 kubelet[2290]: I0514 04:53:40.841128 2290 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 04:53:40.841848 kubelet[2290]: E0514 04:53:40.841824 2290 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 14 04:53:40.841888 kubelet[2290]: E0514 04:53:40.841880 2290 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 14 04:53:40.919111 kubelet[2290]: E0514 04:53:40.919077 2290 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="400ms" May 14 04:53:40.942127 kubelet[2290]: I0514 04:53:40.942105 2290 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 04:53:40.942488 kubelet[2290]: E0514 04:53:40.942453 2290 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost" May 14 04:53:41.041959 systemd[1]: Created slice kubepods-burstable-pod2f40a913be318ad79ac11d82b3e3f71e.slice - libcontainer container kubepods-burstable-pod2f40a913be318ad79ac11d82b3e3f71e.slice. May 14 04:53:41.066613 kubelet[2290]: E0514 04:53:41.066407 2290 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 04:53:41.069741 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 14 04:53:41.071973 kubelet[2290]: E0514 04:53:41.071798 2290 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 04:53:41.073334 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. 
May 14 04:53:41.074878 kubelet[2290]: E0514 04:53:41.074854 2290 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 04:53:41.120208 kubelet[2290]: I0514 04:53:41.120151 2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f40a913be318ad79ac11d82b3e3f71e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2f40a913be318ad79ac11d82b3e3f71e\") " pod="kube-system/kube-apiserver-localhost" May 14 04:53:41.120208 kubelet[2290]: I0514 04:53:41.120204 2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 04:53:41.120373 kubelet[2290]: I0514 04:53:41.120237 2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 04:53:41.120373 kubelet[2290]: I0514 04:53:41.120255 2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 04:53:41.120373 kubelet[2290]: I0514 04:53:41.120273 2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 14 04:53:41.120373 kubelet[2290]: I0514 04:53:41.120289 2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f40a913be318ad79ac11d82b3e3f71e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2f40a913be318ad79ac11d82b3e3f71e\") " pod="kube-system/kube-apiserver-localhost" May 14 04:53:41.120373 kubelet[2290]: I0514 04:53:41.120304 2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f40a913be318ad79ac11d82b3e3f71e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2f40a913be318ad79ac11d82b3e3f71e\") " pod="kube-system/kube-apiserver-localhost" May 14 04:53:41.120481 kubelet[2290]: I0514 04:53:41.120319 2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 04:53:41.120481 kubelet[2290]: I0514 04:53:41.120358 2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 04:53:41.144291 kubelet[2290]: I0514 04:53:41.144264 2290 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 04:53:41.144695 kubelet[2290]: E0514 04:53:41.144657 2290 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost" May 14 04:53:41.320353 kubelet[2290]: E0514 04:53:41.320236 2290 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="800ms" May 14 04:53:41.367827 kubelet[2290]: E0514 04:53:41.367792 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:41.368584 containerd[1504]: time="2025-05-14T04:53:41.368480211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2f40a913be318ad79ac11d82b3e3f71e,Namespace:kube-system,Attempt:0,}" May 14 04:53:41.372104 kubelet[2290]: E0514 04:53:41.372081 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:41.372583 containerd[1504]: time="2025-05-14T04:53:41.372551969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 14 04:53:41.376104 kubelet[2290]: E0514 04:53:41.376079 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:41.378511 containerd[1504]: time="2025-05-14T04:53:41.378440845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 14 04:53:41.389912 containerd[1504]: time="2025-05-14T04:53:41.389867112Z" level=info msg="connecting to shim 9276f102d42ae4da6c3c0f6f675cbfbeeab60babc12c421ab8a7577ab6800083" address="unix:///run/containerd/s/978ad5791efa844e59e4b4fbc8062c46d1b294c602e3391a6b2a0dcfb7210afa" namespace=k8s.io protocol=ttrpc version=3 May 14 04:53:41.397130 containerd[1504]: time="2025-05-14T04:53:41.397096105Z" level=info msg="connecting to shim 99a8df8efc7394769e3dbd87bc3b37fcb59330c5df14ede3042ac49ec0523fca" address="unix:///run/containerd/s/5ee4e846f8aeacf274e80539f092fcefaa0988f383f5efa7c1c6bd418d4ff72f" namespace=k8s.io protocol=ttrpc version=3 May 14 04:53:41.406982 containerd[1504]: time="2025-05-14T04:53:41.406922416Z" level=info msg="connecting to shim 39edca5dda0d0d5c5a32d069a8f54e75e106a0b3126c68c7ba1f0a5ba0c23a7b" address="unix:///run/containerd/s/4201b93e3240e745ff8df99711ae4b09d928821a85e6d41de9c20ae3be3e0a08" namespace=k8s.io protocol=ttrpc version=3 May 14 04:53:41.418313 systemd[1]: Started cri-containerd-9276f102d42ae4da6c3c0f6f675cbfbeeab60babc12c421ab8a7577ab6800083.scope - libcontainer container 
9276f102d42ae4da6c3c0f6f675cbfbeeab60babc12c421ab8a7577ab6800083. May 14 04:53:41.422410 systemd[1]: Started cri-containerd-99a8df8efc7394769e3dbd87bc3b37fcb59330c5df14ede3042ac49ec0523fca.scope - libcontainer container 99a8df8efc7394769e3dbd87bc3b37fcb59330c5df14ede3042ac49ec0523fca. May 14 04:53:41.426078 systemd[1]: Started cri-containerd-39edca5dda0d0d5c5a32d069a8f54e75e106a0b3126c68c7ba1f0a5ba0c23a7b.scope - libcontainer container 39edca5dda0d0d5c5a32d069a8f54e75e106a0b3126c68c7ba1f0a5ba0c23a7b. May 14 04:53:41.457313 containerd[1504]: time="2025-05-14T04:53:41.457261763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2f40a913be318ad79ac11d82b3e3f71e,Namespace:kube-system,Attempt:0,} returns sandbox id \"9276f102d42ae4da6c3c0f6f675cbfbeeab60babc12c421ab8a7577ab6800083\"" May 14 04:53:41.458891 kubelet[2290]: E0514 04:53:41.458861 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:41.460107 containerd[1504]: time="2025-05-14T04:53:41.460046814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"99a8df8efc7394769e3dbd87bc3b37fcb59330c5df14ede3042ac49ec0523fca\"" May 14 04:53:41.460886 containerd[1504]: time="2025-05-14T04:53:41.460821529Z" level=info msg="CreateContainer within sandbox \"9276f102d42ae4da6c3c0f6f675cbfbeeab60babc12c421ab8a7577ab6800083\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 04:53:41.461022 kubelet[2290]: E0514 04:53:41.460880 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:41.463031 containerd[1504]: time="2025-05-14T04:53:41.462989811Z" level=info msg="CreateContainer within sandbox \"99a8df8efc7394769e3dbd87bc3b37fcb59330c5df14ede3042ac49ec0523fca\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 04:53:41.463820 containerd[1504]: time="2025-05-14T04:53:41.463792152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"39edca5dda0d0d5c5a32d069a8f54e75e106a0b3126c68c7ba1f0a5ba0c23a7b\"" May 14 04:53:41.464806 kubelet[2290]: E0514 04:53:41.464789 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:41.466596 containerd[1504]: time="2025-05-14T04:53:41.466571397Z" level=info msg="CreateContainer within sandbox \"39edca5dda0d0d5c5a32d069a8f54e75e106a0b3126c68c7ba1f0a5ba0c23a7b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 04:53:41.468966 containerd[1504]: time="2025-05-14T04:53:41.468896503Z" level=info msg="Container f2e35bfd764e1b94865bf4066c7a936ef9372a722348972b415922a487749384: CDI devices from CRI Config.CDIDevices: []" May 14 04:53:41.471748 containerd[1504]: time="2025-05-14T04:53:41.471720510Z" level=info msg="Container ea2eab76ea95a626dabb1a50777021c068bd48bf36403faa52e4849e6754d09b: CDI devices from CRI Config.CDIDevices: []" May 14 04:53:41.475688 containerd[1504]: time="2025-05-14T04:53:41.475659666Z" level=info msg="CreateContainer within sandbox 
\"9276f102d42ae4da6c3c0f6f675cbfbeeab60babc12c421ab8a7577ab6800083\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f2e35bfd764e1b94865bf4066c7a936ef9372a722348972b415922a487749384\"" May 14 04:53:41.476132 containerd[1504]: time="2025-05-14T04:53:41.476104637Z" level=info msg="Container 4e97c9911a76137ac96f52d42d23a813d1c0cd2d96fcccc45fa7ab137922d1da: CDI devices from CRI Config.CDIDevices: []" May 14 04:53:41.476213 containerd[1504]: time="2025-05-14T04:53:41.476188554Z" level=info msg="StartContainer for \"f2e35bfd764e1b94865bf4066c7a936ef9372a722348972b415922a487749384\"" May 14 04:53:41.477579 containerd[1504]: time="2025-05-14T04:53:41.477553534Z" level=info msg="connecting to shim f2e35bfd764e1b94865bf4066c7a936ef9372a722348972b415922a487749384" address="unix:///run/containerd/s/978ad5791efa844e59e4b4fbc8062c46d1b294c602e3391a6b2a0dcfb7210afa" protocol=ttrpc version=3 May 14 04:53:41.479666 containerd[1504]: time="2025-05-14T04:53:41.479606229Z" level=info msg="CreateContainer within sandbox \"99a8df8efc7394769e3dbd87bc3b37fcb59330c5df14ede3042ac49ec0523fca\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ea2eab76ea95a626dabb1a50777021c068bd48bf36403faa52e4849e6754d09b\"" May 14 04:53:41.480229 containerd[1504]: time="2025-05-14T04:53:41.480004477Z" level=info msg="StartContainer for \"ea2eab76ea95a626dabb1a50777021c068bd48bf36403faa52e4849e6754d09b\"" May 14 04:53:41.481100 containerd[1504]: time="2025-05-14T04:53:41.481069380Z" level=info msg="connecting to shim ea2eab76ea95a626dabb1a50777021c068bd48bf36403faa52e4849e6754d09b" address="unix:///run/containerd/s/5ee4e846f8aeacf274e80539f092fcefaa0988f383f5efa7c1c6bd418d4ff72f" protocol=ttrpc version=3 May 14 04:53:41.481339 containerd[1504]: time="2025-05-14T04:53:41.481093482Z" level=info msg="CreateContainer within sandbox \"39edca5dda0d0d5c5a32d069a8f54e75e106a0b3126c68c7ba1f0a5ba0c23a7b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4e97c9911a76137ac96f52d42d23a813d1c0cd2d96fcccc45fa7ab137922d1da\"" May 14 04:53:41.481715 containerd[1504]: time="2025-05-14T04:53:41.481692075Z" level=info msg="StartContainer for \"4e97c9911a76137ac96f52d42d23a813d1c0cd2d96fcccc45fa7ab137922d1da\"" May 14 04:53:41.482885 containerd[1504]: time="2025-05-14T04:53:41.482851945Z" level=info msg="connecting to shim 4e97c9911a76137ac96f52d42d23a813d1c0cd2d96fcccc45fa7ab137922d1da" address="unix:///run/containerd/s/4201b93e3240e745ff8df99711ae4b09d928821a85e6d41de9c20ae3be3e0a08" protocol=ttrpc version=3 May 14 04:53:41.495321 systemd[1]: Started cri-containerd-f2e35bfd764e1b94865bf4066c7a936ef9372a722348972b415922a487749384.scope - libcontainer container f2e35bfd764e1b94865bf4066c7a936ef9372a722348972b415922a487749384. May 14 04:53:41.497910 systemd[1]: Started cri-containerd-ea2eab76ea95a626dabb1a50777021c068bd48bf36403faa52e4849e6754d09b.scope - libcontainer container ea2eab76ea95a626dabb1a50777021c068bd48bf36403faa52e4849e6754d09b. May 14 04:53:41.505929 systemd[1]: Started cri-containerd-4e97c9911a76137ac96f52d42d23a813d1c0cd2d96fcccc45fa7ab137922d1da.scope - libcontainer container 4e97c9911a76137ac96f52d42d23a813d1c0cd2d96fcccc45fa7ab137922d1da. 
May 14 04:53:41.541897 containerd[1504]: time="2025-05-14T04:53:41.540910018Z" level=info msg="StartContainer for \"ea2eab76ea95a626dabb1a50777021c068bd48bf36403faa52e4849e6754d09b\" returns successfully" May 14 04:53:41.545306 containerd[1504]: time="2025-05-14T04:53:41.542513978Z" level=info msg="StartContainer for \"f2e35bfd764e1b94865bf4066c7a936ef9372a722348972b415922a487749384\" returns successfully" May 14 04:53:41.545356 kubelet[2290]: W0514 04:53:41.545004 2290 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.69:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused May 14 04:53:41.545356 kubelet[2290]: E0514 04:53:41.545262 2290 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.69:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" May 14 04:53:41.547360 kubelet[2290]: I0514 04:53:41.546689 2290 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 04:53:41.547360 kubelet[2290]: E0514 04:53:41.547267 2290 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost" May 14 04:53:41.554663 containerd[1504]: time="2025-05-14T04:53:41.554585201Z" level=info msg="StartContainer for \"4e97c9911a76137ac96f52d42d23a813d1c0cd2d96fcccc45fa7ab137922d1da\" returns successfully" May 14 04:53:41.745744 kubelet[2290]: E0514 04:53:41.745714 2290 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 04:53:41.746044 kubelet[2290]: E0514 04:53:41.746027 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:41.746962 kubelet[2290]: E0514 04:53:41.746942 2290 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 04:53:41.747244 kubelet[2290]: E0514 04:53:41.747210 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:41.749421 kubelet[2290]: E0514 04:53:41.749262 2290 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 04:53:41.749502 kubelet[2290]: E0514 04:53:41.749490 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:42.351378 kubelet[2290]: I0514 04:53:42.351345 2290 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 04:53:42.754871 kubelet[2290]: E0514 04:53:42.754830 2290 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 04:53:42.755006 kubelet[2290]: E0514 04:53:42.754954 2290 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:42.755196 kubelet[2290]: E0514 04:53:42.755155 2290 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 04:53:42.755272 kubelet[2290]: E0514 04:53:42.755256 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:43.068342 kubelet[2290]: E0514 04:53:43.068072 2290 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 14 04:53:43.123629 kubelet[2290]: E0514 04:53:43.123537 2290 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183f4bb4e2cb4568 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 04:53:40.714210664 +0000 UTC m=+0.876273911,LastTimestamp:2025-05-14 04:53:40.714210664 +0000 UTC m=+0.876273911,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 04:53:43.168811 kubelet[2290]: I0514 04:53:43.168773 2290 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 14 04:53:43.218870 kubelet[2290]: I0514 04:53:43.218833 2290 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 14 04:53:43.226402 kubelet[2290]: E0514 04:53:43.226374 2290 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 14 04:53:43.226402 kubelet[2290]: I0514 04:53:43.226398 2290 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 14 04:53:43.228362 kubelet[2290]: E0514 04:53:43.228336 2290 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 14 04:53:43.228362 kubelet[2290]: I0514 04:53:43.228358 2290 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 14 04:53:43.230064 kubelet[2290]: E0514 04:53:43.230036 2290 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 14 04:53:43.705939 kubelet[2290]: I0514 04:53:43.705856 2290 apiserver.go:52] "Watching apiserver" May 14 04:53:43.719420 kubelet[2290]: I0514 04:53:43.719390 2290 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 04:53:45.293493 systemd[1]: Reload requested from client PID 2565 ('systemctl') (unit session-9.scope)... May 14 04:53:45.293509 systemd[1]: Reloading... May 14 04:53:45.363193 zram_generator::config[2608]: No configuration found. 
May 14 04:53:45.428921 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 04:53:45.527947 systemd[1]: Reloading finished in 234 ms. May 14 04:53:45.549286 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 04:53:45.562522 systemd[1]: kubelet.service: Deactivated successfully. May 14 04:53:45.562720 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 04:53:45.562758 systemd[1]: kubelet.service: Consumed 1.257s CPU time, 124.3M memory peak. May 14 04:53:45.565326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 04:53:45.710594 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 04:53:45.713780 (kubelet)[2650]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 04:53:45.753982 kubelet[2650]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 04:53:45.753982 kubelet[2650]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 14 04:53:45.753982 kubelet[2650]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 04:53:45.754348 kubelet[2650]: I0514 04:53:45.753966 2650 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 04:53:45.763763 kubelet[2650]: I0514 04:53:45.763693 2650 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 14 04:53:45.763763 kubelet[2650]: I0514 04:53:45.763727 2650 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 04:53:45.764061 kubelet[2650]: I0514 04:53:45.764023 2650 server.go:954] "Client rotation is on, will bootstrap in background" May 14 04:53:45.765624 kubelet[2650]: I0514 04:53:45.765397 2650 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 04:53:45.767841 kubelet[2650]: I0514 04:53:45.767710 2650 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 04:53:45.774194 kubelet[2650]: I0514 04:53:45.774156 2650 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 04:53:45.778374 kubelet[2650]: I0514 04:53:45.778350 2650 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 04:53:45.778839 kubelet[2650]: I0514 04:53:45.778794 2650 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 04:53:45.779019 kubelet[2650]: I0514 04:53:45.778828 2650 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 04:53:45.779093 kubelet[2650]: I0514 04:53:45.779030 2650 topology_manager.go:138] "Creating topology manager with none policy" May 14 04:53:45.779093 kubelet[2650]: I0514 04:53:45.779040 2650 container_manager_linux.go:304] "Creating device plugin manager" May 14 04:53:45.779093 kubelet[2650]: I0514 04:53:45.779084 2650 state_mem.go:36] "Initialized new in-memory state store" May 14 04:53:45.779236 kubelet[2650]: I0514 04:53:45.779223 2650 kubelet.go:446] "Attempting to sync node with API server" May 14 04:53:45.779272 kubelet[2650]: I0514 04:53:45.779239 2650 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 04:53:45.779272 kubelet[2650]: I0514 04:53:45.779260 2650 kubelet.go:352] "Adding apiserver pod source" May 14 04:53:45.779312 kubelet[2650]: I0514 04:53:45.779273 2650 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 04:53:45.779768 kubelet[2650]: I0514 04:53:45.779743 2650 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 04:53:45.782476 kubelet[2650]: I0514 04:53:45.782444 2650 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 04:53:45.782861 kubelet[2650]: I0514 04:53:45.782842 2650 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 14 04:53:45.782890 kubelet[2650]: I0514 04:53:45.782875 2650 server.go:1287] "Started kubelet" May 14 04:53:45.784466 kubelet[2650]: I0514 04:53:45.784446 2650 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 04:53:45.786685 kubelet[2650]: I0514 04:53:45.786648 2650 server.go:169] "Starting to 
listen" address="0.0.0.0" port=10250 May 14 04:53:45.787506 kubelet[2650]: I0514 04:53:45.787483 2650 server.go:490] "Adding debug handlers to kubelet server" May 14 04:53:45.789551 kubelet[2650]: I0514 04:53:45.789504 2650 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 04:53:45.789702 kubelet[2650]: I0514 04:53:45.789686 2650 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 04:53:45.790788 kubelet[2650]: I0514 04:53:45.790768 2650 volume_manager.go:297] "Starting Kubelet Volume Manager" May 14 04:53:45.791066 kubelet[2650]: E0514 04:53:45.791043 2650 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 04:53:45.792286 kubelet[2650]: I0514 04:53:45.792270 2650 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 04:53:45.792455 kubelet[2650]: I0514 04:53:45.792445 2650 reconciler.go:26] "Reconciler: start to sync state" May 14 04:53:45.792607 kubelet[2650]: I0514 04:53:45.792573 2650 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 04:53:45.795746 kubelet[2650]: I0514 04:53:45.795718 2650 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 04:53:45.796242 kubelet[2650]: I0514 04:53:45.796088 2650 factory.go:221] Registration of the systemd container factory successfully May 14 04:53:45.796242 kubelet[2650]: I0514 04:53:45.796193 2650 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 04:53:45.799079 kubelet[2650]: I0514 04:53:45.799047 2650 factory.go:221] Registration of the containerd container factory successfully May 14 04:53:45.799820 kubelet[2650]: I0514 04:53:45.799752 2650 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 04:53:45.799904 kubelet[2650]: I0514 04:53:45.799894 2650 status_manager.go:227] "Starting to sync pod status with apiserver" May 14 04:53:45.799968 kubelet[2650]: I0514 04:53:45.799960 2650 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 14 04:53:45.800488 kubelet[2650]: I0514 04:53:45.800125 2650 kubelet.go:2388] "Starting kubelet main sync loop" May 14 04:53:45.800488 kubelet[2650]: E0514 04:53:45.800274 2650 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 04:53:45.800660 kubelet[2650]: E0514 04:53:45.800642 2650 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 04:53:45.830132 kubelet[2650]: I0514 04:53:45.830109 2650 cpu_manager.go:221] "Starting CPU manager" policy="none" May 14 04:53:45.830132 kubelet[2650]: I0514 04:53:45.830130 2650 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 14 04:53:45.830248 kubelet[2650]: I0514 04:53:45.830149 2650 state_mem.go:36] "Initialized new in-memory state store" May 14 04:53:45.830312 kubelet[2650]: I0514 04:53:45.830295 2650 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 04:53:45.830336 kubelet[2650]: I0514 04:53:45.830312 2650 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 04:53:45.830336 kubelet[2650]: I0514 04:53:45.830329 2650 policy_none.go:49] "None policy: Start" May 14 04:53:45.830374 kubelet[2650]: I0514 04:53:45.830337 2650 memory_manager.go:186] "Starting memorymanager" policy="None" May 14 04:53:45.830374 kubelet[2650]: I0514 04:53:45.830346 2650 state_mem.go:35] "Initializing new in-memory state store" May 14 04:53:45.830443 kubelet[2650]: I0514 04:53:45.830433 2650 state_mem.go:75] "Updated machine memory state" May 14 04:53:45.833860 kubelet[2650]: I0514 04:53:45.833842 2650 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 04:53:45.834250 kubelet[2650]: I0514 04:53:45.834219 2650 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 04:53:45.834342 kubelet[2650]: I0514 04:53:45.834315 2650 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 04:53:45.834534 kubelet[2650]: I0514 04:53:45.834515 2650 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 04:53:45.835720 kubelet[2650]: E0514 04:53:45.835606 2650 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 14 04:53:45.901136 kubelet[2650]: I0514 04:53:45.901104 2650 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 14 04:53:45.901136 kubelet[2650]: I0514 04:53:45.901120 2650 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 14 04:53:45.901431 kubelet[2650]: I0514 04:53:45.901405 2650 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 14 04:53:45.938354 kubelet[2650]: I0514 04:53:45.938334 2650 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 04:53:45.944862 kubelet[2650]: I0514 04:53:45.944837 2650 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 14 04:53:45.944929 kubelet[2650]: I0514 04:53:45.944911 2650 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 14 04:53:46.093804 kubelet[2650]: I0514 04:53:46.093718 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f40a913be318ad79ac11d82b3e3f71e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2f40a913be318ad79ac11d82b3e3f71e\") " pod="kube-system/kube-apiserver-localhost" May 14 04:53:46.093804 kubelet[2650]: I0514 04:53:46.093755 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f40a913be318ad79ac11d82b3e3f71e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2f40a913be318ad79ac11d82b3e3f71e\") " pod="kube-system/kube-apiserver-localhost" May 14 04:53:46.093804 kubelet[2650]: I0514 04:53:46.093789 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 04:53:46.093912 kubelet[2650]: I0514 04:53:46.093807 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 04:53:46.093912 kubelet[2650]: I0514 04:53:46.093823 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 04:53:46.093912 kubelet[2650]: I0514 04:53:46.093839 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f40a913be318ad79ac11d82b3e3f71e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2f40a913be318ad79ac11d82b3e3f71e\") " pod="kube-system/kube-apiserver-localhost" May 14 04:53:46.093912 kubelet[2650]: I0514 04:53:46.093881 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 04:53:46.093912 kubelet[2650]: I0514 04:53:46.093903 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 04:53:46.094010 kubelet[2650]: I0514 04:53:46.093917 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 14 04:53:46.211429 kubelet[2650]: E0514 04:53:46.211382 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:46.211429 kubelet[2650]: E0514 04:53:46.211407 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:46.211574 kubelet[2650]: E0514 04:53:46.211512 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:46.301105 sudo[2685]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 14 04:53:46.301672 sudo[2685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 14 04:53:46.740064 sudo[2685]: pam_unix(sudo:session): session closed for user root May 14 04:53:46.780943 kubelet[2650]: I0514 04:53:46.780554 2650 apiserver.go:52] "Watching apiserver" May 14 04:53:46.793072 kubelet[2650]: I0514 04:53:46.793045 2650 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 04:53:46.816013 kubelet[2650]: E0514 04:53:46.815982 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:46.816898 kubelet[2650]: I0514 04:53:46.816641 2650 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 14 04:53:46.816898 kubelet[2650]: E0514 04:53:46.816765 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:46.822256 kubelet[2650]: E0514 04:53:46.822224 2650 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 14 04:53:46.822375 kubelet[2650]: E0514 04:53:46.822361 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:46.852958 kubelet[2650]: I0514 04:53:46.852900 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.852882794 podStartE2EDuration="1.852882794s" podCreationTimestamp="2025-05-14 04:53:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 04:53:46.840105048 +0000 UTC m=+1.121364901" watchObservedRunningTime="2025-05-14 04:53:46.852882794 +0000 UTC m=+1.134142607" May 14 04:53:46.860787 kubelet[2650]: I0514 04:53:46.860142 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.860128924 podStartE2EDuration="1.860128924s" podCreationTimestamp="2025-05-14 04:53:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 04:53:46.853375537 +0000 UTC m=+1.134635390" watchObservedRunningTime="2025-05-14 04:53:46.860128924 +0000 UTC m=+1.141388777" May 14 04:53:46.861250 kubelet[2650]: I0514 04:53:46.860948 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.8604190059999999 podStartE2EDuration="1.860419006s" podCreationTimestamp="2025-05-14 04:53:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 04:53:46.860112552 +0000 UTC m=+1.141372405" watchObservedRunningTime="2025-05-14 04:53:46.860419006 +0000 UTC m=+1.141678859" May 14 04:53:47.817862 kubelet[2650]: E0514 04:53:47.817712 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:47.817862 kubelet[2650]: E0514 04:53:47.817805 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:48.126622 sudo[1737]: pam_unix(sudo:session): session closed for user root May 14 04:53:48.128200 sshd[1736]: Connection closed by 10.0.0.1 port 56468 May 14 04:53:48.128582 sshd-session[1730]: pam_unix(sshd:session): session closed for user core May 14 04:53:48.132268 systemd[1]: sshd@8-10.0.0.69:22-10.0.0.1:56468.service: Deactivated successfully. May 14 04:53:48.134434 systemd[1]: session-9.scope: Deactivated successfully. May 14 04:53:48.134716 systemd[1]: session-9.scope: Consumed 8.291s CPU time, 265.7M memory peak. May 14 04:53:48.135796 systemd-logind[1481]: Session 9 logged out. Waiting for processes to exit. May 14 04:53:48.136738 systemd-logind[1481]: Removed session 9. May 14 04:53:50.146329 kubelet[2650]: I0514 04:53:50.146296 2650 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 04:53:50.147220 containerd[1504]: time="2025-05-14T04:53:50.147104345Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 04:53:50.147794 kubelet[2650]: I0514 04:53:50.147313 2650 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 04:53:51.094601 systemd[1]: Created slice kubepods-besteffort-pod3116583e_a074_4349_bf30_de8399b59f4b.slice - libcontainer container kubepods-besteffort-pod3116583e_a074_4349_bf30_de8399b59f4b.slice. 
May 14 04:53:51.111920 systemd[1]: Created slice kubepods-burstable-pod79e9619e_5dda_4811_bd8c_4c40226ec37e.slice - libcontainer container kubepods-burstable-pod79e9619e_5dda_4811_bd8c_4c40226ec37e.slice. May 14 04:53:51.130469 kubelet[2650]: I0514 04:53:51.130332 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-bpf-maps\") pod \"cilium-rxsqt\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " pod="kube-system/cilium-rxsqt" May 14 04:53:51.130469 kubelet[2650]: I0514 04:53:51.130371 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-hostproc\") pod \"cilium-rxsqt\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " pod="kube-system/cilium-rxsqt" May 14 04:53:51.130469 kubelet[2650]: I0514 04:53:51.130389 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-xtables-lock\") pod \"cilium-rxsqt\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " pod="kube-system/cilium-rxsqt" May 14 04:53:51.130469 kubelet[2650]: I0514 04:53:51.130404 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/79e9619e-5dda-4811-bd8c-4c40226ec37e-clustermesh-secrets\") pod \"cilium-rxsqt\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " pod="kube-system/cilium-rxsqt" May 14 04:53:51.130469 kubelet[2650]: I0514 04:53:51.130421 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3116583e-a074-4349-bf30-de8399b59f4b-xtables-lock\") pod \"kube-proxy-6wzdx\" (UID: \"3116583e-a074-4349-bf30-de8399b59f4b\") " pod="kube-system/kube-proxy-6wzdx" May 14 04:53:51.130469 kubelet[2650]: I0514 04:53:51.130445 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3116583e-a074-4349-bf30-de8399b59f4b-lib-modules\") pod \"kube-proxy-6wzdx\" (UID: \"3116583e-a074-4349-bf30-de8399b59f4b\") " pod="kube-system/kube-proxy-6wzdx" May 14 04:53:51.130701 kubelet[2650]: I0514 04:53:51.130481 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbxxx\" (UniqueName: \"kubernetes.io/projected/3116583e-a074-4349-bf30-de8399b59f4b-kube-api-access-jbxxx\") pod \"kube-proxy-6wzdx\" (UID: \"3116583e-a074-4349-bf30-de8399b59f4b\") " pod="kube-system/kube-proxy-6wzdx" May 14 04:53:51.130701 kubelet[2650]: I0514 04:53:51.130514 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-cilium-run\") pod \"cilium-rxsqt\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " pod="kube-system/cilium-rxsqt" May 14 04:53:51.130701 kubelet[2650]: I0514 04:53:51.130532 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-cilium-cgroup\") pod \"cilium-rxsqt\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " 
pod="kube-system/cilium-rxsqt" May 14 04:53:51.130701 kubelet[2650]: I0514 04:53:51.130548 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79e9619e-5dda-4811-bd8c-4c40226ec37e-cilium-config-path\") pod \"cilium-rxsqt\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " pod="kube-system/cilium-rxsqt" May 14 04:53:51.130701 kubelet[2650]: I0514 04:53:51.130583 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3116583e-a074-4349-bf30-de8399b59f4b-kube-proxy\") pod \"kube-proxy-6wzdx\" (UID: \"3116583e-a074-4349-bf30-de8399b59f4b\") " pod="kube-system/kube-proxy-6wzdx" May 14 04:53:51.130804 kubelet[2650]: I0514 04:53:51.130598 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-host-proc-sys-net\") pod \"cilium-rxsqt\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " pod="kube-system/cilium-rxsqt" May 14 04:53:51.130804 kubelet[2650]: I0514 04:53:51.130614 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-lib-modules\") pod \"cilium-rxsqt\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " pod="kube-system/cilium-rxsqt" May 14 04:53:51.130804 kubelet[2650]: I0514 04:53:51.130630 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/79e9619e-5dda-4811-bd8c-4c40226ec37e-hubble-tls\") pod \"cilium-rxsqt\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " pod="kube-system/cilium-rxsqt" May 14 04:53:51.130804 kubelet[2650]: I0514 04:53:51.130648 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pss2l\" (UniqueName: \"kubernetes.io/projected/79e9619e-5dda-4811-bd8c-4c40226ec37e-kube-api-access-pss2l\") pod \"cilium-rxsqt\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " pod="kube-system/cilium-rxsqt" May 14 04:53:51.130804 kubelet[2650]: I0514 04:53:51.130667 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-etc-cni-netd\") pod \"cilium-rxsqt\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " pod="kube-system/cilium-rxsqt" May 14 04:53:51.130804 kubelet[2650]: I0514 04:53:51.130690 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-host-proc-sys-kernel\") pod \"cilium-rxsqt\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " pod="kube-system/cilium-rxsqt" May 14 04:53:51.130924 kubelet[2650]: I0514 04:53:51.130709 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-cni-path\") pod \"cilium-rxsqt\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " pod="kube-system/cilium-rxsqt" May 14 04:53:51.231871 kubelet[2650]: I0514 04:53:51.231725 2650 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsws7\" (UniqueName: \"kubernetes.io/projected/04f21fc0-710b-423a-9741-d0e509f46bc3-kube-api-access-wsws7\") pod \"cilium-operator-6c4d7847fc-8sqmd\" (UID: \"04f21fc0-710b-423a-9741-d0e509f46bc3\") " pod="kube-system/cilium-operator-6c4d7847fc-8sqmd" May 14 04:53:51.231871 kubelet[2650]: I0514 04:53:51.231776 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04f21fc0-710b-423a-9741-d0e509f46bc3-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-8sqmd\" (UID: \"04f21fc0-710b-423a-9741-d0e509f46bc3\") " pod="kube-system/cilium-operator-6c4d7847fc-8sqmd" May 14 04:53:51.233907 systemd[1]: Created slice kubepods-besteffort-pod04f21fc0_710b_423a_9741_d0e509f46bc3.slice - libcontainer container kubepods-besteffort-pod04f21fc0_710b_423a_9741_d0e509f46bc3.slice. May 14 04:53:51.369657 kubelet[2650]: E0514 04:53:51.369550 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:51.406213 kubelet[2650]: E0514 04:53:51.405864 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:51.406422 containerd[1504]: time="2025-05-14T04:53:51.406373162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6wzdx,Uid:3116583e-a074-4349-bf30-de8399b59f4b,Namespace:kube-system,Attempt:0,}" May 14 04:53:51.415114 kubelet[2650]: E0514 04:53:51.415086 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:51.415669 containerd[1504]: time="2025-05-14T04:53:51.415625975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rxsqt,Uid:79e9619e-5dda-4811-bd8c-4c40226ec37e,Namespace:kube-system,Attempt:0,}" May 14 04:53:51.510656 containerd[1504]: time="2025-05-14T04:53:51.510033384Z" level=info msg="connecting to shim 0c69b548d569d8b20a93dadf7eb2ae6a65cc4156ad65e672e8ee77ea80f71503" address="unix:///run/containerd/s/cda42deb83ac735bac25fa433aaf798ceeefc9336264d54b1478feefa4e88fd7" namespace=k8s.io protocol=ttrpc version=3 May 14 04:53:51.510656 containerd[1504]: time="2025-05-14T04:53:51.510285358Z" level=info msg="connecting to shim 11be263c6bc648351f93187371a8f656c869dae9fdecc654f81570af766db621" address="unix:///run/containerd/s/671a15f5a153d30075439ffcdba5efffdf1072ebcc1e54191e09a608081857c7" namespace=k8s.io protocol=ttrpc version=3 May 14 04:53:51.538318 systemd[1]: Started cri-containerd-0c69b548d569d8b20a93dadf7eb2ae6a65cc4156ad65e672e8ee77ea80f71503.scope - libcontainer container 0c69b548d569d8b20a93dadf7eb2ae6a65cc4156ad65e672e8ee77ea80f71503. May 14 04:53:51.539587 kubelet[2650]: E0514 04:53:51.539344 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:51.539968 systemd[1]: Started cri-containerd-11be263c6bc648351f93187371a8f656c869dae9fdecc654f81570af766db621.scope - libcontainer container 11be263c6bc648351f93187371a8f656c869dae9fdecc654f81570af766db621. 
May 14 04:53:51.540421 containerd[1504]: time="2025-05-14T04:53:51.540386605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8sqmd,Uid:04f21fc0-710b-423a-9741-d0e509f46bc3,Namespace:kube-system,Attempt:0,}" May 14 04:53:51.567329 containerd[1504]: time="2025-05-14T04:53:51.567128661Z" level=info msg="connecting to shim d8749dc3a683b54a63dfbc0efbd6630b53e7f5a76a4438b64f4f6a149abb3d35" address="unix:///run/containerd/s/d594b7a4655f3539f65fa36560c6d164a8dc310b56c4d4197ea94861a81c4588" namespace=k8s.io protocol=ttrpc version=3 May 14 04:53:51.571664 containerd[1504]: time="2025-05-14T04:53:51.571462292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rxsqt,Uid:79e9619e-5dda-4811-bd8c-4c40226ec37e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c69b548d569d8b20a93dadf7eb2ae6a65cc4156ad65e672e8ee77ea80f71503\"" May 14 04:53:51.572613 kubelet[2650]: E0514 04:53:51.572586 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:51.573979 containerd[1504]: time="2025-05-14T04:53:51.573953860Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 14 04:53:51.581411 containerd[1504]: time="2025-05-14T04:53:51.581371774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6wzdx,Uid:3116583e-a074-4349-bf30-de8399b59f4b,Namespace:kube-system,Attempt:0,} returns sandbox id \"11be263c6bc648351f93187371a8f656c869dae9fdecc654f81570af766db621\"" May 14 04:53:51.581922 kubelet[2650]: E0514 04:53:51.581893 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:51.584661 containerd[1504]: time="2025-05-14T04:53:51.584590170Z" level=info msg="CreateContainer within sandbox \"11be263c6bc648351f93187371a8f656c869dae9fdecc654f81570af766db621\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 04:53:51.598299 containerd[1504]: time="2025-05-14T04:53:51.598264340Z" level=info msg="Container 504403132b9793dbc42fb5de998b2cccab3117142e42cda0fdf43b516c4af136: CDI devices from CRI Config.CDIDevices: []" May 14 04:53:51.599352 systemd[1]: Started cri-containerd-d8749dc3a683b54a63dfbc0efbd6630b53e7f5a76a4438b64f4f6a149abb3d35.scope - libcontainer container d8749dc3a683b54a63dfbc0efbd6630b53e7f5a76a4438b64f4f6a149abb3d35. 
May 14 04:53:51.607377 containerd[1504]: time="2025-05-14T04:53:51.607325651Z" level=info msg="CreateContainer within sandbox \"11be263c6bc648351f93187371a8f656c869dae9fdecc654f81570af766db621\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"504403132b9793dbc42fb5de998b2cccab3117142e42cda0fdf43b516c4af136\"" May 14 04:53:51.608256 containerd[1504]: time="2025-05-14T04:53:51.608036349Z" level=info msg="StartContainer for \"504403132b9793dbc42fb5de998b2cccab3117142e42cda0fdf43b516c4af136\"" May 14 04:53:51.609450 containerd[1504]: time="2025-05-14T04:53:51.609398195Z" level=info msg="connecting to shim 504403132b9793dbc42fb5de998b2cccab3117142e42cda0fdf43b516c4af136" address="unix:///run/containerd/s/671a15f5a153d30075439ffcdba5efffdf1072ebcc1e54191e09a608081857c7" protocol=ttrpc version=3 May 14 04:53:51.631367 systemd[1]: Started cri-containerd-504403132b9793dbc42fb5de998b2cccab3117142e42cda0fdf43b516c4af136.scope - libcontainer container 504403132b9793dbc42fb5de998b2cccab3117142e42cda0fdf43b516c4af136. May 14 04:53:51.639171 containerd[1504]: time="2025-05-14T04:53:51.639118920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8sqmd,Uid:04f21fc0-710b-423a-9741-d0e509f46bc3,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8749dc3a683b54a63dfbc0efbd6630b53e7f5a76a4438b64f4f6a149abb3d35\"" May 14 04:53:51.639799 kubelet[2650]: E0514 04:53:51.639770 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:51.668398 containerd[1504]: time="2025-05-14T04:53:51.668358187Z" level=info msg="StartContainer for \"504403132b9793dbc42fb5de998b2cccab3117142e42cda0fdf43b516c4af136\" returns successfully" May 14 04:53:51.828542 kubelet[2650]: E0514 04:53:51.828512 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:51.830423 kubelet[2650]: E0514 04:53:51.828627 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:51.845220 kubelet[2650]: I0514 04:53:51.844471 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6wzdx" podStartSLOduration=0.844453824 podStartE2EDuration="844.453824ms" podCreationTimestamp="2025-05-14 04:53:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 04:53:51.844215857 +0000 UTC m=+6.125475710" watchObservedRunningTime="2025-05-14 04:53:51.844453824 +0000 UTC m=+6.125713677" May 14 04:53:52.830190 kubelet[2650]: E0514 04:53:52.830147 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:52.982338 kubelet[2650]: E0514 04:53:52.982259 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:53.494274 update_engine[1483]: I20250514 04:53:53.494207 1483 update_attempter.cc:509] Updating boot flags... 
May 14 04:53:53.831274 kubelet[2650]: E0514 04:53:53.831063 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:54.986717 kubelet[2650]: E0514 04:53:54.986654 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:53:55.834267 kubelet[2650]: E0514 04:53:55.834179 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:54:00.542965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2319578204.mount: Deactivated successfully. May 14 04:54:01.823516 containerd[1504]: time="2025-05-14T04:54:01.823469925Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:54:01.824210 containerd[1504]: time="2025-05-14T04:54:01.824177478Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 14 04:54:01.825155 containerd[1504]: time="2025-05-14T04:54:01.825108103Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:54:01.827122 containerd[1504]: time="2025-05-14T04:54:01.827092315Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.252927225s" May 14 04:54:01.827122 containerd[1504]: time="2025-05-14T04:54:01.827124846Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 14 04:54:01.838193 containerd[1504]: time="2025-05-14T04:54:01.838151307Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 14 04:54:01.846278 containerd[1504]: time="2025-05-14T04:54:01.846247807Z" level=info msg="CreateContainer within sandbox \"0c69b548d569d8b20a93dadf7eb2ae6a65cc4156ad65e672e8ee77ea80f71503\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 04:54:01.853915 containerd[1504]: time="2025-05-14T04:54:01.853875072Z" level=info msg="Container 33774e25970b702175322cddfce41564f909acffec87d7849e0e1763b03a6eca: CDI devices from CRI Config.CDIDevices: []" May 14 04:54:01.854355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1944079591.mount: Deactivated successfully. 
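Note: the stop-pulling and Pulled events above report both the byte count and the wall time for the cilium image pull, which pins down the effective throughput. A quick back-of-envelope in Python using only those two logged figures:

# Figures taken directly from the two containerd entries above.
bytes_read = 157_646_710        # "bytes read" when pulling stopped
pull_seconds = 10.252927225     # duration reported by PullImage
print(f"{bytes_read / pull_seconds / (1024 * 1024):.1f} MiB/s")  # ~14.7 MiB/s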
May 14 04:54:01.859015 containerd[1504]: time="2025-05-14T04:54:01.858982990Z" level=info msg="CreateContainer within sandbox \"0c69b548d569d8b20a93dadf7eb2ae6a65cc4156ad65e672e8ee77ea80f71503\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"33774e25970b702175322cddfce41564f909acffec87d7849e0e1763b03a6eca\"" May 14 04:54:01.862603 containerd[1504]: time="2025-05-14T04:54:01.862576690Z" level=info msg="StartContainer for \"33774e25970b702175322cddfce41564f909acffec87d7849e0e1763b03a6eca\"" May 14 04:54:01.864181 containerd[1504]: time="2025-05-14T04:54:01.864003199Z" level=info msg="connecting to shim 33774e25970b702175322cddfce41564f909acffec87d7849e0e1763b03a6eca" address="unix:///run/containerd/s/cda42deb83ac735bac25fa433aaf798ceeefc9336264d54b1478feefa4e88fd7" protocol=ttrpc version=3 May 14 04:54:01.909399 systemd[1]: Started cri-containerd-33774e25970b702175322cddfce41564f909acffec87d7849e0e1763b03a6eca.scope - libcontainer container 33774e25970b702175322cddfce41564f909acffec87d7849e0e1763b03a6eca. May 14 04:54:01.943443 containerd[1504]: time="2025-05-14T04:54:01.943408559Z" level=info msg="StartContainer for \"33774e25970b702175322cddfce41564f909acffec87d7849e0e1763b03a6eca\" returns successfully" May 14 04:54:01.980650 systemd[1]: cri-containerd-33774e25970b702175322cddfce41564f909acffec87d7849e0e1763b03a6eca.scope: Deactivated successfully. May 14 04:54:02.014581 containerd[1504]: time="2025-05-14T04:54:02.014529691Z" level=info msg="received exit event container_id:\"33774e25970b702175322cddfce41564f909acffec87d7849e0e1763b03a6eca\" id:\"33774e25970b702175322cddfce41564f909acffec87d7849e0e1763b03a6eca\" pid:3089 exited_at:{seconds:1747198442 nanos:2028321}" May 14 04:54:02.020173 containerd[1504]: time="2025-05-14T04:54:02.020060629Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33774e25970b702175322cddfce41564f909acffec87d7849e0e1763b03a6eca\" id:\"33774e25970b702175322cddfce41564f909acffec87d7849e0e1763b03a6eca\" pid:3089 exited_at:{seconds:1747198442 nanos:2028321}" May 14 04:54:02.054620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33774e25970b702175322cddfce41564f909acffec87d7849e0e1763b03a6eca-rootfs.mount: Deactivated successfully. 
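Note: the exited_at field in the TaskExit events above is a protobuf-style epoch timestamp. A small Python sketch converting the logged seconds/nanos pair back to the wall-clock time shown in the surrounding entries:

from datetime import datetime, timezone

# seconds/nanos copied from the exited_at field of the TaskExit event.
seconds, nanos = 1747198442, 2028321
t = datetime.fromtimestamp(seconds, tz=timezone.utc)
print(t.isoformat(), f"+{nanos}ns")  # 2025-05-14T04:54:02+00:00 +2028321ns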
May 14 04:54:02.848873 kubelet[2650]: E0514 04:54:02.848724 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:54:02.852744 containerd[1504]: time="2025-05-14T04:54:02.852654428Z" level=info msg="CreateContainer within sandbox \"0c69b548d569d8b20a93dadf7eb2ae6a65cc4156ad65e672e8ee77ea80f71503\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 04:54:02.869810 containerd[1504]: time="2025-05-14T04:54:02.868497888Z" level=info msg="Container b0a1a6c2abdcda9895144fb2096078c0080cc8c9907e1951b377ea798bd581a2: CDI devices from CRI Config.CDIDevices: []" May 14 04:54:02.876585 containerd[1504]: time="2025-05-14T04:54:02.876465313Z" level=info msg="CreateContainer within sandbox \"0c69b548d569d8b20a93dadf7eb2ae6a65cc4156ad65e672e8ee77ea80f71503\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b0a1a6c2abdcda9895144fb2096078c0080cc8c9907e1951b377ea798bd581a2\"" May 14 04:54:02.877182 containerd[1504]: time="2025-05-14T04:54:02.876934781Z" level=info msg="StartContainer for \"b0a1a6c2abdcda9895144fb2096078c0080cc8c9907e1951b377ea798bd581a2\"" May 14 04:54:02.879005 containerd[1504]: time="2025-05-14T04:54:02.877797492Z" level=info msg="connecting to shim b0a1a6c2abdcda9895144fb2096078c0080cc8c9907e1951b377ea798bd581a2" address="unix:///run/containerd/s/cda42deb83ac735bac25fa433aaf798ceeefc9336264d54b1478feefa4e88fd7" protocol=ttrpc version=3 May 14 04:54:02.907389 systemd[1]: Started cri-containerd-b0a1a6c2abdcda9895144fb2096078c0080cc8c9907e1951b377ea798bd581a2.scope - libcontainer container b0a1a6c2abdcda9895144fb2096078c0080cc8c9907e1951b377ea798bd581a2. May 14 04:54:02.932443 containerd[1504]: time="2025-05-14T04:54:02.932400016Z" level=info msg="StartContainer for \"b0a1a6c2abdcda9895144fb2096078c0080cc8c9907e1951b377ea798bd581a2\" returns successfully" May 14 04:54:02.952181 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 04:54:02.952690 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 04:54:02.952889 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 14 04:54:02.954211 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 04:54:02.955910 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 04:54:02.956389 systemd[1]: cri-containerd-b0a1a6c2abdcda9895144fb2096078c0080cc8c9907e1951b377ea798bd581a2.scope: Deactivated successfully. May 14 04:54:02.961871 containerd[1504]: time="2025-05-14T04:54:02.961817863Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0a1a6c2abdcda9895144fb2096078c0080cc8c9907e1951b377ea798bd581a2\" id:\"b0a1a6c2abdcda9895144fb2096078c0080cc8c9907e1951b377ea798bd581a2\" pid:3135 exited_at:{seconds:1747198442 nanos:961284655}" May 14 04:54:02.969969 containerd[1504]: time="2025-05-14T04:54:02.969917849Z" level=info msg="received exit event container_id:\"b0a1a6c2abdcda9895144fb2096078c0080cc8c9907e1951b377ea798bd581a2\" id:\"b0a1a6c2abdcda9895144fb2096078c0080cc8c9907e1951b377ea798bd581a2\" pid:3135 exited_at:{seconds:1747198442 nanos:961284655}" May 14 04:54:02.987720 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 14 04:54:03.859239 kubelet[2650]: E0514 04:54:03.859113 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:54:03.861349 containerd[1504]: time="2025-05-14T04:54:03.861300350Z" level=info msg="CreateContainer within sandbox \"0c69b548d569d8b20a93dadf7eb2ae6a65cc4156ad65e672e8ee77ea80f71503\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 04:54:03.868407 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0a1a6c2abdcda9895144fb2096078c0080cc8c9907e1951b377ea798bd581a2-rootfs.mount: Deactivated successfully. May 14 04:54:03.878855 containerd[1504]: time="2025-05-14T04:54:03.878818665Z" level=info msg="Container 0a58e75c16065c4460c28c0620471d1d69205a78581cb1cb76092b195a96533a: CDI devices from CRI Config.CDIDevices: []" May 14 04:54:03.899160 containerd[1504]: time="2025-05-14T04:54:03.899110935Z" level=info msg="CreateContainer within sandbox \"0c69b548d569d8b20a93dadf7eb2ae6a65cc4156ad65e672e8ee77ea80f71503\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0a58e75c16065c4460c28c0620471d1d69205a78581cb1cb76092b195a96533a\"" May 14 04:54:03.899732 containerd[1504]: time="2025-05-14T04:54:03.899706634Z" level=info msg="StartContainer for \"0a58e75c16065c4460c28c0620471d1d69205a78581cb1cb76092b195a96533a\"" May 14 04:54:03.901271 containerd[1504]: time="2025-05-14T04:54:03.901200004Z" level=info msg="connecting to shim 0a58e75c16065c4460c28c0620471d1d69205a78581cb1cb76092b195a96533a" address="unix:///run/containerd/s/cda42deb83ac735bac25fa433aaf798ceeefc9336264d54b1478feefa4e88fd7" protocol=ttrpc version=3 May 14 04:54:03.922410 systemd[1]: Started cri-containerd-0a58e75c16065c4460c28c0620471d1d69205a78581cb1cb76092b195a96533a.scope - libcontainer container 0a58e75c16065c4460c28c0620471d1d69205a78581cb1cb76092b195a96533a. May 14 04:54:03.982365 containerd[1504]: time="2025-05-14T04:54:03.982335435Z" level=info msg="StartContainer for \"0a58e75c16065c4460c28c0620471d1d69205a78581cb1cb76092b195a96533a\" returns successfully" May 14 04:54:03.996078 systemd[1]: cri-containerd-0a58e75c16065c4460c28c0620471d1d69205a78581cb1cb76092b195a96533a.scope: Deactivated successfully. 
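Note: mount-bpf-fs is the Cilium init step that mounts a BPF filesystem; /sys/fs/bpf is the conventional mountpoint, assumed here rather than stated by the log. A sketch that checks /proc/mounts for the result:

# List any bpf filesystem mounts; expect the conventional /sys/fs/bpf
# once mount-bpf-fs has run (Linux-only; /proc/mounts fields are
# device, mountpoint, fstype, options, dump, pass).
with open("/proc/mounts") as f:
    print([line.split()[1] for line in f if line.split()[2] == "bpf"])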
May 14 04:54:03.998198 containerd[1504]: time="2025-05-14T04:54:03.998144515Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a58e75c16065c4460c28c0620471d1d69205a78581cb1cb76092b195a96533a\" id:\"0a58e75c16065c4460c28c0620471d1d69205a78581cb1cb76092b195a96533a\" pid:3185 exited_at:{seconds:1747198443 nanos:997816417}" May 14 04:54:04.008635 containerd[1504]: time="2025-05-14T04:54:04.008577365Z" level=info msg="received exit event container_id:\"0a58e75c16065c4460c28c0620471d1d69205a78581cb1cb76092b195a96533a\" id:\"0a58e75c16065c4460c28c0620471d1d69205a78581cb1cb76092b195a96533a\" pid:3185 exited_at:{seconds:1747198443 nanos:997816417}" May 14 04:54:04.863773 kubelet[2650]: E0514 04:54:04.863702 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:54:04.865952 containerd[1504]: time="2025-05-14T04:54:04.865906806Z" level=info msg="CreateContainer within sandbox \"0c69b548d569d8b20a93dadf7eb2ae6a65cc4156ad65e672e8ee77ea80f71503\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 04:54:04.868374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a58e75c16065c4460c28c0620471d1d69205a78581cb1cb76092b195a96533a-rootfs.mount: Deactivated successfully. May 14 04:54:04.878348 containerd[1504]: time="2025-05-14T04:54:04.878298424Z" level=info msg="Container 223d4462e648531ea06d08bc7944704cd1bfa9829f502bc3e3d7051ead0c8356: CDI devices from CRI Config.CDIDevices: []" May 14 04:54:04.885121 containerd[1504]: time="2025-05-14T04:54:04.885087184Z" level=info msg="CreateContainer within sandbox \"0c69b548d569d8b20a93dadf7eb2ae6a65cc4156ad65e672e8ee77ea80f71503\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"223d4462e648531ea06d08bc7944704cd1bfa9829f502bc3e3d7051ead0c8356\"" May 14 04:54:04.885779 containerd[1504]: time="2025-05-14T04:54:04.885749455Z" level=info msg="StartContainer for \"223d4462e648531ea06d08bc7944704cd1bfa9829f502bc3e3d7051ead0c8356\"" May 14 04:54:04.886686 containerd[1504]: time="2025-05-14T04:54:04.886637551Z" level=info msg="connecting to shim 223d4462e648531ea06d08bc7944704cd1bfa9829f502bc3e3d7051ead0c8356" address="unix:///run/containerd/s/cda42deb83ac735bac25fa433aaf798ceeefc9336264d54b1478feefa4e88fd7" protocol=ttrpc version=3 May 14 04:54:04.910320 systemd[1]: Started cri-containerd-223d4462e648531ea06d08bc7944704cd1bfa9829f502bc3e3d7051ead0c8356.scope - libcontainer container 223d4462e648531ea06d08bc7944704cd1bfa9829f502bc3e3d7051ead0c8356. May 14 04:54:04.932369 systemd[1]: cri-containerd-223d4462e648531ea06d08bc7944704cd1bfa9829f502bc3e3d7051ead0c8356.scope: Deactivated successfully. 
May 14 04:54:04.935635 containerd[1504]: time="2025-05-14T04:54:04.935598247Z" level=info msg="StartContainer for \"223d4462e648531ea06d08bc7944704cd1bfa9829f502bc3e3d7051ead0c8356\" returns successfully" May 14 04:54:04.942199 containerd[1504]: time="2025-05-14T04:54:04.942144777Z" level=info msg="received exit event container_id:\"223d4462e648531ea06d08bc7944704cd1bfa9829f502bc3e3d7051ead0c8356\" id:\"223d4462e648531ea06d08bc7944704cd1bfa9829f502bc3e3d7051ead0c8356\" pid:3225 exited_at:{seconds:1747198444 nanos:941968606}" May 14 04:54:04.942333 containerd[1504]: time="2025-05-14T04:54:04.942298221Z" level=info msg="TaskExit event in podsandbox handler container_id:\"223d4462e648531ea06d08bc7944704cd1bfa9829f502bc3e3d7051ead0c8356\" id:\"223d4462e648531ea06d08bc7944704cd1bfa9829f502bc3e3d7051ead0c8356\" pid:3225 exited_at:{seconds:1747198444 nanos:941968606}" May 14 04:54:04.957022 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-223d4462e648531ea06d08bc7944704cd1bfa9829f502bc3e3d7051ead0c8356-rootfs.mount: Deactivated successfully. May 14 04:54:05.870919 kubelet[2650]: E0514 04:54:05.870576 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:54:05.881188 containerd[1504]: time="2025-05-14T04:54:05.880447852Z" level=info msg="CreateContainer within sandbox \"0c69b548d569d8b20a93dadf7eb2ae6a65cc4156ad65e672e8ee77ea80f71503\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 04:54:05.895955 containerd[1504]: time="2025-05-14T04:54:05.895900534Z" level=info msg="Container 8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1: CDI devices from CRI Config.CDIDevices: []" May 14 04:54:05.898836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2655905748.mount: Deactivated successfully. May 14 04:54:05.903051 containerd[1504]: time="2025-05-14T04:54:05.903017706Z" level=info msg="CreateContainer within sandbox \"0c69b548d569d8b20a93dadf7eb2ae6a65cc4156ad65e672e8ee77ea80f71503\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1\"" May 14 04:54:05.903569 containerd[1504]: time="2025-05-14T04:54:05.903545332Z" level=info msg="StartContainer for \"8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1\"" May 14 04:54:05.904361 containerd[1504]: time="2025-05-14T04:54:05.904332470Z" level=info msg="connecting to shim 8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1" address="unix:///run/containerd/s/cda42deb83ac735bac25fa433aaf798ceeefc9336264d54b1478feefa4e88fd7" protocol=ttrpc version=3 May 14 04:54:05.926303 systemd[1]: Started cri-containerd-8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1.scope - libcontainer container 8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1. 
May 14 04:54:05.953605 containerd[1504]: time="2025-05-14T04:54:05.953558790Z" level=info msg="StartContainer for \"8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1\" returns successfully" May 14 04:54:06.072001 containerd[1504]: time="2025-05-14T04:54:06.070858978Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1\" id:\"a27033dcce510416612b4cac3989e261f7f831a2bcb5f2fe02f66a1d15d262ba\" pid:3296 exited_at:{seconds:1747198446 nanos:69947376}" May 14 04:54:06.099547 kubelet[2650]: I0514 04:54:06.099036 2650 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 14 04:54:06.160962 systemd[1]: Created slice kubepods-burstable-pod0591752b_9204_4392_bea4_2bb6fcd82530.slice - libcontainer container kubepods-burstable-pod0591752b_9204_4392_bea4_2bb6fcd82530.slice. May 14 04:54:06.166793 systemd[1]: Created slice kubepods-burstable-pod6acd8318_f446_4653_a6c9_8d9eb1de3687.slice - libcontainer container kubepods-burstable-pod6acd8318_f446_4653_a6c9_8d9eb1de3687.slice. May 14 04:54:06.233303 kubelet[2650]: I0514 04:54:06.233255 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0591752b-9204-4392-bea4-2bb6fcd82530-config-volume\") pod \"coredns-668d6bf9bc-slj7k\" (UID: \"0591752b-9204-4392-bea4-2bb6fcd82530\") " pod="kube-system/coredns-668d6bf9bc-slj7k" May 14 04:54:06.233303 kubelet[2650]: I0514 04:54:06.233296 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xbcj\" (UniqueName: \"kubernetes.io/projected/0591752b-9204-4392-bea4-2bb6fcd82530-kube-api-access-9xbcj\") pod \"coredns-668d6bf9bc-slj7k\" (UID: \"0591752b-9204-4392-bea4-2bb6fcd82530\") " pod="kube-system/coredns-668d6bf9bc-slj7k" May 14 04:54:06.233440 kubelet[2650]: I0514 04:54:06.233319 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb7vh\" (UniqueName: \"kubernetes.io/projected/6acd8318-f446-4653-a6c9-8d9eb1de3687-kube-api-access-sb7vh\") pod \"coredns-668d6bf9bc-slkzn\" (UID: \"6acd8318-f446-4653-a6c9-8d9eb1de3687\") " pod="kube-system/coredns-668d6bf9bc-slkzn" May 14 04:54:06.233440 kubelet[2650]: I0514 04:54:06.233398 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6acd8318-f446-4653-a6c9-8d9eb1de3687-config-volume\") pod \"coredns-668d6bf9bc-slkzn\" (UID: \"6acd8318-f446-4653-a6c9-8d9eb1de3687\") " pod="kube-system/coredns-668d6bf9bc-slkzn" May 14 04:54:06.465590 kubelet[2650]: E0514 04:54:06.465157 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:54:06.466085 containerd[1504]: time="2025-05-14T04:54:06.466029487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-slj7k,Uid:0591752b-9204-4392-bea4-2bb6fcd82530,Namespace:kube-system,Attempt:0,}" May 14 04:54:06.472013 kubelet[2650]: E0514 04:54:06.471986 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:54:06.475065 containerd[1504]: time="2025-05-14T04:54:06.473533524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-slkzn,Uid:6acd8318-f446-4653-a6c9-8d9eb1de3687,Namespace:kube-system,Attempt:0,}"
May 14 04:54:06.808812 containerd[1504]: time="2025-05-14T04:54:06.808389457Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:54:06.809259 containerd[1504]: time="2025-05-14T04:54:06.809233562Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 14 04:54:06.810061 containerd[1504]: time="2025-05-14T04:54:06.809980521Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:54:06.811408 containerd[1504]: time="2025-05-14T04:54:06.811237255Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.973039213s" May 14 04:54:06.811408 containerd[1504]: time="2025-05-14T04:54:06.811270064Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 14 04:54:06.814516 containerd[1504]: time="2025-05-14T04:54:06.814486120Z" level=info msg="CreateContainer within sandbox \"d8749dc3a683b54a63dfbc0efbd6630b53e7f5a76a4438b64f4f6a149abb3d35\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 14 04:54:06.819935 containerd[1504]: time="2025-05-14T04:54:06.819886918Z" level=info msg="Container f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9: CDI devices from CRI Config.CDIDevices: []" May 14 04:54:06.824889 containerd[1504]: time="2025-05-14T04:54:06.824846038Z" level=info msg="CreateContainer within sandbox \"d8749dc3a683b54a63dfbc0efbd6630b53e7f5a76a4438b64f4f6a149abb3d35\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9\"" May 14 04:54:06.825413 containerd[1504]: time="2025-05-14T04:54:06.825383501Z" level=info msg="StartContainer for \"f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9\"" May 14 04:54:06.826143 containerd[1504]: time="2025-05-14T04:54:06.826077966Z" level=info msg="connecting to shim f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9" address="unix:///run/containerd/s/d594b7a4655f3539f65fa36560c6d164a8dc310b56c4d4197ea94861a81c4588" protocol=ttrpc version=3 May 14 04:54:06.856311 systemd[1]: Started cri-containerd-f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9.scope - libcontainer container f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9.
May 14 04:54:06.880331 kubelet[2650]: E0514 04:54:06.879368 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:54:06.886388 containerd[1504]: time="2025-05-14T04:54:06.885992354Z" level=info msg="StartContainer for \"f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9\" returns successfully" May 14 04:54:07.882394 kubelet[2650]: E0514 04:54:07.882263 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:54:07.883959 kubelet[2650]: E0514 04:54:07.883706 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:54:07.892898 kubelet[2650]: I0514 04:54:07.892549 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rxsqt" podStartSLOduration=6.631386774 podStartE2EDuration="16.892535051s" podCreationTimestamp="2025-05-14 04:53:51 +0000 UTC" firstStartedPulling="2025-05-14 04:53:51.573607075 +0000 UTC m=+5.854866928" lastFinishedPulling="2025-05-14 04:54:01.834755312 +0000 UTC m=+16.116015205" observedRunningTime="2025-05-14 04:54:06.898963167 +0000 UTC m=+21.180223020" watchObservedRunningTime="2025-05-14 04:54:07.892535051 +0000 UTC m=+22.173794904" May 14 04:54:07.892898 kubelet[2650]: I0514 04:54:07.892714 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-8sqmd" podStartSLOduration=1.72152012 podStartE2EDuration="16.892710216s" podCreationTimestamp="2025-05-14 04:53:51 +0000 UTC" firstStartedPulling="2025-05-14 04:53:51.640888223 +0000 UTC m=+5.922148076" lastFinishedPulling="2025-05-14 04:54:06.812078319 +0000 UTC m=+21.093338172" observedRunningTime="2025-05-14 04:54:07.891331143 +0000 UTC m=+22.172590956" watchObservedRunningTime="2025-05-14 04:54:07.892710216 +0000 UTC m=+22.173970069" May 14 04:54:08.883924 kubelet[2650]: E0514 04:54:08.883694 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:54:08.883924 kubelet[2650]: E0514 04:54:08.883854 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:54:11.019998 systemd-networkd[1409]: cilium_host: Link UP May 14 04:54:11.020107 systemd-networkd[1409]: cilium_net: Link UP May 14 04:54:11.020261 systemd-networkd[1409]: cilium_net: Gained carrier May 14 04:54:11.020376 systemd-networkd[1409]: cilium_host: Gained carrier May 14 04:54:11.107019 systemd-networkd[1409]: cilium_vxlan: Link UP May 14 04:54:11.107026 systemd-networkd[1409]: cilium_vxlan: Gained carrier May 14 04:54:11.401197 kernel: NET: Registered PF_ALG protocol family May 14 04:54:11.494578 systemd-networkd[1409]: cilium_net: Gained IPv6LL May 14 04:54:11.583394 systemd-networkd[1409]: cilium_host: Gained IPv6LL May 14 04:54:11.979786 systemd-networkd[1409]: lxc_health: Link UP May 14 04:54:11.980114 systemd-networkd[1409]: lxc_health: Gained carrier May 14 04:54:12.120326 systemd-networkd[1409]: lxc08a32ae0874d: Link UP May 14 04:54:12.130210 kernel: eth0: renamed from tmp3ed67
May 14 04:54:12.135240 kernel: eth0: renamed from tmp85b03 May 14 04:54:12.135821 systemd-networkd[1409]: tmp85b03: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 04:54:12.135884 systemd-networkd[1409]: tmp85b03: Cannot enable IPv6, ignoring: No such file or directory May 14 04:54:12.135896 systemd-networkd[1409]: tmp85b03: Cannot configure IPv6 privacy extensions for interface, ignoring: No such file or directory May 14 04:54:12.135905 systemd-networkd[1409]: tmp85b03: Cannot disable kernel IPv6 accept_ra for interface, ignoring: No such file or directory May 14 04:54:12.135916 systemd-networkd[1409]: tmp85b03: Cannot set IPv6 proxy NDP, ignoring: No such file or directory May 14 04:54:12.135927 systemd-networkd[1409]: tmp85b03: Cannot enable promote_secondaries for interface, ignoring: No such file or directory May 14 04:54:12.136825 systemd-networkd[1409]: lxc142d86248fb8: Link UP May 14 04:54:12.137064 systemd-networkd[1409]: lxc08a32ae0874d: Gained carrier May 14 04:54:12.137920 systemd-networkd[1409]: lxc142d86248fb8: Gained carrier May 14 04:54:12.350388 systemd-networkd[1409]: cilium_vxlan: Gained IPv6LL May 14 04:54:12.708645 systemd[1]: Started sshd@9-10.0.0.69:22-10.0.0.1:44046.service - OpenSSH per-connection server daemon (10.0.0.1:44046). May 14 04:54:12.767444 sshd[3819]: Accepted publickey for core from 10.0.0.1 port 44046 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:54:12.768777 sshd-session[3819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:54:12.773610 systemd-logind[1481]: New session 10 of user core. May 14 04:54:12.783325 systemd[1]: Started session-10.scope - Session 10 of User core. May 14 04:54:12.926310 sshd[3821]: Connection closed by 10.0.0.1 port 44046 May 14 04:54:12.926793 sshd-session[3819]: pam_unix(sshd:session): session closed for user core May 14 04:54:12.931361 systemd-logind[1481]: Session 10 logged out. Waiting for processes to exit. May 14 04:54:12.931652 systemd[1]: sshd@9-10.0.0.69:22-10.0.0.1:44046.service: Deactivated successfully. May 14 04:54:12.933512 systemd[1]: session-10.scope: Deactivated successfully. May 14 04:54:12.935508 systemd-logind[1481]: Removed session 10.
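Note: the podStartSLOduration figures in the 04:54:07 kubelet entries are derivable from the same records: SLO duration is end-to-end startup minus image-pull time. Reproducing the cilium-rxsqt number from the monotonic (m=+...) offsets logged above:

# Monotonic (m=+...) offsets and E2E duration from the kubelet entries.
first_started_pulling = 5.854866928   # firstStartedPulling m=+ offset
last_finished_pulling = 16.116015205  # lastFinishedPulling m=+ offset
pod_start_e2e = 16.892535051          # podStartE2EDuration in seconds

slo = pod_start_e2e - (last_finished_pulling - first_started_pulling)
print(f"podStartSLOduration={slo:.9f}")  # 6.631386774, matching the log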
May 14 04:54:13.118615 systemd-networkd[1409]: lxc_health: Gained IPv6LL May 14 04:54:13.420722 kubelet[2650]: E0514 04:54:13.420698 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:54:13.439992 systemd-networkd[1409]: lxc142d86248fb8: Gained IPv6LL May 14 04:54:13.694518 systemd-networkd[1409]: lxc08a32ae0874d: Gained IPv6LL May 14 04:54:14.255059 kubelet[2650]: I0514 04:54:14.254544 2650 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 04:54:14.255059 kubelet[2650]: E0514 04:54:14.254979 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:54:14.897829 kubelet[2650]: E0514 04:54:14.897772 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:54:15.674662 containerd[1504]: time="2025-05-14T04:54:15.674615452Z" level=info msg="connecting to shim 3ed67bab5b4105a31d8e0ae7d43d63986b735436ad9bbb4c27dd410c7aa431e2" address="unix:///run/containerd/s/250e8a39a6b9ad97f77c87fd1e900a6d5eebaf3a401e46ee1ece0d96a1260606" namespace=k8s.io protocol=ttrpc version=3 May 14 04:54:15.676738 containerd[1504]: time="2025-05-14T04:54:15.676710419Z" level=info msg="connecting to shim 85b037ab5d2d505d962dc3ae6ef158c307eee38d52ea8a5d8f1d8b6b776662e5" address="unix:///run/containerd/s/8409a9b2f17a1f951761925db8ed2559c36c9d6c9c4ca7dd96d620d3b885fb2e" namespace=k8s.io protocol=ttrpc version=3 May 14 04:54:15.706396 systemd[1]: Started cri-containerd-3ed67bab5b4105a31d8e0ae7d43d63986b735436ad9bbb4c27dd410c7aa431e2.scope - libcontainer container 3ed67bab5b4105a31d8e0ae7d43d63986b735436ad9bbb4c27dd410c7aa431e2. May 14 04:54:15.707749 systemd[1]: Started cri-containerd-85b037ab5d2d505d962dc3ae6ef158c307eee38d52ea8a5d8f1d8b6b776662e5.scope - libcontainer container 85b037ab5d2d505d962dc3ae6ef158c307eee38d52ea8a5d8f1d8b6b776662e5. 
May 14 04:54:15.719378 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 04:54:15.721371 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 04:54:15.739318 containerd[1504]: time="2025-05-14T04:54:15.739227556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-slkzn,Uid:6acd8318-f446-4653-a6c9-8d9eb1de3687,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ed67bab5b4105a31d8e0ae7d43d63986b735436ad9bbb4c27dd410c7aa431e2\"" May 14 04:54:15.739910 kubelet[2650]: E0514 04:54:15.739887 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:54:15.750463 containerd[1504]: time="2025-05-14T04:54:15.750410967Z" level=info msg="CreateContainer within sandbox \"3ed67bab5b4105a31d8e0ae7d43d63986b735436ad9bbb4c27dd410c7aa431e2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 04:54:15.750777 containerd[1504]: time="2025-05-14T04:54:15.750754594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-slj7k,Uid:0591752b-9204-4392-bea4-2bb6fcd82530,Namespace:kube-system,Attempt:0,} returns sandbox id \"85b037ab5d2d505d962dc3ae6ef158c307eee38d52ea8a5d8f1d8b6b776662e5\"" May 14 04:54:15.751392 kubelet[2650]: E0514 04:54:15.751365 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:54:15.754696 containerd[1504]: time="2025-05-14T04:54:15.754666513Z" level=info msg="CreateContainer within sandbox \"85b037ab5d2d505d962dc3ae6ef158c307eee38d52ea8a5d8f1d8b6b776662e5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 04:54:15.761641 containerd[1504]: time="2025-05-14T04:54:15.761599739Z" level=info msg="Container 3b76d80bc77db533e3ea882b7838aaa55533cf89ef27ff0125e32caaae74f7ed: CDI devices from CRI Config.CDIDevices: []" May 14 04:54:15.764585 containerd[1504]: time="2025-05-14T04:54:15.764554633Z" level=info msg="Container abf969baf45f45f23bfb5d9e17d286f2582036e1dd227505dc183de71d5ba4cf: CDI devices from CRI Config.CDIDevices: []" May 14 04:54:15.768256 containerd[1504]: time="2025-05-14T04:54:15.768224345Z" level=info msg="CreateContainer within sandbox \"3ed67bab5b4105a31d8e0ae7d43d63986b735436ad9bbb4c27dd410c7aa431e2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3b76d80bc77db533e3ea882b7838aaa55533cf89ef27ff0125e32caaae74f7ed\"" May 14 04:54:15.768725 containerd[1504]: time="2025-05-14T04:54:15.768699357Z" level=info msg="StartContainer for \"3b76d80bc77db533e3ea882b7838aaa55533cf89ef27ff0125e32caaae74f7ed\"" May 14 04:54:15.771788 containerd[1504]: time="2025-05-14T04:54:15.771689018Z" level=info msg="connecting to shim 3b76d80bc77db533e3ea882b7838aaa55533cf89ef27ff0125e32caaae74f7ed" address="unix:///run/containerd/s/250e8a39a6b9ad97f77c87fd1e900a6d5eebaf3a401e46ee1ece0d96a1260606" protocol=ttrpc version=3 May 14 04:54:15.773525 containerd[1504]: time="2025-05-14T04:54:15.773483326Z" level=info msg="CreateContainer within sandbox \"85b037ab5d2d505d962dc3ae6ef158c307eee38d52ea8a5d8f1d8b6b776662e5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"abf969baf45f45f23bfb5d9e17d286f2582036e1dd227505dc183de71d5ba4cf\""
May 14 04:54:15.773941 containerd[1504]: time="2025-05-14T04:54:15.773912810Z" level=info msg="StartContainer for \"abf969baf45f45f23bfb5d9e17d286f2582036e1dd227505dc183de71d5ba4cf\"" May 14 04:54:15.776925 containerd[1504]: time="2025-05-14T04:54:15.776882306Z" level=info msg="connecting to shim abf969baf45f45f23bfb5d9e17d286f2582036e1dd227505dc183de71d5ba4cf" address="unix:///run/containerd/s/8409a9b2f17a1f951761925db8ed2559c36c9d6c9c4ca7dd96d620d3b885fb2e" protocol=ttrpc version=3 May 14 04:54:15.798349 systemd[1]: Started cri-containerd-3b76d80bc77db533e3ea882b7838aaa55533cf89ef27ff0125e32caaae74f7ed.scope - libcontainer container 3b76d80bc77db533e3ea882b7838aaa55533cf89ef27ff0125e32caaae74f7ed. May 14 04:54:15.802106 systemd[1]: Started cri-containerd-abf969baf45f45f23bfb5d9e17d286f2582036e1dd227505dc183de71d5ba4cf.scope - libcontainer container abf969baf45f45f23bfb5d9e17d286f2582036e1dd227505dc183de71d5ba4cf. May 14 04:54:15.836079 containerd[1504]: time="2025-05-14T04:54:15.835989541Z" level=info msg="StartContainer for \"3b76d80bc77db533e3ea882b7838aaa55533cf89ef27ff0125e32caaae74f7ed\" returns successfully" May 14 04:54:15.860539 containerd[1504]: time="2025-05-14T04:54:15.860499260Z" level=info msg="StartContainer for \"abf969baf45f45f23bfb5d9e17d286f2582036e1dd227505dc183de71d5ba4cf\" returns successfully" May 14 04:54:15.906012 kubelet[2650]: E0514 04:54:15.905781 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:54:15.908837 kubelet[2650]: E0514 04:54:15.908760 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:54:15.924293 kubelet[2650]: I0514 04:54:15.924239 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-slj7k" podStartSLOduration=24.924223471 podStartE2EDuration="24.924223471s" podCreationTimestamp="2025-05-14 04:53:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 04:54:15.922551346 +0000 UTC m=+30.203811239" watchObservedRunningTime="2025-05-14 04:54:15.924223471 +0000 UTC m=+30.205483364" May 14 04:54:15.938836 kubelet[2650]: I0514 04:54:15.937910 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-slkzn" podStartSLOduration=24.937895685 podStartE2EDuration="24.937895685s" podCreationTimestamp="2025-05-14 04:53:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 04:54:15.936869966 +0000 UTC m=+30.218129859" watchObservedRunningTime="2025-05-14 04:54:15.937895685 +0000 UTC m=+30.219155538" May 14 04:54:16.661900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1197804025.mount: Deactivated successfully.
May 14 04:54:16.911615 kubelet[2650]: E0514 04:54:16.911573 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:54:16.912560 kubelet[2650]: E0514 04:54:16.912470 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:54:17.913331 kubelet[2650]: E0514 04:54:17.913245 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:54:17.913331 kubelet[2650]: E0514 04:54:17.913305 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:54:17.939441 systemd[1]: Started sshd@10-10.0.0.69:22-10.0.0.1:44050.service - OpenSSH per-connection server daemon (10.0.0.1:44050). May 14 04:54:17.996655 sshd[4024]: Accepted publickey for core from 10.0.0.1 port 44050 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:54:17.998090 sshd-session[4024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:54:18.003074 systemd-logind[1481]: New session 11 of user core. May 14 04:54:18.013355 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 04:54:18.147539 sshd[4026]: Connection closed by 10.0.0.1 port 44050 May 14 04:54:18.148212 sshd-session[4024]: pam_unix(sshd:session): session closed for user core May 14 04:54:18.152109 systemd-logind[1481]: Session 11 logged out. Waiting for processes to exit. May 14 04:54:18.152416 systemd[1]: sshd@10-10.0.0.69:22-10.0.0.1:44050.service: Deactivated successfully. May 14 04:54:18.155521 systemd[1]: session-11.scope: Deactivated successfully. May 14 04:54:18.157010 systemd-logind[1481]: Removed session 11. May 14 04:54:23.168108 systemd[1]: Started sshd@11-10.0.0.69:22-10.0.0.1:55026.service - OpenSSH per-connection server daemon (10.0.0.1:55026). May 14 04:54:23.237684 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 55026 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:54:23.238995 sshd-session[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:54:23.242868 systemd-logind[1481]: New session 12 of user core. May 14 04:54:23.256328 systemd[1]: Started session-12.scope - Session 12 of User core. May 14 04:54:23.375915 sshd[4045]: Connection closed by 10.0.0.1 port 55026 May 14 04:54:23.376496 sshd-session[4043]: pam_unix(sshd:session): session closed for user core May 14 04:54:23.379237 systemd[1]: sshd@11-10.0.0.69:22-10.0.0.1:55026.service: Deactivated successfully. May 14 04:54:23.380852 systemd[1]: session-12.scope: Deactivated successfully. May 14 04:54:23.382416 systemd-logind[1481]: Session 12 logged out. Waiting for processes to exit. May 14 04:54:23.386694 systemd-logind[1481]: Removed session 12. May 14 04:54:28.399558 systemd[1]: Started sshd@12-10.0.0.69:22-10.0.0.1:55028.service - OpenSSH per-connection server daemon (10.0.0.1:55028). 
May 14 04:54:28.467681 sshd[4059]: Accepted publickey for core from 10.0.0.1 port 55028 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:54:28.468875 sshd-session[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:54:28.472549 systemd-logind[1481]: New session 13 of user core. May 14 04:54:28.490332 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 04:54:28.602958 sshd[4061]: Connection closed by 10.0.0.1 port 55028 May 14 04:54:28.603371 sshd-session[4059]: pam_unix(sshd:session): session closed for user core May 14 04:54:28.616369 systemd[1]: sshd@12-10.0.0.69:22-10.0.0.1:55028.service: Deactivated successfully. May 14 04:54:28.618644 systemd[1]: session-13.scope: Deactivated successfully. May 14 04:54:28.619304 systemd-logind[1481]: Session 13 logged out. Waiting for processes to exit. May 14 04:54:28.622425 systemd[1]: Started sshd@13-10.0.0.69:22-10.0.0.1:55030.service - OpenSSH per-connection server daemon (10.0.0.1:55030). May 14 04:54:28.623389 systemd-logind[1481]: Removed session 13. May 14 04:54:28.675221 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 55030 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:54:28.676419 sshd-session[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:54:28.681030 systemd-logind[1481]: New session 14 of user core. May 14 04:54:28.698330 systemd[1]: Started session-14.scope - Session 14 of User core. May 14 04:54:28.852130 sshd[4078]: Connection closed by 10.0.0.1 port 55030 May 14 04:54:28.852565 sshd-session[4076]: pam_unix(sshd:session): session closed for user core May 14 04:54:28.865225 systemd[1]: sshd@13-10.0.0.69:22-10.0.0.1:55030.service: Deactivated successfully. May 14 04:54:28.867597 systemd[1]: session-14.scope: Deactivated successfully. May 14 04:54:28.869268 systemd-logind[1481]: Session 14 logged out. Waiting for processes to exit. May 14 04:54:28.873699 systemd[1]: Started sshd@14-10.0.0.69:22-10.0.0.1:55032.service - OpenSSH per-connection server daemon (10.0.0.1:55032). May 14 04:54:28.877199 systemd-logind[1481]: Removed session 14. May 14 04:54:28.934716 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 55032 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:54:28.935924 sshd-session[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:54:28.941137 systemd-logind[1481]: New session 15 of user core. May 14 04:54:28.954309 systemd[1]: Started session-15.scope - Session 15 of User core. May 14 04:54:29.064378 sshd[4091]: Connection closed by 10.0.0.1 port 55032 May 14 04:54:29.064705 sshd-session[4089]: pam_unix(sshd:session): session closed for user core May 14 04:54:29.068560 systemd[1]: sshd@14-10.0.0.69:22-10.0.0.1:55032.service: Deactivated successfully. May 14 04:54:29.070338 systemd[1]: session-15.scope: Deactivated successfully. May 14 04:54:29.072126 systemd-logind[1481]: Session 15 logged out. Waiting for processes to exit. May 14 04:54:29.073013 systemd-logind[1481]: Removed session 15. May 14 04:54:34.077336 systemd[1]: Started sshd@15-10.0.0.69:22-10.0.0.1:42642.service - OpenSSH per-connection server daemon (10.0.0.1:42642). 
May 14 04:54:34.130507 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 42642 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:54:34.131703 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:54:34.135452 systemd-logind[1481]: New session 16 of user core. May 14 04:54:34.147321 systemd[1]: Started session-16.scope - Session 16 of User core. May 14 04:54:34.259008 sshd[4108]: Connection closed by 10.0.0.1 port 42642 May 14 04:54:34.259350 sshd-session[4106]: pam_unix(sshd:session): session closed for user core May 14 04:54:34.262707 systemd[1]: sshd@15-10.0.0.69:22-10.0.0.1:42642.service: Deactivated successfully. May 14 04:54:34.264341 systemd[1]: session-16.scope: Deactivated successfully. May 14 04:54:34.265046 systemd-logind[1481]: Session 16 logged out. Waiting for processes to exit. May 14 04:54:34.266841 systemd-logind[1481]: Removed session 16. May 14 04:54:39.271670 systemd[1]: Started sshd@16-10.0.0.69:22-10.0.0.1:42654.service - OpenSSH per-connection server daemon (10.0.0.1:42654). May 14 04:54:39.338793 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 42654 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:54:39.340202 sshd-session[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:54:39.344158 systemd-logind[1481]: New session 17 of user core. May 14 04:54:39.353363 systemd[1]: Started session-17.scope - Session 17 of User core. May 14 04:54:39.463825 sshd[4123]: Connection closed by 10.0.0.1 port 42654 May 14 04:54:39.464195 sshd-session[4121]: pam_unix(sshd:session): session closed for user core May 14 04:54:39.480334 systemd[1]: sshd@16-10.0.0.69:22-10.0.0.1:42654.service: Deactivated successfully. May 14 04:54:39.482752 systemd[1]: session-17.scope: Deactivated successfully. May 14 04:54:39.483786 systemd-logind[1481]: Session 17 logged out. Waiting for processes to exit. May 14 04:54:39.486844 systemd[1]: Started sshd@17-10.0.0.69:22-10.0.0.1:42670.service - OpenSSH per-connection server daemon (10.0.0.1:42670). May 14 04:54:39.487953 systemd-logind[1481]: Removed session 17. May 14 04:54:39.542430 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 42670 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:54:39.543669 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:54:39.548131 systemd-logind[1481]: New session 18 of user core. May 14 04:54:39.558365 systemd[1]: Started session-18.scope - Session 18 of User core. May 14 04:54:39.787950 sshd[4139]: Connection closed by 10.0.0.1 port 42670 May 14 04:54:39.788702 sshd-session[4137]: pam_unix(sshd:session): session closed for user core May 14 04:54:39.797143 systemd[1]: sshd@17-10.0.0.69:22-10.0.0.1:42670.service: Deactivated successfully. May 14 04:54:39.799708 systemd[1]: session-18.scope: Deactivated successfully. May 14 04:54:39.800618 systemd-logind[1481]: Session 18 logged out. Waiting for processes to exit. May 14 04:54:39.803449 systemd[1]: Started sshd@18-10.0.0.69:22-10.0.0.1:42678.service - OpenSSH per-connection server daemon (10.0.0.1:42678). May 14 04:54:39.805553 systemd-logind[1481]: Removed session 18. 
May 14 04:54:39.870227 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 42678 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:54:39.870813 sshd-session[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:54:39.875477 systemd-logind[1481]: New session 19 of user core. May 14 04:54:39.883349 systemd[1]: Started session-19.scope - Session 19 of User core. May 14 04:54:40.614898 sshd[4152]: Connection closed by 10.0.0.1 port 42678 May 14 04:54:40.615316 sshd-session[4150]: pam_unix(sshd:session): session closed for user core May 14 04:54:40.628940 systemd[1]: sshd@18-10.0.0.69:22-10.0.0.1:42678.service: Deactivated successfully. May 14 04:54:40.632100 systemd[1]: session-19.scope: Deactivated successfully. May 14 04:54:40.636220 systemd-logind[1481]: Session 19 logged out. Waiting for processes to exit. May 14 04:54:40.642053 systemd[1]: Started sshd@19-10.0.0.69:22-10.0.0.1:42690.service - OpenSSH per-connection server daemon (10.0.0.1:42690). May 14 04:54:40.643248 systemd-logind[1481]: Removed session 19. May 14 04:54:40.697969 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 42690 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:54:40.699742 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:54:40.706596 systemd-logind[1481]: New session 20 of user core. May 14 04:54:40.716414 systemd[1]: Started session-20.scope - Session 20 of User core. May 14 04:54:40.948887 sshd[4174]: Connection closed by 10.0.0.1 port 42690 May 14 04:54:40.949668 sshd-session[4171]: pam_unix(sshd:session): session closed for user core May 14 04:54:40.962003 systemd[1]: sshd@19-10.0.0.69:22-10.0.0.1:42690.service: Deactivated successfully. May 14 04:54:40.970501 systemd[1]: session-20.scope: Deactivated successfully. May 14 04:54:40.972464 systemd-logind[1481]: Session 20 logged out. Waiting for processes to exit. May 14 04:54:40.975559 systemd[1]: Started sshd@20-10.0.0.69:22-10.0.0.1:42694.service - OpenSSH per-connection server daemon (10.0.0.1:42694). May 14 04:54:40.977394 systemd-logind[1481]: Removed session 20. May 14 04:54:41.038681 sshd[4186]: Accepted publickey for core from 10.0.0.1 port 42694 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:54:41.040112 sshd-session[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:54:41.048080 systemd-logind[1481]: New session 21 of user core. May 14 04:54:41.062367 systemd[1]: Started session-21.scope - Session 21 of User core. May 14 04:54:41.180012 sshd[4188]: Connection closed by 10.0.0.1 port 42694 May 14 04:54:41.180387 sshd-session[4186]: pam_unix(sshd:session): session closed for user core May 14 04:54:41.184087 systemd[1]: sshd@20-10.0.0.69:22-10.0.0.1:42694.service: Deactivated successfully. May 14 04:54:41.186896 systemd[1]: session-21.scope: Deactivated successfully. May 14 04:54:41.187883 systemd-logind[1481]: Session 21 logged out. Waiting for processes to exit. May 14 04:54:41.189097 systemd-logind[1481]: Removed session 21. May 14 04:54:46.201021 systemd[1]: Started sshd@21-10.0.0.69:22-10.0.0.1:37736.service - OpenSSH per-connection server daemon (10.0.0.1:37736). 
May 14 04:54:46.278932 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 37736 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:54:46.280328 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:54:46.285175 systemd-logind[1481]: New session 22 of user core. May 14 04:54:46.295368 systemd[1]: Started session-22.scope - Session 22 of User core. May 14 04:54:46.422210 sshd[4205]: Connection closed by 10.0.0.1 port 37736 May 14 04:54:46.422410 sshd-session[4203]: pam_unix(sshd:session): session closed for user core May 14 04:54:46.426443 systemd[1]: sshd@21-10.0.0.69:22-10.0.0.1:37736.service: Deactivated successfully. May 14 04:54:46.428777 systemd[1]: session-22.scope: Deactivated successfully. May 14 04:54:46.429920 systemd-logind[1481]: Session 22 logged out. Waiting for processes to exit. May 14 04:54:46.432616 systemd-logind[1481]: Removed session 22. May 14 04:54:51.440158 systemd[1]: Started sshd@22-10.0.0.69:22-10.0.0.1:37738.service - OpenSSH per-connection server daemon (10.0.0.1:37738). May 14 04:54:51.499442 sshd[4220]: Accepted publickey for core from 10.0.0.1 port 37738 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:54:51.500816 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:54:51.508405 systemd-logind[1481]: New session 23 of user core. May 14 04:54:51.517254 systemd[1]: Started session-23.scope - Session 23 of User core. May 14 04:54:51.639885 sshd[4222]: Connection closed by 10.0.0.1 port 37738 May 14 04:54:51.640358 sshd-session[4220]: pam_unix(sshd:session): session closed for user core May 14 04:54:51.643776 systemd[1]: sshd@22-10.0.0.69:22-10.0.0.1:37738.service: Deactivated successfully. May 14 04:54:51.645464 systemd[1]: session-23.scope: Deactivated successfully. May 14 04:54:51.648544 systemd-logind[1481]: Session 23 logged out. Waiting for processes to exit. May 14 04:54:51.652097 systemd-logind[1481]: Removed session 23. May 14 04:54:56.656778 systemd[1]: Started sshd@23-10.0.0.69:22-10.0.0.1:40622.service - OpenSSH per-connection server daemon (10.0.0.1:40622). May 14 04:54:56.714919 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 40622 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:54:56.718040 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:54:56.726487 systemd-logind[1481]: New session 24 of user core. May 14 04:54:56.736415 systemd[1]: Started session-24.scope - Session 24 of User core. May 14 04:54:56.857398 sshd[4240]: Connection closed by 10.0.0.1 port 40622 May 14 04:54:56.857730 sshd-session[4238]: pam_unix(sshd:session): session closed for user core May 14 04:54:56.861763 systemd[1]: sshd@23-10.0.0.69:22-10.0.0.1:40622.service: Deactivated successfully. May 14 04:54:56.865524 systemd[1]: session-24.scope: Deactivated successfully. May 14 04:54:56.870823 systemd-logind[1481]: Session 24 logged out. Waiting for processes to exit. May 14 04:54:56.872237 systemd-logind[1481]: Removed session 24. May 14 04:55:00.802468 kubelet[2650]: E0514 04:55:00.802422 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:55:01.872307 systemd[1]: Started sshd@24-10.0.0.69:22-10.0.0.1:40634.service - OpenSSH per-connection server daemon (10.0.0.1:40634). 
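Note: the sshd blocks above follow a fixed open/close pattern, so session lifetimes fall out of the systemd-logind timestamps. A sketch pairing session 23's entries (times copied from the log):

from datetime import datetime

# Timestamps copied from session 23's logind entries above.
opened = datetime.strptime("04:54:51.508405", "%H:%M:%S.%f")  # New session 23
closed = datetime.strptime("04:54:51.652097", "%H:%M:%S.%f")  # Removed session 23
print(f"{(closed - opened).total_seconds():.6f}s")  # 0.143692s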
May 14 04:55:01.933371 sshd[4253]: Accepted publickey for core from 10.0.0.1 port 40634 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:55:01.934511 sshd-session[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:55:01.938987 systemd-logind[1481]: New session 25 of user core. May 14 04:55:01.944371 systemd[1]: Started session-25.scope - Session 25 of User core. May 14 04:55:02.074688 sshd[4255]: Connection closed by 10.0.0.1 port 40634 May 14 04:55:02.074990 sshd-session[4253]: pam_unix(sshd:session): session closed for user core May 14 04:55:02.089115 systemd[1]: sshd@24-10.0.0.69:22-10.0.0.1:40634.service: Deactivated successfully. May 14 04:55:02.090855 systemd[1]: session-25.scope: Deactivated successfully. May 14 04:55:02.092110 systemd-logind[1481]: Session 25 logged out. Waiting for processes to exit. May 14 04:55:02.095411 systemd[1]: Started sshd@25-10.0.0.69:22-10.0.0.1:40650.service - OpenSSH per-connection server daemon (10.0.0.1:40650). May 14 04:55:02.096214 systemd-logind[1481]: Removed session 25. May 14 04:55:02.151222 sshd[4269]: Accepted publickey for core from 10.0.0.1 port 40650 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:55:02.152277 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:55:02.155865 systemd-logind[1481]: New session 26 of user core. May 14 04:55:02.169308 systemd[1]: Started session-26.scope - Session 26 of User core. May 14 04:55:04.107190 containerd[1504]: time="2025-05-14T04:55:04.107133130Z" level=info msg="StopContainer for \"f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9\" with timeout 30 (s)" May 14 04:55:04.109192 containerd[1504]: time="2025-05-14T04:55:04.109086379Z" level=info msg="Stop container \"f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9\" with signal terminated" May 14 04:55:04.121673 systemd[1]: cri-containerd-f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9.scope: Deactivated successfully. 
May 14 04:55:04.123044 containerd[1504]: time="2025-05-14T04:55:04.122893836Z" level=info msg="received exit event container_id:\"f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9\" id:\"f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9\" pid:3424 exited_at:{seconds:1747198504 nanos:122598727}" May 14 04:55:04.123116 containerd[1504]: time="2025-05-14T04:55:04.123051951Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9\" id:\"f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9\" pid:3424 exited_at:{seconds:1747198504 nanos:122598727}" May 14 04:55:04.135844 containerd[1504]: time="2025-05-14T04:55:04.135776768Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 04:55:04.140626 containerd[1504]: time="2025-05-14T04:55:04.140596512Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1\" id:\"416765068d8a49c66914881c4a3ab6850028879f5562af69c8d9479f84f9797e\" pid:4298 exited_at:{seconds:1747198504 nanos:140284243}" May 14 04:55:04.142224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9-rootfs.mount: Deactivated successfully. May 14 04:55:04.151582 containerd[1504]: time="2025-05-14T04:55:04.151554473Z" level=info msg="StopContainer for \"8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1\" with timeout 2 (s)" May 14 04:55:04.151879 containerd[1504]: time="2025-05-14T04:55:04.151855382Z" level=info msg="Stop container \"8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1\" with signal terminated" May 14 04:55:04.155884 containerd[1504]: time="2025-05-14T04:55:04.155791559Z" level=info msg="StopContainer for \"f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9\" returns successfully" May 14 04:55:04.161438 containerd[1504]: time="2025-05-14T04:55:04.161391795Z" level=info msg="StopPodSandbox for \"d8749dc3a683b54a63dfbc0efbd6630b53e7f5a76a4438b64f4f6a149abb3d35\"" May 14 04:55:04.161526 containerd[1504]: time="2025-05-14T04:55:04.161478432Z" level=info msg="Container to stop \"f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 04:55:04.163499 systemd-networkd[1409]: lxc_health: Link DOWN May 14 04:55:04.163512 systemd-networkd[1409]: lxc_health: Lost carrier May 14 04:55:04.171238 systemd[1]: cri-containerd-d8749dc3a683b54a63dfbc0efbd6630b53e7f5a76a4438b64f4f6a149abb3d35.scope: Deactivated successfully. May 14 04:55:04.179437 containerd[1504]: time="2025-05-14T04:55:04.179403420Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8749dc3a683b54a63dfbc0efbd6630b53e7f5a76a4438b64f4f6a149abb3d35\" id:\"d8749dc3a683b54a63dfbc0efbd6630b53e7f5a76a4438b64f4f6a149abb3d35\" pid:2859 exit_status:137 exited_at:{seconds:1747198504 nanos:178800322}" May 14 04:55:04.182401 systemd[1]: cri-containerd-8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1.scope: Deactivated successfully. 
May 14 04:55:04.182862 systemd[1]: cri-containerd-8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1.scope: Consumed 6.370s CPU time, 122.3M memory peak, 156K read from disk, 12.9M written to disk. May 14 04:55:04.184423 containerd[1504]: time="2025-05-14T04:55:04.184155447Z" level=info msg="received exit event container_id:\"8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1\" id:\"8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1\" pid:3267 exited_at:{seconds:1747198504 nanos:183841338}" May 14 04:55:04.204952 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8749dc3a683b54a63dfbc0efbd6630b53e7f5a76a4438b64f4f6a149abb3d35-rootfs.mount: Deactivated successfully. May 14 04:55:04.207800 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1-rootfs.mount: Deactivated successfully. May 14 04:55:04.211235 containerd[1504]: time="2025-05-14T04:55:04.210999950Z" level=info msg="shim disconnected" id=d8749dc3a683b54a63dfbc0efbd6630b53e7f5a76a4438b64f4f6a149abb3d35 namespace=k8s.io May 14 04:55:04.211235 containerd[1504]: time="2025-05-14T04:55:04.211059748Z" level=warning msg="cleaning up after shim disconnected" id=d8749dc3a683b54a63dfbc0efbd6630b53e7f5a76a4438b64f4f6a149abb3d35 namespace=k8s.io May 14 04:55:04.211235 containerd[1504]: time="2025-05-14T04:55:04.211090067Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 04:55:04.213720 containerd[1504]: time="2025-05-14T04:55:04.213678092Z" level=info msg="StopContainer for \"8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1\" returns successfully" May 14 04:55:04.214301 containerd[1504]: time="2025-05-14T04:55:04.214276471Z" level=info msg="StopPodSandbox for \"0c69b548d569d8b20a93dadf7eb2ae6a65cc4156ad65e672e8ee77ea80f71503\"" May 14 04:55:04.214367 containerd[1504]: time="2025-05-14T04:55:04.214347908Z" level=info msg="Container to stop \"33774e25970b702175322cddfce41564f909acffec87d7849e0e1763b03a6eca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 04:55:04.214397 containerd[1504]: time="2025-05-14T04:55:04.214366947Z" level=info msg="Container to stop \"b0a1a6c2abdcda9895144fb2096078c0080cc8c9907e1951b377ea798bd581a2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 04:55:04.214397 containerd[1504]: time="2025-05-14T04:55:04.214375587Z" level=info msg="Container to stop \"223d4462e648531ea06d08bc7944704cd1bfa9829f502bc3e3d7051ead0c8356\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 04:55:04.214397 containerd[1504]: time="2025-05-14T04:55:04.214383907Z" level=info msg="Container to stop \"8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 04:55:04.214397 containerd[1504]: time="2025-05-14T04:55:04.214392426Z" level=info msg="Container to stop \"0a58e75c16065c4460c28c0620471d1d69205a78581cb1cb76092b195a96533a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 04:55:04.219302 systemd[1]: cri-containerd-0c69b548d569d8b20a93dadf7eb2ae6a65cc4156ad65e672e8ee77ea80f71503.scope: Deactivated successfully. 
May 14 04:55:04.226656 containerd[1504]: time="2025-05-14T04:55:04.226610982Z" level=info msg="received exit event sandbox_id:\"d8749dc3a683b54a63dfbc0efbd6630b53e7f5a76a4438b64f4f6a149abb3d35\" exit_status:137 exited_at:{seconds:1747198504 nanos:178800322}" May 14 04:55:04.227072 containerd[1504]: time="2025-05-14T04:55:04.226713498Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1\" id:\"8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1\" pid:3267 exited_at:{seconds:1747198504 nanos:183841338}" May 14 04:55:04.227196 containerd[1504]: time="2025-05-14T04:55:04.227158562Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c69b548d569d8b20a93dadf7eb2ae6a65cc4156ad65e672e8ee77ea80f71503\" id:\"0c69b548d569d8b20a93dadf7eb2ae6a65cc4156ad65e672e8ee77ea80f71503\" pid:2800 exit_status:137 exited_at:{seconds:1747198504 nanos:221696401}" May 14 04:55:04.228312 containerd[1504]: time="2025-05-14T04:55:04.228273241Z" level=info msg="TearDown network for sandbox \"d8749dc3a683b54a63dfbc0efbd6630b53e7f5a76a4438b64f4f6a149abb3d35\" successfully" May 14 04:55:04.228312 containerd[1504]: time="2025-05-14T04:55:04.228304080Z" level=info msg="StopPodSandbox for \"d8749dc3a683b54a63dfbc0efbd6630b53e7f5a76a4438b64f4f6a149abb3d35\" returns successfully" May 14 04:55:04.228462 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d8749dc3a683b54a63dfbc0efbd6630b53e7f5a76a4438b64f4f6a149abb3d35-shm.mount: Deactivated successfully. May 14 04:55:04.255153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c69b548d569d8b20a93dadf7eb2ae6a65cc4156ad65e672e8ee77ea80f71503-rootfs.mount: Deactivated successfully. May 14 04:55:04.260134 containerd[1504]: time="2025-05-14T04:55:04.259918889Z" level=info msg="shim disconnected" id=0c69b548d569d8b20a93dadf7eb2ae6a65cc4156ad65e672e8ee77ea80f71503 namespace=k8s.io May 14 04:55:04.260134 containerd[1504]: time="2025-05-14T04:55:04.260039565Z" level=warning msg="cleaning up after shim disconnected" id=0c69b548d569d8b20a93dadf7eb2ae6a65cc4156ad65e672e8ee77ea80f71503 namespace=k8s.io May 14 04:55:04.260134 containerd[1504]: time="2025-05-14T04:55:04.260068084Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 04:55:04.272756 containerd[1504]: time="2025-05-14T04:55:04.272456073Z" level=info msg="received exit event sandbox_id:\"0c69b548d569d8b20a93dadf7eb2ae6a65cc4156ad65e672e8ee77ea80f71503\" exit_status:137 exited_at:{seconds:1747198504 nanos:221696401}" May 14 04:55:04.272756 containerd[1504]: time="2025-05-14T04:55:04.272638227Z" level=info msg="TearDown network for sandbox \"0c69b548d569d8b20a93dadf7eb2ae6a65cc4156ad65e672e8ee77ea80f71503\" successfully" May 14 04:55:04.272756 containerd[1504]: time="2025-05-14T04:55:04.272664906Z" level=info msg="StopPodSandbox for \"0c69b548d569d8b20a93dadf7eb2ae6a65cc4156ad65e672e8ee77ea80f71503\" returns successfully" May 14 04:55:04.299471 kubelet[2650]: I0514 04:55:04.299437 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-host-proc-sys-kernel\") pod \"79e9619e-5dda-4811-bd8c-4c40226ec37e\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " May 14 04:55:04.305183 kubelet[2650]: I0514 04:55:04.301228 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-cni-path\") pod \"79e9619e-5dda-4811-bd8c-4c40226ec37e\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " May 14 04:55:04.305183 kubelet[2650]: I0514 04:55:04.301270 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-hostproc\") pod \"79e9619e-5dda-4811-bd8c-4c40226ec37e\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " May 14 04:55:04.305183 kubelet[2650]: I0514 04:55:04.299555 2650 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "79e9619e-5dda-4811-bd8c-4c40226ec37e" (UID: "79e9619e-5dda-4811-bd8c-4c40226ec37e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 04:55:04.305183 kubelet[2650]: I0514 04:55:04.301292 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/79e9619e-5dda-4811-bd8c-4c40226ec37e-clustermesh-secrets\") pod \"79e9619e-5dda-4811-bd8c-4c40226ec37e\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " May 14 04:55:04.305183 kubelet[2650]: I0514 04:55:04.301309 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-cilium-cgroup\") pod \"79e9619e-5dda-4811-bd8c-4c40226ec37e\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " May 14 04:55:04.305183 kubelet[2650]: I0514 04:55:04.301328 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79e9619e-5dda-4811-bd8c-4c40226ec37e-cilium-config-path\") pod \"79e9619e-5dda-4811-bd8c-4c40226ec37e\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " May 14 04:55:04.305385 kubelet[2650]: I0514 04:55:04.301343 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-host-proc-sys-net\") pod \"79e9619e-5dda-4811-bd8c-4c40226ec37e\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " May 14 04:55:04.305385 kubelet[2650]: I0514 04:55:04.301337 2650 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-hostproc" (OuterVolumeSpecName: "hostproc") pod "79e9619e-5dda-4811-bd8c-4c40226ec37e" (UID: "79e9619e-5dda-4811-bd8c-4c40226ec37e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 04:55:04.305385 kubelet[2650]: I0514 04:55:04.301328 2650 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-cni-path" (OuterVolumeSpecName: "cni-path") pod "79e9619e-5dda-4811-bd8c-4c40226ec37e" (UID: "79e9619e-5dda-4811-bd8c-4c40226ec37e"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 04:55:04.305385 kubelet[2650]: I0514 04:55:04.301362 2650 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "79e9619e-5dda-4811-bd8c-4c40226ec37e" (UID: "79e9619e-5dda-4811-bd8c-4c40226ec37e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 04:55:04.305385 kubelet[2650]: I0514 04:55:04.301364 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04f21fc0-710b-423a-9741-d0e509f46bc3-cilium-config-path\") pod \"04f21fc0-710b-423a-9741-d0e509f46bc3\" (UID: \"04f21fc0-710b-423a-9741-d0e509f46bc3\") " May 14 04:55:04.305487 kubelet[2650]: I0514 04:55:04.301412 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsws7\" (UniqueName: \"kubernetes.io/projected/04f21fc0-710b-423a-9741-d0e509f46bc3-kube-api-access-wsws7\") pod \"04f21fc0-710b-423a-9741-d0e509f46bc3\" (UID: \"04f21fc0-710b-423a-9741-d0e509f46bc3\") " May 14 04:55:04.305487 kubelet[2650]: I0514 04:55:04.301432 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-etc-cni-netd\") pod \"79e9619e-5dda-4811-bd8c-4c40226ec37e\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " May 14 04:55:04.305487 kubelet[2650]: I0514 04:55:04.301449 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pss2l\" (UniqueName: \"kubernetes.io/projected/79e9619e-5dda-4811-bd8c-4c40226ec37e-kube-api-access-pss2l\") pod \"79e9619e-5dda-4811-bd8c-4c40226ec37e\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " May 14 04:55:04.305487 kubelet[2650]: I0514 04:55:04.301466 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-lib-modules\") pod \"79e9619e-5dda-4811-bd8c-4c40226ec37e\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " May 14 04:55:04.305487 kubelet[2650]: I0514 04:55:04.301481 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-bpf-maps\") pod \"79e9619e-5dda-4811-bd8c-4c40226ec37e\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " May 14 04:55:04.305487 kubelet[2650]: I0514 04:55:04.301511 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-xtables-lock\") pod \"79e9619e-5dda-4811-bd8c-4c40226ec37e\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " May 14 04:55:04.305618 kubelet[2650]: I0514 04:55:04.301529 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-cilium-run\") pod \"79e9619e-5dda-4811-bd8c-4c40226ec37e\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " May 14 04:55:04.305618 kubelet[2650]: I0514 04:55:04.301548 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/79e9619e-5dda-4811-bd8c-4c40226ec37e-hubble-tls\") pod 
\"79e9619e-5dda-4811-bd8c-4c40226ec37e\" (UID: \"79e9619e-5dda-4811-bd8c-4c40226ec37e\") " May 14 04:55:04.305618 kubelet[2650]: I0514 04:55:04.301589 2650 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 14 04:55:04.305618 kubelet[2650]: I0514 04:55:04.301598 2650 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 14 04:55:04.305618 kubelet[2650]: I0514 04:55:04.301606 2650 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-cni-path\") on node \"localhost\" DevicePath \"\"" May 14 04:55:04.305618 kubelet[2650]: I0514 04:55:04.301613 2650 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-hostproc\") on node \"localhost\" DevicePath \"\"" May 14 04:55:04.305618 kubelet[2650]: I0514 04:55:04.302926 2650 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04f21fc0-710b-423a-9741-d0e509f46bc3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "04f21fc0-710b-423a-9741-d0e509f46bc3" (UID: "04f21fc0-710b-423a-9741-d0e509f46bc3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 14 04:55:04.305751 kubelet[2650]: I0514 04:55:04.303218 2650 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "79e9619e-5dda-4811-bd8c-4c40226ec37e" (UID: "79e9619e-5dda-4811-bd8c-4c40226ec37e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 04:55:04.305751 kubelet[2650]: I0514 04:55:04.303253 2650 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "79e9619e-5dda-4811-bd8c-4c40226ec37e" (UID: "79e9619e-5dda-4811-bd8c-4c40226ec37e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 04:55:04.305751 kubelet[2650]: I0514 04:55:04.305353 2650 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "79e9619e-5dda-4811-bd8c-4c40226ec37e" (UID: "79e9619e-5dda-4811-bd8c-4c40226ec37e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 04:55:04.305751 kubelet[2650]: I0514 04:55:04.305394 2650 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "79e9619e-5dda-4811-bd8c-4c40226ec37e" (UID: "79e9619e-5dda-4811-bd8c-4c40226ec37e"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 04:55:04.305751 kubelet[2650]: I0514 04:55:04.305411 2650 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "79e9619e-5dda-4811-bd8c-4c40226ec37e" (UID: "79e9619e-5dda-4811-bd8c-4c40226ec37e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 04:55:04.306266 kubelet[2650]: I0514 04:55:04.306229 2650 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "79e9619e-5dda-4811-bd8c-4c40226ec37e" (UID: "79e9619e-5dda-4811-bd8c-4c40226ec37e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 04:55:04.315186 kubelet[2650]: I0514 04:55:04.314358 2650 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04f21fc0-710b-423a-9741-d0e509f46bc3-kube-api-access-wsws7" (OuterVolumeSpecName: "kube-api-access-wsws7") pod "04f21fc0-710b-423a-9741-d0e509f46bc3" (UID: "04f21fc0-710b-423a-9741-d0e509f46bc3"). InnerVolumeSpecName "kube-api-access-wsws7". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 14 04:55:04.317182 kubelet[2650]: I0514 04:55:04.316370 2650 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79e9619e-5dda-4811-bd8c-4c40226ec37e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "79e9619e-5dda-4811-bd8c-4c40226ec37e" (UID: "79e9619e-5dda-4811-bd8c-4c40226ec37e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 14 04:55:04.319181 kubelet[2650]: I0514 04:55:04.317339 2650 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e9619e-5dda-4811-bd8c-4c40226ec37e-kube-api-access-pss2l" (OuterVolumeSpecName: "kube-api-access-pss2l") pod "79e9619e-5dda-4811-bd8c-4c40226ec37e" (UID: "79e9619e-5dda-4811-bd8c-4c40226ec37e"). InnerVolumeSpecName "kube-api-access-pss2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 14 04:55:04.319303 kubelet[2650]: I0514 04:55:04.317377 2650 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e9619e-5dda-4811-bd8c-4c40226ec37e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "79e9619e-5dda-4811-bd8c-4c40226ec37e" (UID: "79e9619e-5dda-4811-bd8c-4c40226ec37e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 14 04:55:04.319787 kubelet[2650]: I0514 04:55:04.319748 2650 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e9619e-5dda-4811-bd8c-4c40226ec37e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "79e9619e-5dda-4811-bd8c-4c40226ec37e" (UID: "79e9619e-5dda-4811-bd8c-4c40226ec37e"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 14 04:55:04.402176 kubelet[2650]: I0514 04:55:04.402129 2650 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 14 04:55:04.402176 kubelet[2650]: I0514 04:55:04.402158 2650 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 14 04:55:04.402264 kubelet[2650]: I0514 04:55:04.402191 2650 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-cilium-run\") on node \"localhost\" DevicePath \"\"" May 14 04:55:04.402264 kubelet[2650]: I0514 04:55:04.402203 2650 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/79e9619e-5dda-4811-bd8c-4c40226ec37e-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 14 04:55:04.402264 kubelet[2650]: I0514 04:55:04.402212 2650 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/79e9619e-5dda-4811-bd8c-4c40226ec37e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 14 04:55:04.402264 kubelet[2650]: I0514 04:55:04.402220 2650 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79e9619e-5dda-4811-bd8c-4c40226ec37e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 04:55:04.402264 kubelet[2650]: I0514 04:55:04.402228 2650 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 14 04:55:04.402264 kubelet[2650]: I0514 04:55:04.402237 2650 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04f21fc0-710b-423a-9741-d0e509f46bc3-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 04:55:04.402264 kubelet[2650]: I0514 04:55:04.402245 2650 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wsws7\" (UniqueName: \"kubernetes.io/projected/04f21fc0-710b-423a-9741-d0e509f46bc3-kube-api-access-wsws7\") on node \"localhost\" DevicePath \"\"" May 14 04:55:04.402264 kubelet[2650]: I0514 04:55:04.402252 2650 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 14 04:55:04.402425 kubelet[2650]: I0514 04:55:04.402260 2650 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pss2l\" (UniqueName: \"kubernetes.io/projected/79e9619e-5dda-4811-bd8c-4c40226ec37e-kube-api-access-pss2l\") on node \"localhost\" DevicePath \"\"" May 14 04:55:04.402425 kubelet[2650]: I0514 04:55:04.402268 2650 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79e9619e-5dda-4811-bd8c-4c40226ec37e-lib-modules\") on node \"localhost\" DevicePath \"\"" May 14 04:55:05.020037 kubelet[2650]: I0514 04:55:05.019950 2650 scope.go:117] "RemoveContainer" containerID="f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9" May 14 04:55:05.023352 containerd[1504]: 
time="2025-05-14T04:55:05.023315569Z" level=info msg="RemoveContainer for \"f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9\"" May 14 04:55:05.029589 containerd[1504]: time="2025-05-14T04:55:05.029409329Z" level=info msg="RemoveContainer for \"f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9\" returns successfully" May 14 04:55:05.029910 kubelet[2650]: I0514 04:55:05.029883 2650 scope.go:117] "RemoveContainer" containerID="f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9" May 14 04:55:05.031186 containerd[1504]: time="2025-05-14T04:55:05.030480534Z" level=error msg="ContainerStatus for \"f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9\": not found" May 14 04:55:05.031231 systemd[1]: Removed slice kubepods-besteffort-pod04f21fc0_710b_423a_9741_d0e509f46bc3.slice - libcontainer container kubepods-besteffort-pod04f21fc0_710b_423a_9741_d0e509f46bc3.slice. May 14 04:55:05.032125 kubelet[2650]: E0514 04:55:05.032027 2650 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9\": not found" containerID="f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9" May 14 04:55:05.032289 kubelet[2650]: I0514 04:55:05.032120 2650 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9"} err="failed to get container status \"f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"f0c5296fcd9b4025bba4c0e2a90090931f6b8c7061dc98da793796a5134c33f9\": not found" May 14 04:55:05.032321 kubelet[2650]: I0514 04:55:05.032291 2650 scope.go:117] "RemoveContainer" containerID="8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1" May 14 04:55:05.035479 systemd[1]: Removed slice kubepods-burstable-pod79e9619e_5dda_4811_bd8c_4c40226ec37e.slice - libcontainer container kubepods-burstable-pod79e9619e_5dda_4811_bd8c_4c40226ec37e.slice. May 14 04:55:05.035626 systemd[1]: kubepods-burstable-pod79e9619e_5dda_4811_bd8c_4c40226ec37e.slice: Consumed 6.504s CPU time, 122.6M memory peak, 160K read from disk, 12.9M written to disk. 
May 14 04:55:05.039554 containerd[1504]: time="2025-05-14T04:55:05.039524877Z" level=info msg="RemoveContainer for \"8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1\"" May 14 04:55:05.051775 containerd[1504]: time="2025-05-14T04:55:05.051640279Z" level=info msg="RemoveContainer for \"8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1\" returns successfully" May 14 04:55:05.051855 kubelet[2650]: I0514 04:55:05.051818 2650 scope.go:117] "RemoveContainer" containerID="223d4462e648531ea06d08bc7944704cd1bfa9829f502bc3e3d7051ead0c8356" May 14 04:55:05.053066 containerd[1504]: time="2025-05-14T04:55:05.053043353Z" level=info msg="RemoveContainer for \"223d4462e648531ea06d08bc7944704cd1bfa9829f502bc3e3d7051ead0c8356\"" May 14 04:55:05.058390 containerd[1504]: time="2025-05-14T04:55:05.058359298Z" level=info msg="RemoveContainer for \"223d4462e648531ea06d08bc7944704cd1bfa9829f502bc3e3d7051ead0c8356\" returns successfully" May 14 04:55:05.058856 kubelet[2650]: I0514 04:55:05.058788 2650 scope.go:117] "RemoveContainer" containerID="0a58e75c16065c4460c28c0620471d1d69205a78581cb1cb76092b195a96533a" May 14 04:55:05.062018 containerd[1504]: time="2025-05-14T04:55:05.061979859Z" level=info msg="RemoveContainer for \"0a58e75c16065c4460c28c0620471d1d69205a78581cb1cb76092b195a96533a\"" May 14 04:55:05.066274 containerd[1504]: time="2025-05-14T04:55:05.066242559Z" level=info msg="RemoveContainer for \"0a58e75c16065c4460c28c0620471d1d69205a78581cb1cb76092b195a96533a\" returns successfully" May 14 04:55:05.066499 kubelet[2650]: I0514 04:55:05.066476 2650 scope.go:117] "RemoveContainer" containerID="b0a1a6c2abdcda9895144fb2096078c0080cc8c9907e1951b377ea798bd581a2" May 14 04:55:05.067708 containerd[1504]: time="2025-05-14T04:55:05.067686711Z" level=info msg="RemoveContainer for \"b0a1a6c2abdcda9895144fb2096078c0080cc8c9907e1951b377ea798bd581a2\"" May 14 04:55:05.070567 containerd[1504]: time="2025-05-14T04:55:05.070491179Z" level=info msg="RemoveContainer for \"b0a1a6c2abdcda9895144fb2096078c0080cc8c9907e1951b377ea798bd581a2\" returns successfully" May 14 04:55:05.070665 kubelet[2650]: I0514 04:55:05.070636 2650 scope.go:117] "RemoveContainer" containerID="33774e25970b702175322cddfce41564f909acffec87d7849e0e1763b03a6eca" May 14 04:55:05.072196 containerd[1504]: time="2025-05-14T04:55:05.071919612Z" level=info msg="RemoveContainer for \"33774e25970b702175322cddfce41564f909acffec87d7849e0e1763b03a6eca\"" May 14 04:55:05.074168 containerd[1504]: time="2025-05-14T04:55:05.074131180Z" level=info msg="RemoveContainer for \"33774e25970b702175322cddfce41564f909acffec87d7849e0e1763b03a6eca\" returns successfully" May 14 04:55:05.074303 kubelet[2650]: I0514 04:55:05.074283 2650 scope.go:117] "RemoveContainer" containerID="8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1" May 14 04:55:05.074510 containerd[1504]: time="2025-05-14T04:55:05.074459169Z" level=error msg="ContainerStatus for \"8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1\": not found" May 14 04:55:05.074915 kubelet[2650]: E0514 04:55:05.074790 2650 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1\": not found" 
containerID="8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1" May 14 04:55:05.074915 kubelet[2650]: I0514 04:55:05.074825 2650 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1"} err="failed to get container status \"8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1\": rpc error: code = NotFound desc = an error occurred when try to find container \"8da673c29d54e894bfb519a32f565022bdc867a2551a0dacede643eb7fde04c1\": not found" May 14 04:55:05.074915 kubelet[2650]: I0514 04:55:05.074844 2650 scope.go:117] "RemoveContainer" containerID="223d4462e648531ea06d08bc7944704cd1bfa9829f502bc3e3d7051ead0c8356" May 14 04:55:05.075186 containerd[1504]: time="2025-05-14T04:55:05.075131107Z" level=error msg="ContainerStatus for \"223d4462e648531ea06d08bc7944704cd1bfa9829f502bc3e3d7051ead0c8356\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"223d4462e648531ea06d08bc7944704cd1bfa9829f502bc3e3d7051ead0c8356\": not found" May 14 04:55:05.075439 kubelet[2650]: E0514 04:55:05.075418 2650 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"223d4462e648531ea06d08bc7944704cd1bfa9829f502bc3e3d7051ead0c8356\": not found" containerID="223d4462e648531ea06d08bc7944704cd1bfa9829f502bc3e3d7051ead0c8356" May 14 04:55:05.075502 kubelet[2650]: I0514 04:55:05.075443 2650 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"223d4462e648531ea06d08bc7944704cd1bfa9829f502bc3e3d7051ead0c8356"} err="failed to get container status \"223d4462e648531ea06d08bc7944704cd1bfa9829f502bc3e3d7051ead0c8356\": rpc error: code = NotFound desc = an error occurred when try to find container \"223d4462e648531ea06d08bc7944704cd1bfa9829f502bc3e3d7051ead0c8356\": not found" May 14 04:55:05.075502 kubelet[2650]: I0514 04:55:05.075460 2650 scope.go:117] "RemoveContainer" containerID="0a58e75c16065c4460c28c0620471d1d69205a78581cb1cb76092b195a96533a" May 14 04:55:05.075734 containerd[1504]: time="2025-05-14T04:55:05.075624091Z" level=error msg="ContainerStatus for \"0a58e75c16065c4460c28c0620471d1d69205a78581cb1cb76092b195a96533a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0a58e75c16065c4460c28c0620471d1d69205a78581cb1cb76092b195a96533a\": not found" May 14 04:55:05.075763 kubelet[2650]: E0514 04:55:05.075723 2650 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0a58e75c16065c4460c28c0620471d1d69205a78581cb1cb76092b195a96533a\": not found" containerID="0a58e75c16065c4460c28c0620471d1d69205a78581cb1cb76092b195a96533a" May 14 04:55:05.075763 kubelet[2650]: I0514 04:55:05.075745 2650 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0a58e75c16065c4460c28c0620471d1d69205a78581cb1cb76092b195a96533a"} err="failed to get container status \"0a58e75c16065c4460c28c0620471d1d69205a78581cb1cb76092b195a96533a\": rpc error: code = NotFound desc = an error occurred when try to find container \"0a58e75c16065c4460c28c0620471d1d69205a78581cb1cb76092b195a96533a\": not found" May 14 04:55:05.075763 kubelet[2650]: I0514 04:55:05.075760 2650 scope.go:117] "RemoveContainer" 
containerID="b0a1a6c2abdcda9895144fb2096078c0080cc8c9907e1951b377ea798bd581a2" May 14 04:55:05.076062 containerd[1504]: time="2025-05-14T04:55:05.075999838Z" level=error msg="ContainerStatus for \"b0a1a6c2abdcda9895144fb2096078c0080cc8c9907e1951b377ea798bd581a2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b0a1a6c2abdcda9895144fb2096078c0080cc8c9907e1951b377ea798bd581a2\": not found" May 14 04:55:05.076154 kubelet[2650]: E0514 04:55:05.076095 2650 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b0a1a6c2abdcda9895144fb2096078c0080cc8c9907e1951b377ea798bd581a2\": not found" containerID="b0a1a6c2abdcda9895144fb2096078c0080cc8c9907e1951b377ea798bd581a2" May 14 04:55:05.076154 kubelet[2650]: I0514 04:55:05.076125 2650 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b0a1a6c2abdcda9895144fb2096078c0080cc8c9907e1951b377ea798bd581a2"} err="failed to get container status \"b0a1a6c2abdcda9895144fb2096078c0080cc8c9907e1951b377ea798bd581a2\": rpc error: code = NotFound desc = an error occurred when try to find container \"b0a1a6c2abdcda9895144fb2096078c0080cc8c9907e1951b377ea798bd581a2\": not found" May 14 04:55:05.076154 kubelet[2650]: I0514 04:55:05.076139 2650 scope.go:117] "RemoveContainer" containerID="33774e25970b702175322cddfce41564f909acffec87d7849e0e1763b03a6eca" May 14 04:55:05.076456 containerd[1504]: time="2025-05-14T04:55:05.076366226Z" level=error msg="ContainerStatus for \"33774e25970b702175322cddfce41564f909acffec87d7849e0e1763b03a6eca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"33774e25970b702175322cddfce41564f909acffec87d7849e0e1763b03a6eca\": not found" May 14 04:55:05.076673 kubelet[2650]: E0514 04:55:05.076649 2650 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33774e25970b702175322cddfce41564f909acffec87d7849e0e1763b03a6eca\": not found" containerID="33774e25970b702175322cddfce41564f909acffec87d7849e0e1763b03a6eca" May 14 04:55:05.076717 kubelet[2650]: I0514 04:55:05.076677 2650 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"33774e25970b702175322cddfce41564f909acffec87d7849e0e1763b03a6eca"} err="failed to get container status \"33774e25970b702175322cddfce41564f909acffec87d7849e0e1763b03a6eca\": rpc error: code = NotFound desc = an error occurred when try to find container \"33774e25970b702175322cddfce41564f909acffec87d7849e0e1763b03a6eca\": not found" May 14 04:55:05.141997 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0c69b548d569d8b20a93dadf7eb2ae6a65cc4156ad65e672e8ee77ea80f71503-shm.mount: Deactivated successfully. May 14 04:55:05.142097 systemd[1]: var-lib-kubelet-pods-04f21fc0\x2d710b\x2d423a\x2d9741\x2dd0e509f46bc3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwsws7.mount: Deactivated successfully. May 14 04:55:05.142149 systemd[1]: var-lib-kubelet-pods-79e9619e\x2d5dda\x2d4811\x2dbd8c\x2d4c40226ec37e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpss2l.mount: Deactivated successfully. May 14 04:55:05.142227 systemd[1]: var-lib-kubelet-pods-79e9619e\x2d5dda\x2d4811\x2dbd8c\x2d4c40226ec37e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 14 04:55:05.142276 systemd[1]: var-lib-kubelet-pods-79e9619e\x2d5dda\x2d4811\x2dbd8c\x2d4c40226ec37e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 14 04:55:05.802843 kubelet[2650]: I0514 04:55:05.802755 2650 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04f21fc0-710b-423a-9741-d0e509f46bc3" path="/var/lib/kubelet/pods/04f21fc0-710b-423a-9741-d0e509f46bc3/volumes" May 14 04:55:05.803625 kubelet[2650]: I0514 04:55:05.803599 2650 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79e9619e-5dda-4811-bd8c-4c40226ec37e" path="/var/lib/kubelet/pods/79e9619e-5dda-4811-bd8c-4c40226ec37e/volumes" May 14 04:55:05.855185 kubelet[2650]: E0514 04:55:05.855134 2650 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 14 04:55:06.071262 sshd[4271]: Connection closed by 10.0.0.1 port 40650 May 14 04:55:06.071499 sshd-session[4269]: pam_unix(sshd:session): session closed for user core May 14 04:55:06.086286 systemd[1]: sshd@25-10.0.0.69:22-10.0.0.1:40650.service: Deactivated successfully. May 14 04:55:06.087926 systemd[1]: session-26.scope: Deactivated successfully. May 14 04:55:06.088150 systemd[1]: session-26.scope: Consumed 1.274s CPU time, 25.3M memory peak. May 14 04:55:06.088704 systemd-logind[1481]: Session 26 logged out. Waiting for processes to exit. May 14 04:55:06.091605 systemd[1]: Started sshd@26-10.0.0.69:22-10.0.0.1:50390.service - OpenSSH per-connection server daemon (10.0.0.1:50390). May 14 04:55:06.092093 systemd-logind[1481]: Removed session 26. May 14 04:55:06.150396 sshd[4428]: Accepted publickey for core from 10.0.0.1 port 50390 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:55:06.151501 sshd-session[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:55:06.155108 systemd-logind[1481]: New session 27 of user core. May 14 04:55:06.161304 systemd[1]: Started session-27.scope - Session 27 of User core. May 14 04:55:07.198184 sshd[4430]: Connection closed by 10.0.0.1 port 50390 May 14 04:55:07.200077 sshd-session[4428]: pam_unix(sshd:session): session closed for user core May 14 04:55:07.214369 systemd[1]: sshd@26-10.0.0.69:22-10.0.0.1:50390.service: Deactivated successfully. May 14 04:55:07.216452 systemd[1]: session-27.scope: Deactivated successfully. May 14 04:55:07.218282 systemd-logind[1481]: Session 27 logged out. Waiting for processes to exit. May 14 04:55:07.222909 kubelet[2650]: I0514 04:55:07.222861 2650 memory_manager.go:355] "RemoveStaleState removing state" podUID="79e9619e-5dda-4811-bd8c-4c40226ec37e" containerName="cilium-agent" May 14 04:55:07.222909 kubelet[2650]: I0514 04:55:07.222893 2650 memory_manager.go:355] "RemoveStaleState removing state" podUID="04f21fc0-710b-423a-9741-d0e509f46bc3" containerName="cilium-operator" May 14 04:55:07.226198 systemd[1]: Started sshd@27-10.0.0.69:22-10.0.0.1:50396.service - OpenSSH per-connection server daemon (10.0.0.1:50396). May 14 04:55:07.227766 systemd-logind[1481]: Removed session 27. May 14 04:55:07.241461 systemd[1]: Created slice kubepods-burstable-podcddca269_de30_46c3_af37_4c87a0303483.slice - libcontainer container kubepods-burstable-podcddca269_de30_46c3_af37_4c87a0303483.slice. 
May 14 04:55:07.285222 sshd[4442]: Accepted publickey for core from 10.0.0.1 port 50396 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:55:07.286438 sshd-session[4442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:55:07.290380 systemd-logind[1481]: New session 28 of user core. May 14 04:55:07.304339 systemd[1]: Started session-28.scope - Session 28 of User core. May 14 04:55:07.320292 kubelet[2650]: I0514 04:55:07.320216 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cddca269-de30-46c3-af37-4c87a0303483-cni-path\") pod \"cilium-82nr7\" (UID: \"cddca269-de30-46c3-af37-4c87a0303483\") " pod="kube-system/cilium-82nr7" May 14 04:55:07.320292 kubelet[2650]: I0514 04:55:07.320254 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cddca269-de30-46c3-af37-4c87a0303483-xtables-lock\") pod \"cilium-82nr7\" (UID: \"cddca269-de30-46c3-af37-4c87a0303483\") " pod="kube-system/cilium-82nr7" May 14 04:55:07.320292 kubelet[2650]: I0514 04:55:07.320273 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cddca269-de30-46c3-af37-4c87a0303483-clustermesh-secrets\") pod \"cilium-82nr7\" (UID: \"cddca269-de30-46c3-af37-4c87a0303483\") " pod="kube-system/cilium-82nr7" May 14 04:55:07.320640 kubelet[2650]: I0514 04:55:07.320477 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cddca269-de30-46c3-af37-4c87a0303483-etc-cni-netd\") pod \"cilium-82nr7\" (UID: \"cddca269-de30-46c3-af37-4c87a0303483\") " pod="kube-system/cilium-82nr7" May 14 04:55:07.320640 kubelet[2650]: I0514 04:55:07.320502 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cddca269-de30-46c3-af37-4c87a0303483-cilium-config-path\") pod \"cilium-82nr7\" (UID: \"cddca269-de30-46c3-af37-4c87a0303483\") " pod="kube-system/cilium-82nr7" May 14 04:55:07.320640 kubelet[2650]: I0514 04:55:07.320517 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cddca269-de30-46c3-af37-4c87a0303483-bpf-maps\") pod \"cilium-82nr7\" (UID: \"cddca269-de30-46c3-af37-4c87a0303483\") " pod="kube-system/cilium-82nr7" May 14 04:55:07.320640 kubelet[2650]: I0514 04:55:07.320532 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cddca269-de30-46c3-af37-4c87a0303483-cilium-cgroup\") pod \"cilium-82nr7\" (UID: \"cddca269-de30-46c3-af37-4c87a0303483\") " pod="kube-system/cilium-82nr7" May 14 04:55:07.320640 kubelet[2650]: I0514 04:55:07.320551 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cddca269-de30-46c3-af37-4c87a0303483-host-proc-sys-net\") pod \"cilium-82nr7\" (UID: \"cddca269-de30-46c3-af37-4c87a0303483\") " pod="kube-system/cilium-82nr7" May 14 04:55:07.320640 kubelet[2650]: I0514 04:55:07.320595 2650 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cddca269-de30-46c3-af37-4c87a0303483-host-proc-sys-kernel\") pod \"cilium-82nr7\" (UID: \"cddca269-de30-46c3-af37-4c87a0303483\") " pod="kube-system/cilium-82nr7" May 14 04:55:07.320781 kubelet[2650]: I0514 04:55:07.320610 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/cddca269-de30-46c3-af37-4c87a0303483-cilium-ipsec-secrets\") pod \"cilium-82nr7\" (UID: \"cddca269-de30-46c3-af37-4c87a0303483\") " pod="kube-system/cilium-82nr7" May 14 04:55:07.320908 kubelet[2650]: I0514 04:55:07.320626 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cddca269-de30-46c3-af37-4c87a0303483-hostproc\") pod \"cilium-82nr7\" (UID: \"cddca269-de30-46c3-af37-4c87a0303483\") " pod="kube-system/cilium-82nr7" May 14 04:55:07.320908 kubelet[2650]: I0514 04:55:07.320852 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cddca269-de30-46c3-af37-4c87a0303483-hubble-tls\") pod \"cilium-82nr7\" (UID: \"cddca269-de30-46c3-af37-4c87a0303483\") " pod="kube-system/cilium-82nr7" May 14 04:55:07.320908 kubelet[2650]: I0514 04:55:07.320869 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwg6r\" (UniqueName: \"kubernetes.io/projected/cddca269-de30-46c3-af37-4c87a0303483-kube-api-access-vwg6r\") pod \"cilium-82nr7\" (UID: \"cddca269-de30-46c3-af37-4c87a0303483\") " pod="kube-system/cilium-82nr7" May 14 04:55:07.320908 kubelet[2650]: I0514 04:55:07.320887 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cddca269-de30-46c3-af37-4c87a0303483-lib-modules\") pod \"cilium-82nr7\" (UID: \"cddca269-de30-46c3-af37-4c87a0303483\") " pod="kube-system/cilium-82nr7" May 14 04:55:07.321056 kubelet[2650]: I0514 04:55:07.321042 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cddca269-de30-46c3-af37-4c87a0303483-cilium-run\") pod \"cilium-82nr7\" (UID: \"cddca269-de30-46c3-af37-4c87a0303483\") " pod="kube-system/cilium-82nr7" May 14 04:55:07.356001 sshd[4444]: Connection closed by 10.0.0.1 port 50396 May 14 04:55:07.355853 sshd-session[4442]: pam_unix(sshd:session): session closed for user core May 14 04:55:07.366511 systemd[1]: sshd@27-10.0.0.69:22-10.0.0.1:50396.service: Deactivated successfully. May 14 04:55:07.368532 systemd[1]: session-28.scope: Deactivated successfully. May 14 04:55:07.369330 systemd-logind[1481]: Session 28 logged out. Waiting for processes to exit. May 14 04:55:07.371773 systemd[1]: Started sshd@28-10.0.0.69:22-10.0.0.1:50400.service - OpenSSH per-connection server daemon (10.0.0.1:50400). May 14 04:55:07.372376 systemd-logind[1481]: Removed session 28. May 14 04:55:07.432375 sshd[4451]: Accepted publickey for core from 10.0.0.1 port 50400 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:55:07.431804 sshd-session[4451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:55:07.439267 systemd-logind[1481]: New session 29 of user core. 
May 14 04:55:07.446322 systemd[1]: Started session-29.scope - Session 29 of User core. May 14 04:55:07.547884 kubelet[2650]: E0514 04:55:07.547776 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:55:07.549985 containerd[1504]: time="2025-05-14T04:55:07.549862364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-82nr7,Uid:cddca269-de30-46c3-af37-4c87a0303483,Namespace:kube-system,Attempt:0,}" May 14 04:55:07.566479 containerd[1504]: time="2025-05-14T04:55:07.566440011Z" level=info msg="connecting to shim ad01c2d7cf7743ea4d03353569ee1d556a4f0897b03616c58f91c6a0966b39fb" address="unix:///run/containerd/s/954341a8f2f68223e57ce32e817b256d56a1b6778f700986a74f042556f5c35f" namespace=k8s.io protocol=ttrpc version=3 May 14 04:55:07.589353 systemd[1]: Started cri-containerd-ad01c2d7cf7743ea4d03353569ee1d556a4f0897b03616c58f91c6a0966b39fb.scope - libcontainer container ad01c2d7cf7743ea4d03353569ee1d556a4f0897b03616c58f91c6a0966b39fb. May 14 04:55:07.611581 containerd[1504]: time="2025-05-14T04:55:07.611536914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-82nr7,Uid:cddca269-de30-46c3-af37-4c87a0303483,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad01c2d7cf7743ea4d03353569ee1d556a4f0897b03616c58f91c6a0966b39fb\"" May 14 04:55:07.612480 kubelet[2650]: E0514 04:55:07.612153 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:55:07.614343 containerd[1504]: time="2025-05-14T04:55:07.614307921Z" level=info msg="CreateContainer within sandbox \"ad01c2d7cf7743ea4d03353569ee1d556a4f0897b03616c58f91c6a0966b39fb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 04:55:07.623022 containerd[1504]: time="2025-05-14T04:55:07.622412270Z" level=info msg="Container 682af268e94507b076a791e11f8fab54916bd3f19e695a0ded9409e20b68a476: CDI devices from CRI Config.CDIDevices: []" May 14 04:55:07.627932 containerd[1504]: time="2025-05-14T04:55:07.627899366Z" level=info msg="CreateContainer within sandbox \"ad01c2d7cf7743ea4d03353569ee1d556a4f0897b03616c58f91c6a0966b39fb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"682af268e94507b076a791e11f8fab54916bd3f19e695a0ded9409e20b68a476\"" May 14 04:55:07.629550 containerd[1504]: time="2025-05-14T04:55:07.629520684Z" level=info msg="StartContainer for \"682af268e94507b076a791e11f8fab54916bd3f19e695a0ded9409e20b68a476\"" May 14 04:55:07.630605 containerd[1504]: time="2025-05-14T04:55:07.630567297Z" level=info msg="connecting to shim 682af268e94507b076a791e11f8fab54916bd3f19e695a0ded9409e20b68a476" address="unix:///run/containerd/s/954341a8f2f68223e57ce32e817b256d56a1b6778f700986a74f042556f5c35f" protocol=ttrpc version=3 May 14 04:55:07.648395 systemd[1]: Started cri-containerd-682af268e94507b076a791e11f8fab54916bd3f19e695a0ded9409e20b68a476.scope - libcontainer container 682af268e94507b076a791e11f8fab54916bd3f19e695a0ded9409e20b68a476. May 14 04:55:07.671621 containerd[1504]: time="2025-05-14T04:55:07.671586226Z" level=info msg="StartContainer for \"682af268e94507b076a791e11f8fab54916bd3f19e695a0ded9409e20b68a476\" returns successfully" May 14 04:55:07.688490 systemd[1]: cri-containerd-682af268e94507b076a791e11f8fab54916bd3f19e695a0ded9409e20b68a476.scope: Deactivated successfully. 
May 14 04:55:07.691677 containerd[1504]: time="2025-05-14T04:55:07.690931641Z" level=info msg="received exit event container_id:\"682af268e94507b076a791e11f8fab54916bd3f19e695a0ded9409e20b68a476\" id:\"682af268e94507b076a791e11f8fab54916bd3f19e695a0ded9409e20b68a476\" pid:4520 exited_at:{seconds:1747198507 nanos:690678527}" May 14 04:55:07.692041 containerd[1504]: time="2025-05-14T04:55:07.692002533Z" level=info msg="TaskExit event in podsandbox handler container_id:\"682af268e94507b076a791e11f8fab54916bd3f19e695a0ded9409e20b68a476\" id:\"682af268e94507b076a791e11f8fab54916bd3f19e695a0ded9409e20b68a476\" pid:4520 exited_at:{seconds:1747198507 nanos:690678527}" May 14 04:55:07.801544 kubelet[2650]: E0514 04:55:07.801070 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:55:07.802028 kubelet[2650]: E0514 04:55:07.801624 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:55:07.840188 kubelet[2650]: I0514 04:55:07.840001 2650 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-14T04:55:07Z","lastTransitionTime":"2025-05-14T04:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 14 04:55:08.034445 kubelet[2650]: E0514 04:55:08.034417 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 04:55:08.039463 containerd[1504]: time="2025-05-14T04:55:08.039426865Z" level=info msg="CreateContainer within sandbox \"ad01c2d7cf7743ea4d03353569ee1d556a4f0897b03616c58f91c6a0966b39fb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 04:55:08.045232 containerd[1504]: time="2025-05-14T04:55:08.045196533Z" level=info msg="Container 40abaa450a03ff1086d4863dd482f1f4652661bc4e0c5f626d3f91cf3b2ac42d: CDI devices from CRI Config.CDIDevices: []" May 14 04:55:08.052445 containerd[1504]: time="2025-05-14T04:55:08.052363409Z" level=info msg="CreateContainer within sandbox \"ad01c2d7cf7743ea4d03353569ee1d556a4f0897b03616c58f91c6a0966b39fb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"40abaa450a03ff1086d4863dd482f1f4652661bc4e0c5f626d3f91cf3b2ac42d\"" May 14 04:55:08.052790 containerd[1504]: time="2025-05-14T04:55:08.052763880Z" level=info msg="StartContainer for \"40abaa450a03ff1086d4863dd482f1f4652661bc4e0c5f626d3f91cf3b2ac42d\"" May 14 04:55:08.054482 containerd[1504]: time="2025-05-14T04:55:08.054457721Z" level=info msg="connecting to shim 40abaa450a03ff1086d4863dd482f1f4652661bc4e0c5f626d3f91cf3b2ac42d" address="unix:///run/containerd/s/954341a8f2f68223e57ce32e817b256d56a1b6778f700986a74f042556f5c35f" protocol=ttrpc version=3 May 14 04:55:08.078383 systemd[1]: Started cri-containerd-40abaa450a03ff1086d4863dd482f1f4652661bc4e0c5f626d3f91cf3b2ac42d.scope - libcontainer container 40abaa450a03ff1086d4863dd482f1f4652661bc4e0c5f626d3f91cf3b2ac42d. 
May 14 04:55:08.101125 containerd[1504]: time="2025-05-14T04:55:08.101030655Z" level=info msg="StartContainer for \"40abaa450a03ff1086d4863dd482f1f4652661bc4e0c5f626d3f91cf3b2ac42d\" returns successfully"
May 14 04:55:08.118548 systemd[1]: cri-containerd-40abaa450a03ff1086d4863dd482f1f4652661bc4e0c5f626d3f91cf3b2ac42d.scope: Deactivated successfully.
May 14 04:55:08.119016 containerd[1504]: time="2025-05-14T04:55:08.118933805Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40abaa450a03ff1086d4863dd482f1f4652661bc4e0c5f626d3f91cf3b2ac42d\" id:\"40abaa450a03ff1086d4863dd482f1f4652661bc4e0c5f626d3f91cf3b2ac42d\" pid:4566 exited_at:{seconds:1747198508 nanos:118703850}"
May 14 04:55:08.119016 containerd[1504]: time="2025-05-14T04:55:08.119004083Z" level=info msg="received exit event container_id:\"40abaa450a03ff1086d4863dd482f1f4652661bc4e0c5f626d3f91cf3b2ac42d\" id:\"40abaa450a03ff1086d4863dd482f1f4652661bc4e0c5f626d3f91cf3b2ac42d\" pid:4566 exited_at:{seconds:1747198508 nanos:118703850}"
May 14 04:55:09.038120 kubelet[2650]: E0514 04:55:09.038085 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 04:55:09.041311 containerd[1504]: time="2025-05-14T04:55:09.041268735Z" level=info msg="CreateContainer within sandbox \"ad01c2d7cf7743ea4d03353569ee1d556a4f0897b03616c58f91c6a0966b39fb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 14 04:55:09.053712 containerd[1504]: time="2025-05-14T04:55:09.052620711Z" level=info msg="Container 55fae1e0faee0ac88271291cc7b73cae5f973dfe33f872f4cb9ff1ac1c3373eb: CDI devices from CRI Config.CDIDevices: []"
May 14 04:55:09.058707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1231020492.mount: Deactivated successfully.
May 14 04:55:09.068626 containerd[1504]: time="2025-05-14T04:55:09.068579475Z" level=info msg="CreateContainer within sandbox \"ad01c2d7cf7743ea4d03353569ee1d556a4f0897b03616c58f91c6a0966b39fb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"55fae1e0faee0ac88271291cc7b73cae5f973dfe33f872f4cb9ff1ac1c3373eb\""
May 14 04:55:09.069387 containerd[1504]: time="2025-05-14T04:55:09.069353580Z" level=info msg="StartContainer for \"55fae1e0faee0ac88271291cc7b73cae5f973dfe33f872f4cb9ff1ac1c3373eb\""
May 14 04:55:09.070749 containerd[1504]: time="2025-05-14T04:55:09.070707353Z" level=info msg="connecting to shim 55fae1e0faee0ac88271291cc7b73cae5f973dfe33f872f4cb9ff1ac1c3373eb" address="unix:///run/containerd/s/954341a8f2f68223e57ce32e817b256d56a1b6778f700986a74f042556f5c35f" protocol=ttrpc version=3
May 14 04:55:09.101327 systemd[1]: Started cri-containerd-55fae1e0faee0ac88271291cc7b73cae5f973dfe33f872f4cb9ff1ac1c3373eb.scope - libcontainer container 55fae1e0faee0ac88271291cc7b73cae5f973dfe33f872f4cb9ff1ac1c3373eb.
May 14 04:55:09.133556 systemd[1]: cri-containerd-55fae1e0faee0ac88271291cc7b73cae5f973dfe33f872f4cb9ff1ac1c3373eb.scope: Deactivated successfully.
May 14 04:55:09.135981 containerd[1504]: time="2025-05-14T04:55:09.135910823Z" level=info msg="received exit event container_id:\"55fae1e0faee0ac88271291cc7b73cae5f973dfe33f872f4cb9ff1ac1c3373eb\" id:\"55fae1e0faee0ac88271291cc7b73cae5f973dfe33f872f4cb9ff1ac1c3373eb\" pid:4609 exited_at:{seconds:1747198509 nanos:135741026}"
May 14 04:55:09.136260 containerd[1504]: time="2025-05-14T04:55:09.135988542Z" level=info msg="TaskExit event in podsandbox handler container_id:\"55fae1e0faee0ac88271291cc7b73cae5f973dfe33f872f4cb9ff1ac1c3373eb\" id:\"55fae1e0faee0ac88271291cc7b73cae5f973dfe33f872f4cb9ff1ac1c3373eb\" pid:4609 exited_at:{seconds:1747198509 nanos:135741026}"
May 14 04:55:09.136560 containerd[1504]: time="2025-05-14T04:55:09.136493212Z" level=info msg="StartContainer for \"55fae1e0faee0ac88271291cc7b73cae5f973dfe33f872f4cb9ff1ac1c3373eb\" returns successfully"
May 14 04:55:09.152896 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55fae1e0faee0ac88271291cc7b73cae5f973dfe33f872f4cb9ff1ac1c3373eb-rootfs.mount: Deactivated successfully.
May 14 04:55:10.042705 kubelet[2650]: E0514 04:55:10.042667 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 04:55:10.045193 containerd[1504]: time="2025-05-14T04:55:10.045118143Z" level=info msg="CreateContainer within sandbox \"ad01c2d7cf7743ea4d03353569ee1d556a4f0897b03616c58f91c6a0966b39fb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 14 04:55:10.055406 containerd[1504]: time="2025-05-14T04:55:10.055363451Z" level=info msg="Container 62ac0025f5a1a07bcf6b3a422b2a50afc08f9cd01b088ddd989d204587dcb91d: CDI devices from CRI Config.CDIDevices: []"
May 14 04:55:10.063767 containerd[1504]: time="2025-05-14T04:55:10.063734630Z" level=info msg="CreateContainer within sandbox \"ad01c2d7cf7743ea4d03353569ee1d556a4f0897b03616c58f91c6a0966b39fb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"62ac0025f5a1a07bcf6b3a422b2a50afc08f9cd01b088ddd989d204587dcb91d\""
May 14 04:55:10.065181 containerd[1504]: time="2025-05-14T04:55:10.064264662Z" level=info msg="StartContainer for \"62ac0025f5a1a07bcf6b3a422b2a50afc08f9cd01b088ddd989d204587dcb91d\""
May 14 04:55:10.065181 containerd[1504]: time="2025-05-14T04:55:10.065038489Z" level=info msg="connecting to shim 62ac0025f5a1a07bcf6b3a422b2a50afc08f9cd01b088ddd989d204587dcb91d" address="unix:///run/containerd/s/954341a8f2f68223e57ce32e817b256d56a1b6778f700986a74f042556f5c35f" protocol=ttrpc version=3
May 14 04:55:10.088326 systemd[1]: Started cri-containerd-62ac0025f5a1a07bcf6b3a422b2a50afc08f9cd01b088ddd989d204587dcb91d.scope - libcontainer container 62ac0025f5a1a07bcf6b3a422b2a50afc08f9cd01b088ddd989d204587dcb91d.
May 14 04:55:10.109993 systemd[1]: cri-containerd-62ac0025f5a1a07bcf6b3a422b2a50afc08f9cd01b088ddd989d204587dcb91d.scope: Deactivated successfully.
May 14 04:55:10.111323 containerd[1504]: time="2025-05-14T04:55:10.111275394Z" level=info msg="received exit event container_id:\"62ac0025f5a1a07bcf6b3a422b2a50afc08f9cd01b088ddd989d204587dcb91d\" id:\"62ac0025f5a1a07bcf6b3a422b2a50afc08f9cd01b088ddd989d204587dcb91d\" pid:4649 exited_at:{seconds:1747198510 nanos:110539686}"
May 14 04:55:10.111460 containerd[1504]: time="2025-05-14T04:55:10.111426871Z" level=info msg="TaskExit event in podsandbox handler container_id:\"62ac0025f5a1a07bcf6b3a422b2a50afc08f9cd01b088ddd989d204587dcb91d\" id:\"62ac0025f5a1a07bcf6b3a422b2a50afc08f9cd01b088ddd989d204587dcb91d\" pid:4649 exited_at:{seconds:1747198510 nanos:110539686}"
May 14 04:55:10.117549 containerd[1504]: time="2025-05-14T04:55:10.117485770Z" level=info msg="StartContainer for \"62ac0025f5a1a07bcf6b3a422b2a50afc08f9cd01b088ddd989d204587dcb91d\" returns successfully"
May 14 04:55:10.129143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62ac0025f5a1a07bcf6b3a422b2a50afc08f9cd01b088ddd989d204587dcb91d-rootfs.mount: Deactivated successfully.
May 14 04:55:10.856542 kubelet[2650]: E0514 04:55:10.856484 2650 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 14 04:55:11.047613 kubelet[2650]: E0514 04:55:11.047568 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 04:55:11.051409 containerd[1504]: time="2025-05-14T04:55:11.051222608Z" level=info msg="CreateContainer within sandbox \"ad01c2d7cf7743ea4d03353569ee1d556a4f0897b03616c58f91c6a0966b39fb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 14 04:55:11.076398 containerd[1504]: time="2025-05-14T04:55:11.076366100Z" level=info msg="Container 65528f72f995f64051f9fac7e36294c44d29634d054cf31c290d8f9a2d858114: CDI devices from CRI Config.CDIDevices: []"
May 14 04:55:11.082238 containerd[1504]: time="2025-05-14T04:55:11.082207699Z" level=info msg="CreateContainer within sandbox \"ad01c2d7cf7743ea4d03353569ee1d556a4f0897b03616c58f91c6a0966b39fb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"65528f72f995f64051f9fac7e36294c44d29634d054cf31c290d8f9a2d858114\""
May 14 04:55:11.082877 containerd[1504]: time="2025-05-14T04:55:11.082845210Z" level=info msg="StartContainer for \"65528f72f995f64051f9fac7e36294c44d29634d054cf31c290d8f9a2d858114\""
May 14 04:55:11.083894 containerd[1504]: time="2025-05-14T04:55:11.083860916Z" level=info msg="connecting to shim 65528f72f995f64051f9fac7e36294c44d29634d054cf31c290d8f9a2d858114" address="unix:///run/containerd/s/954341a8f2f68223e57ce32e817b256d56a1b6778f700986a74f042556f5c35f" protocol=ttrpc version=3
May 14 04:55:11.107348 systemd[1]: Started cri-containerd-65528f72f995f64051f9fac7e36294c44d29634d054cf31c290d8f9a2d858114.scope - libcontainer container 65528f72f995f64051f9fac7e36294c44d29634d054cf31c290d8f9a2d858114.
May 14 04:55:11.139302 containerd[1504]: time="2025-05-14T04:55:11.139258989Z" level=info msg="StartContainer for \"65528f72f995f64051f9fac7e36294c44d29634d054cf31c290d8f9a2d858114\" returns successfully"
May 14 04:55:11.187239 containerd[1504]: time="2025-05-14T04:55:11.187197966Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65528f72f995f64051f9fac7e36294c44d29634d054cf31c290d8f9a2d858114\" id:\"a6e2ddddd2b09a3b54c0022eea1a0a4df2c922ba59a39cdf6d655af1757a80e1\" pid:4720 exited_at:{seconds:1747198511 nanos:186628334}"
May 14 04:55:11.404178 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 14 04:55:12.054313 kubelet[2650]: E0514 04:55:12.054254 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 04:55:12.070428 kubelet[2650]: I0514 04:55:12.070369 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-82nr7" podStartSLOduration=5.070353366 podStartE2EDuration="5.070353366s" podCreationTimestamp="2025-05-14 04:55:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 04:55:12.069014541 +0000 UTC m=+86.350274394" watchObservedRunningTime="2025-05-14 04:55:12.070353366 +0000 UTC m=+86.351613219"
May 14 04:55:13.548053 kubelet[2650]: E0514 04:55:13.548020 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 04:55:13.923206 containerd[1504]: time="2025-05-14T04:55:13.923101589Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65528f72f995f64051f9fac7e36294c44d29634d054cf31c290d8f9a2d858114\" id:\"88417f97f7c58f367c451a15ed528b029b9626ef49ec03e37f61e7db40578e53\" pid:5132 exit_status:1 exited_at:{seconds:1747198513 nanos:922629873}"
May 14 04:55:14.189832 systemd-networkd[1409]: lxc_health: Link UP
May 14 04:55:14.190699 systemd-networkd[1409]: lxc_health: Gained carrier
May 14 04:55:15.549086 kubelet[2650]: E0514 04:55:15.549040 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 04:55:16.055532 containerd[1504]: time="2025-05-14T04:55:16.055486172Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65528f72f995f64051f9fac7e36294c44d29634d054cf31c290d8f9a2d858114\" id:\"bb1bde2217faaf0cdb0936a5b781177baffb0e9840c600926e99bbcfaae8022a\" pid:5257 exited_at:{seconds:1747198516 nanos:55216052}"
May 14 04:55:16.061656 kubelet[2650]: E0514 04:55:16.061622 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 04:55:16.158316 systemd-networkd[1409]: lxc_health: Gained IPv6LL
May 14 04:55:17.063612 kubelet[2650]: E0514 04:55:17.063578 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 04:55:17.801026 kubelet[2650]: E0514 04:55:17.800979 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 04:55:18.170741 containerd[1504]: time="2025-05-14T04:55:18.170681751Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65528f72f995f64051f9fac7e36294c44d29634d054cf31c290d8f9a2d858114\" id:\"6dc81e78621fcd336140223998167c916c053f9840643f0c4ba28029e554b327\" pid:5287 exited_at:{seconds:1747198518 nanos:170131629}"
May 14 04:55:20.282772 containerd[1504]: time="2025-05-14T04:55:20.282726242Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65528f72f995f64051f9fac7e36294c44d29634d054cf31c290d8f9a2d858114\" id:\"bbeb9553c95e9d70866ec19e4666df9a4b9e15be1e2b536f7cfa668aec0acfb5\" pid:5319 exited_at:{seconds:1747198520 nanos:282427919}"
May 14 04:55:20.285120 kubelet[2650]: E0514 04:55:20.285008 2650 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:52502->127.0.0.1:33015: write tcp 127.0.0.1:52502->127.0.0.1:33015: write: broken pipe
May 14 04:55:20.298211 sshd[4457]: Connection closed by 10.0.0.1 port 50400
May 14 04:55:20.298378 sshd-session[4451]: pam_unix(sshd:session): session closed for user core
May 14 04:55:20.302694 systemd[1]: sshd@28-10.0.0.69:22-10.0.0.1:50400.service: Deactivated successfully.
May 14 04:55:20.304542 systemd[1]: session-29.scope: Deactivated successfully.
May 14 04:55:20.305231 systemd-logind[1481]: Session 29 logged out. Waiting for processes to exit.
May 14 04:55:20.306709 systemd-logind[1481]: Removed session 29.