May 16 16:09:06.780380 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 16 16:09:06.780401 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Fri May 16 14:51:29 -00 2025
May 16 16:09:06.780410 kernel: KASLR enabled
May 16 16:09:06.780416 kernel: efi: EFI v2.7 by EDK II
May 16 16:09:06.780422 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
May 16 16:09:06.780427 kernel: random: crng init done
May 16 16:09:06.780434 kernel: secureboot: Secure boot disabled
May 16 16:09:06.780440 kernel: ACPI: Early table checksum verification disabled
May 16 16:09:06.780446 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
May 16 16:09:06.780453 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 16 16:09:06.780464 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:09:06.780479 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:09:06.780487 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:09:06.780493 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:09:06.780504 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:09:06.780513 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:09:06.780519 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:09:06.780526 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:09:06.780532 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:09:06.780538 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 16 16:09:06.780544 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 16 16:09:06.780550 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 16 16:09:06.780557 kernel: NODE_DATA(0) allocated [mem 0xdc965dc0-0xdc96cfff]
May 16 16:09:06.780563 kernel: Zone ranges:
May 16 16:09:06.780569 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 16 16:09:06.780576 kernel: DMA32 empty
May 16 16:09:06.780582 kernel: Normal empty
May 16 16:09:06.780588 kernel: Device empty
May 16 16:09:06.780595 kernel: Movable zone start for each node
May 16 16:09:06.780601 kernel: Early memory node ranges
May 16 16:09:06.780607 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
May 16 16:09:06.780613 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
May 16 16:09:06.780619 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
May 16 16:09:06.780625 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
May 16 16:09:06.780631 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
May 16 16:09:06.780637 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
May 16 16:09:06.780644 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
May 16 16:09:06.780652 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
May 16 16:09:06.780658 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
May 16 16:09:06.780664 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 16 16:09:06.780673 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 16 16:09:06.780680 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 16 16:09:06.780686 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 16 16:09:06.780694 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 16 16:09:06.780701 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 16 16:09:06.780707 kernel: psci: probing for conduit method from ACPI.
May 16 16:09:06.780713 kernel: psci: PSCIv1.1 detected in firmware.
May 16 16:09:06.780720 kernel: psci: Using standard PSCI v0.2 function IDs
May 16 16:09:06.780726 kernel: psci: Trusted OS migration not required
May 16 16:09:06.780733 kernel: psci: SMC Calling Convention v1.1
May 16 16:09:06.780739 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 16 16:09:06.780746 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 16 16:09:06.780752 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 16 16:09:06.780760 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 16 16:09:06.780767 kernel: Detected PIPT I-cache on CPU0
May 16 16:09:06.780773 kernel: CPU features: detected: GIC system register CPU interface
May 16 16:09:06.780779 kernel: CPU features: detected: Spectre-v4
May 16 16:09:06.780786 kernel: CPU features: detected: Spectre-BHB
May 16 16:09:06.780792 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 16 16:09:06.780799 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 16 16:09:06.780805 kernel: CPU features: detected: ARM erratum 1418040
May 16 16:09:06.780811 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 16 16:09:06.780818 kernel: alternatives: applying boot alternatives
May 16 16:09:06.780825 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=a0bb4243d79ba36a710f39399156a0a3ffb1b3c5e7037b80b74649cdc67b3731
May 16 16:09:06.780833 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 16 16:09:06.780840 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 16 16:09:06.780846 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 16 16:09:06.780853 kernel: Fallback order for Node 0: 0
May 16 16:09:06.780859 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
May 16 16:09:06.780865 kernel: Policy zone: DMA
May 16 16:09:06.780872 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 16 16:09:06.780878 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
May 16 16:09:06.780885 kernel: software IO TLB: area num 4.
May 16 16:09:06.780891 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
May 16 16:09:06.780897 kernel: software IO TLB: mapped [mem 0x00000000d8c00000-0x00000000d9000000] (4MB)
May 16 16:09:06.780904 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 16 16:09:06.780912 kernel: rcu: Preemptible hierarchical RCU implementation.
May 16 16:09:06.780919 kernel: rcu: RCU event tracing is enabled.
May 16 16:09:06.780925 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 16 16:09:06.780932 kernel: Trampoline variant of Tasks RCU enabled.
May 16 16:09:06.780942 kernel: Tracing variant of Tasks RCU enabled.
May 16 16:09:06.780951 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 16 16:09:06.780957 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 16 16:09:06.780964 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 16:09:06.780971 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 16:09:06.780977 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 16 16:09:06.780983 kernel: GICv3: 256 SPIs implemented
May 16 16:09:06.780992 kernel: GICv3: 0 Extended SPIs implemented
May 16 16:09:06.780999 kernel: Root IRQ handler: gic_handle_irq
May 16 16:09:06.781005 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 16 16:09:06.781023 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
May 16 16:09:06.781029 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 16 16:09:06.781036 kernel: ITS [mem 0x08080000-0x0809ffff]
May 16 16:09:06.781042 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400e0000 (indirect, esz 8, psz 64K, shr 1)
May 16 16:09:06.781049 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400f0000 (flat, esz 8, psz 64K, shr 1)
May 16 16:09:06.781055 kernel: GICv3: using LPI property table @0x0000000040100000
May 16 16:09:06.781062 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000
May 16 16:09:06.781068 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 16 16:09:06.781074 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 16:09:06.781082 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 16 16:09:06.781089 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 16 16:09:06.781096 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 16 16:09:06.781102 kernel: arm-pv: using stolen time PV
May 16 16:09:06.781109 kernel: Console: colour dummy device 80x25
May 16 16:09:06.781115 kernel: ACPI: Core revision 20240827
May 16 16:09:06.781122 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 16 16:09:06.781129 kernel: pid_max: default: 32768 minimum: 301
May 16 16:09:06.781135 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 16 16:09:06.781143 kernel: landlock: Up and running.
May 16 16:09:06.781149 kernel: SELinux: Initializing.
May 16 16:09:06.781156 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 16:09:06.781162 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 16:09:06.781169 kernel: rcu: Hierarchical SRCU implementation.
May 16 16:09:06.781176 kernel: rcu: Max phase no-delay instances is 400.
May 16 16:09:06.781182 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 16 16:09:06.781189 kernel: Remapping and enabling EFI services.
May 16 16:09:06.781196 kernel: smp: Bringing up secondary CPUs ...
May 16 16:09:06.781202 kernel: Detected PIPT I-cache on CPU1
May 16 16:09:06.781215 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 16 16:09:06.781222 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000
May 16 16:09:06.781230 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 16:09:06.781237 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 16 16:09:06.781244 kernel: Detected PIPT I-cache on CPU2
May 16 16:09:06.781251 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 16 16:09:06.781258 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000
May 16 16:09:06.781266 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 16:09:06.781273 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 16 16:09:06.781280 kernel: Detected PIPT I-cache on CPU3
May 16 16:09:06.781286 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 16 16:09:06.781293 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000
May 16 16:09:06.781300 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 16:09:06.781307 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 16 16:09:06.781314 kernel: smp: Brought up 1 node, 4 CPUs
May 16 16:09:06.781320 kernel: SMP: Total of 4 processors activated.
May 16 16:09:06.781327 kernel: CPU: All CPU(s) started at EL1
May 16 16:09:06.781335 kernel: CPU features: detected: 32-bit EL0 Support
May 16 16:09:06.781342 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 16 16:09:06.781349 kernel: CPU features: detected: Common not Private translations
May 16 16:09:06.781356 kernel: CPU features: detected: CRC32 instructions
May 16 16:09:06.781362 kernel: CPU features: detected: Enhanced Virtualization Traps
May 16 16:09:06.781369 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 16 16:09:06.781376 kernel: CPU features: detected: LSE atomic instructions
May 16 16:09:06.781383 kernel: CPU features: detected: Privileged Access Never
May 16 16:09:06.781390 kernel: CPU features: detected: RAS Extension Support
May 16 16:09:06.781398 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 16 16:09:06.781405 kernel: alternatives: applying system-wide alternatives
May 16 16:09:06.781411 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
May 16 16:09:06.781419 kernel: Memory: 2440984K/2572288K available (11072K kernel code, 2276K rwdata, 8928K rodata, 39424K init, 1034K bss, 125536K reserved, 0K cma-reserved)
May 16 16:09:06.781426 kernel: devtmpfs: initialized
May 16 16:09:06.781433 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 16 16:09:06.781440 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 16 16:09:06.781447 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 16 16:09:06.781454 kernel: 0 pages in range for non-PLT usage
May 16 16:09:06.781462 kernel: 508544 pages in range for PLT usage
May 16 16:09:06.781511 kernel: pinctrl core: initialized pinctrl subsystem
May 16 16:09:06.781519 kernel: SMBIOS 3.0.0 present.
May 16 16:09:06.781526 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 16 16:09:06.781533 kernel: DMI: Memory slots populated: 1/1
May 16 16:09:06.781540 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 16 16:09:06.781547 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 16 16:09:06.781554 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 16 16:09:06.781561 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 16 16:09:06.781571 kernel: audit: initializing netlink subsys (disabled)
May 16 16:09:06.781578 kernel: audit: type=2000 audit(0.030:1): state=initialized audit_enabled=0 res=1
May 16 16:09:06.781585 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 16 16:09:06.781592 kernel: cpuidle: using governor menu
May 16 16:09:06.781599 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 16 16:09:06.781606 kernel: ASID allocator initialised with 32768 entries
May 16 16:09:06.781612 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 16 16:09:06.781619 kernel: Serial: AMBA PL011 UART driver
May 16 16:09:06.781626 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 16 16:09:06.781634 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 16 16:09:06.781641 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 16 16:09:06.781648 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 16 16:09:06.781655 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 16 16:09:06.781662 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 16 16:09:06.781669 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 16 16:09:06.781675 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 16 16:09:06.781682 kernel: ACPI: Added _OSI(Module Device)
May 16 16:09:06.781689 kernel: ACPI: Added _OSI(Processor Device)
May 16 16:09:06.781697 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 16 16:09:06.781704 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 16 16:09:06.781711 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 16 16:09:06.781718 kernel: ACPI: Interpreter enabled
May 16 16:09:06.781725 kernel: ACPI: Using GIC for interrupt routing
May 16 16:09:06.781731 kernel: ACPI: MCFG table detected, 1 entries
May 16 16:09:06.781738 kernel: ACPI: CPU0 has been hot-added
May 16 16:09:06.781745 kernel: ACPI: CPU1 has been hot-added
May 16 16:09:06.781752 kernel: ACPI: CPU2 has been hot-added
May 16 16:09:06.781759 kernel: ACPI: CPU3 has been hot-added
May 16 16:09:06.781767 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 16 16:09:06.781774 kernel: printk: legacy console [ttyAMA0] enabled
May 16 16:09:06.781781 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 16 16:09:06.781900 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 16 16:09:06.781974 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 16 16:09:06.782035 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 16 16:09:06.782093 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 16 16:09:06.782155 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 16 16:09:06.782164 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 16 16:09:06.782171 kernel: PCI host bridge to bus 0000:00
May 16 16:09:06.782235 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 16 16:09:06.782292 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 16 16:09:06.782345 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 16 16:09:06.782398 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 16 16:09:06.782489 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
May 16 16:09:06.782561 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 16 16:09:06.782623 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
May 16 16:09:06.782682 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
May 16 16:09:06.782742 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
May 16 16:09:06.782801 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
May 16 16:09:06.782861 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
May 16 16:09:06.782923 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
May 16 16:09:06.782989 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 16 16:09:06.783044 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 16 16:09:06.783099 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 16 16:09:06.783108 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 16 16:09:06.783115 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 16 16:09:06.783122 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 16 16:09:06.783131 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 16 16:09:06.783138 kernel: iommu: Default domain type: Translated
May 16 16:09:06.783145 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 16 16:09:06.783152 kernel: efivars: Registered efivars operations
May 16 16:09:06.783159 kernel: vgaarb: loaded
May 16 16:09:06.783166 kernel: clocksource: Switched to clocksource arch_sys_counter
May 16 16:09:06.783172 kernel: VFS: Disk quotas dquot_6.6.0
May 16 16:09:06.783180 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 16 16:09:06.783186 kernel: pnp: PnP ACPI init
May 16 16:09:06.783252 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 16 16:09:06.783261 kernel: pnp: PnP ACPI: found 1 devices
May 16 16:09:06.783269 kernel: NET: Registered PF_INET protocol family
May 16 16:09:06.783276 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 16 16:09:06.783283 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 16 16:09:06.783290 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 16 16:09:06.783297 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 16 16:09:06.783304 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 16 16:09:06.783312 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 16 16:09:06.783319 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 16:09:06.783326 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 16:09:06.783333 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 16 16:09:06.783341 kernel: PCI: CLS 0 bytes, default 64
May 16 16:09:06.783347 kernel: kvm [1]: HYP mode not available
May 16 16:09:06.783354 kernel: Initialise system trusted keyrings
May 16 16:09:06.783361 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 16 16:09:06.783368 kernel: Key type asymmetric registered
May 16 16:09:06.783376 kernel: Asymmetric key parser 'x509' registered
May 16 16:09:06.783383 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 16 16:09:06.783390 kernel: io scheduler mq-deadline registered
May 16 16:09:06.783397 kernel: io scheduler kyber registered
May 16 16:09:06.783404 kernel: io scheduler bfq registered
May 16 16:09:06.783411 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 16 16:09:06.783418 kernel: ACPI: button: Power Button [PWRB]
May 16 16:09:06.783425 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 16 16:09:06.783500 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 16 16:09:06.783512 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 16 16:09:06.783519 kernel: thunder_xcv, ver 1.0
May 16 16:09:06.783526 kernel: thunder_bgx, ver 1.0
May 16 16:09:06.783533 kernel: nicpf, ver 1.0
May 16 16:09:06.783540 kernel: nicvf, ver 1.0
May 16 16:09:06.783608 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 16 16:09:06.783665 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-16T16:09:06 UTC (1747411746)
May 16 16:09:06.783674 kernel: hid: raw HID events driver (C) Jiri Kosina
May 16 16:09:06.783683 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
May 16 16:09:06.783690 kernel: watchdog: NMI not fully supported
May 16 16:09:06.783697 kernel: watchdog: Hard watchdog permanently disabled
May 16 16:09:06.783704 kernel: NET: Registered PF_INET6 protocol family
May 16 16:09:06.783711 kernel: Segment Routing with IPv6
May 16 16:09:06.783718 kernel: In-situ OAM (IOAM) with IPv6
May 16 16:09:06.783725 kernel: NET: Registered PF_PACKET protocol family
May 16 16:09:06.783732 kernel: Key type dns_resolver registered
May 16 16:09:06.783738 kernel: registered taskstats version 1
May 16 16:09:06.783745 kernel: Loading compiled-in X.509 certificates
May 16 16:09:06.783754 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 27b8347ec414bf9dcd45b3eefdd645a09d039333'
May 16 16:09:06.783761 kernel: Demotion targets for Node 0: null
May 16 16:09:06.783768 kernel: Key type .fscrypt registered
May 16 16:09:06.783774 kernel: Key type fscrypt-provisioning registered
May 16 16:09:06.783781 kernel: ima: No TPM chip found, activating TPM-bypass!
May 16 16:09:06.783788 kernel: ima: Allocated hash algorithm: sha1
May 16 16:09:06.783795 kernel: ima: No architecture policies found
May 16 16:09:06.783802 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 16 16:09:06.783810 kernel: clk: Disabling unused clocks
May 16 16:09:06.783817 kernel: PM: genpd: Disabling unused power domains
May 16 16:09:06.783824 kernel: Warning: unable to open an initial console.
May 16 16:09:06.783831 kernel: Freeing unused kernel memory: 39424K
May 16 16:09:06.783838 kernel: Run /init as init process
May 16 16:09:06.783845 kernel: with arguments:
May 16 16:09:06.783852 kernel: /init
May 16 16:09:06.783858 kernel: with environment:
May 16 16:09:06.783865 kernel: HOME=/
May 16 16:09:06.783872 kernel: TERM=linux
May 16 16:09:06.783880 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 16 16:09:06.783888 systemd[1]: Successfully made /usr/ read-only.
May 16 16:09:06.783898 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 16 16:09:06.783905 systemd[1]: Detected virtualization kvm.
May 16 16:09:06.783913 systemd[1]: Detected architecture arm64.
May 16 16:09:06.783920 systemd[1]: Running in initrd.
May 16 16:09:06.783927 systemd[1]: No hostname configured, using default hostname.
May 16 16:09:06.783937 systemd[1]: Hostname set to .
May 16 16:09:06.783951 systemd[1]: Initializing machine ID from VM UUID.
May 16 16:09:06.783959 systemd[1]: Queued start job for default target initrd.target.
May 16 16:09:06.783966 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 16:09:06.783974 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 16:09:06.783982 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 16 16:09:06.783990 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 16 16:09:06.783997 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 16 16:09:06.784007 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 16 16:09:06.784016 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 16 16:09:06.784024 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 16 16:09:06.784031 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 16:09:06.784039 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 16 16:09:06.784046 systemd[1]: Reached target paths.target - Path Units.
May 16 16:09:06.784054 systemd[1]: Reached target slices.target - Slice Units.
May 16 16:09:06.784062 systemd[1]: Reached target swap.target - Swaps.
May 16 16:09:06.784070 systemd[1]: Reached target timers.target - Timer Units.
May 16 16:09:06.784077 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 16 16:09:06.784085 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 16 16:09:06.784093 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 16 16:09:06.784100 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 16 16:09:06.784108 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 16:09:06.784115 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 16:09:06.784124 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 16:09:06.784132 systemd[1]: Reached target sockets.target - Socket Units.
May 16 16:09:06.784139 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 16 16:09:06.784147 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 16 16:09:06.784155 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 16 16:09:06.784162 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 16 16:09:06.784170 systemd[1]: Starting systemd-fsck-usr.service...
May 16 16:09:06.784178 systemd[1]: Starting systemd-journald.service - Journal Service...
May 16 16:09:06.784185 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 16 16:09:06.784194 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 16:09:06.784201 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 16:09:06.784209 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 16 16:09:06.784217 systemd[1]: Finished systemd-fsck-usr.service.
May 16 16:09:06.784226 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 16 16:09:06.784249 systemd-journald[242]: Collecting audit messages is disabled.
May 16 16:09:06.784268 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 16:09:06.784277 systemd-journald[242]: Journal started
May 16 16:09:06.784296 systemd-journald[242]: Runtime Journal (/run/log/journal/6a2509cc3bf24ab0b9824633c71ac4cd) is 6M, max 48.5M, 42.4M free.
May 16 16:09:06.789781 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 16 16:09:06.777154 systemd-modules-load[244]: Inserted module 'overlay'
May 16 16:09:06.791709 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 16 16:09:06.792132 systemd-modules-load[244]: Inserted module 'br_netfilter'
May 16 16:09:06.793074 kernel: Bridge firewalling registered
May 16 16:09:06.795947 systemd[1]: Started systemd-journald.service - Journal Service.
May 16 16:09:06.796315 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 16:09:06.797591 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 16:09:06.801673 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 16:09:06.803119 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 16:09:06.814243 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 16:09:06.815753 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 16:09:06.818609 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 16 16:09:06.823207 systemd-tmpfiles[280]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 16 16:09:06.824386 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 16:09:06.827596 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 16:09:06.829576 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 16:09:06.833923 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 16 16:09:06.838366 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=a0bb4243d79ba36a710f39399156a0a3ffb1b3c5e7037b80b74649cdc67b3731
May 16 16:09:06.873084 systemd-resolved[300]: Positive Trust Anchors:
May 16 16:09:06.873100 systemd-resolved[300]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 16:09:06.873131 systemd-resolved[300]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 16 16:09:06.877828 systemd-resolved[300]: Defaulting to hostname 'linux'.
May 16 16:09:06.878716 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 16:09:06.884305 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 16:09:06.914501 kernel: SCSI subsystem initialized
May 16 16:09:06.918482 kernel: Loading iSCSI transport class v2.0-870.
May 16 16:09:06.925530 kernel: iscsi: registered transport (tcp)
May 16 16:09:06.937592 kernel: iscsi: registered transport (qla4xxx)
May 16 16:09:06.937624 kernel: QLogic iSCSI HBA Driver
May 16 16:09:06.953056 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 16 16:09:06.970316 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 16 16:09:06.971854 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 16 16:09:07.013732 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 16 16:09:07.015928 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 16 16:09:07.077499 kernel: raid6: neonx8 gen() 15783 MB/s
May 16 16:09:07.094488 kernel: raid6: neonx4 gen() 15821 MB/s
May 16 16:09:07.111500 kernel: raid6: neonx2 gen() 13299 MB/s
May 16 16:09:07.128501 kernel: raid6: neonx1 gen() 10466 MB/s
May 16 16:09:07.145501 kernel: raid6: int64x8 gen() 6897 MB/s
May 16 16:09:07.162506 kernel: raid6: int64x4 gen() 7353 MB/s
May 16 16:09:07.179503 kernel: raid6: int64x2 gen() 6108 MB/s
May 16 16:09:07.196500 kernel: raid6: int64x1 gen() 5056 MB/s
May 16 16:09:07.196538 kernel: raid6: using algorithm neonx4 gen() 15821 MB/s
May 16 16:09:07.213497 kernel: raid6: .... xor() 12325 MB/s, rmw enabled
May 16 16:09:07.213516 kernel: raid6: using neon recovery algorithm
May 16 16:09:07.220505 kernel: xor: measuring software checksum speed
May 16 16:09:07.220545 kernel: 8regs : 21630 MB/sec
May 16 16:09:07.220564 kernel: 32regs : 21710 MB/sec
May 16 16:09:07.221486 kernel: arm64_neon : 28109 MB/sec
May 16 16:09:07.221499 kernel: xor: using function: arm64_neon (28109 MB/sec)
May 16 16:09:07.274508 kernel: Btrfs loaded, zoned=no, fsverity=no
May 16 16:09:07.279886 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 16 16:09:07.282289 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 16:09:07.308068 systemd-udevd[498]: Using default interface naming scheme 'v255'.
May 16 16:09:07.312295 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 16:09:07.314577 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 16 16:09:07.337254 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation
May 16 16:09:07.358524 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 16 16:09:07.360296 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 16 16:09:07.405300 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 16:09:07.407863 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 16 16:09:07.454482 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 16 16:09:07.472577 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 16 16:09:07.472679 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 16 16:09:07.472691 kernel: GPT:9289727 != 19775487
May 16 16:09:07.472700 kernel: GPT:Alternate GPT header not at the end of the disk.
May 16 16:09:07.472709 kernel: GPT:9289727 != 19775487
May 16 16:09:07.472717 kernel: GPT: Use GNU Parted to correct GPT errors.
May 16 16:09:07.472725 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 16:09:07.462628 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 16:09:07.462738 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 16:09:07.464162 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 16 16:09:07.465903 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 16:09:07.493728 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 16 16:09:07.495728 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 16:09:07.503398 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 16 16:09:07.510509 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 16 16:09:07.521279 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 16 16:09:07.522510 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 16 16:09:07.531681 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 16 16:09:07.532577 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 16 16:09:07.534646 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 16:09:07.536394 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 16 16:09:07.538971 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 16 16:09:07.540769 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 16 16:09:07.557125 disk-uuid[593]: Primary Header is updated.
May 16 16:09:07.557125 disk-uuid[593]: Secondary Entries is updated.
May 16 16:09:07.557125 disk-uuid[593]: Secondary Header is updated.
May 16 16:09:07.559622 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 16 16:09:07.564508 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 16:09:08.572503 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 16:09:08.573351 disk-uuid[598]: The operation has completed successfully.
May 16 16:09:08.603505 systemd[1]: disk-uuid.service: Deactivated successfully.
May 16 16:09:08.603593 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 16 16:09:08.623063 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 16 16:09:08.644027 sh[613]: Success
May 16 16:09:08.656848 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 16 16:09:08.656884 kernel: device-mapper: uevent: version 1.0.3
May 16 16:09:08.657753 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 16 16:09:08.665509 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
May 16 16:09:08.692278 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 16 16:09:08.694499 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 16 16:09:08.709166 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 16 16:09:08.716817 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 16 16:09:08.716851 kernel: BTRFS: device fsid 87f734d5-e9e0-4da0-9e65-ee17bdaa6a26 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (625)
May 16 16:09:08.717852 kernel: BTRFS info (device dm-0): first mount of filesystem 87f734d5-e9e0-4da0-9e65-ee17bdaa6a26
May 16 16:09:08.717868 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 16 16:09:08.719483 kernel: BTRFS info (device dm-0): using free-space-tree
May 16 16:09:08.722244 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 16 16:09:08.723393 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 16 16:09:08.724511 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 16 16:09:08.725201 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 16 16:09:08.726796 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 16 16:09:08.743147 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (658)
May 16 16:09:08.743188 kernel: BTRFS info (device vda6): first mount of filesystem 2ff0c403-fbe0-45df-941b-f7dd331fa2eb
May 16 16:09:08.743199 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 16 16:09:08.744111 kernel: BTRFS info (device vda6): using free-space-tree
May 16 16:09:08.755484 kernel: BTRFS info (device vda6): last unmount of filesystem 2ff0c403-fbe0-45df-941b-f7dd331fa2eb
May 16 16:09:08.756256 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 16 16:09:08.758683 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 16 16:09:08.821551 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 16 16:09:08.826368 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 16 16:09:08.867845 systemd-networkd[798]: lo: Link UP
May 16 16:09:08.867856 systemd-networkd[798]: lo: Gained carrier
May 16 16:09:08.868544 systemd-networkd[798]: Enumeration completed
May 16 16:09:08.868649 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 16 16:09:08.869243 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 16:09:08.869246 systemd-networkd[798]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 16:09:08.869807 systemd[1]: Reached target network.target - Network.
May 16 16:09:08.870022 systemd-networkd[798]: eth0: Link UP
May 16 16:09:08.870026 systemd-networkd[798]: eth0: Gained carrier
May 16 16:09:08.870034 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 16:09:08.888743 ignition[709]: Ignition 2.21.0
May 16 16:09:08.888755 ignition[709]: Stage: fetch-offline
May 16 16:09:08.888782 ignition[709]: no configs at "/usr/lib/ignition/base.d"
May 16 16:09:08.888790 ignition[709]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 16:09:08.888973 ignition[709]: parsed url from cmdline: ""
May 16 16:09:08.888976 ignition[709]: no config URL provided
May 16 16:09:08.888981 ignition[709]: reading system config file "/usr/lib/ignition/user.ign"
May 16 16:09:08.888987 ignition[709]: no config at "/usr/lib/ignition/user.ign"
May 16 16:09:08.889005 ignition[709]: op(1): [started] loading QEMU firmware config module
May 16 16:09:08.889009 ignition[709]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 16 16:09:08.895492 systemd-networkd[798]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 16 16:09:08.899301 ignition[709]: op(1): [finished] loading QEMU firmware config module
May 16 16:09:08.935481 ignition[709]: parsing config with SHA512: 4c22e6ae52602bd490045e2ad59dbe6675367e4cb6a4a23d1b1b31e89124210ec6a113d7493e6cd07988b83c8b3feffaee125401b1ed22248241c5b465159a07
May 16 16:09:08.941641 unknown[709]: fetched base config from "system"
May 16 16:09:08.941839 unknown[709]: fetched user config from "qemu"
May 16 16:09:08.942392 ignition[709]: fetch-offline: fetch-offline passed
May 16 16:09:08.942454 ignition[709]: Ignition finished successfully
May 16 16:09:08.944354 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 16 16:09:08.946104 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 16 16:09:08.946842 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 16 16:09:08.980041 ignition[810]: Ignition 2.21.0
May 16 16:09:08.980056 ignition[810]: Stage: kargs
May 16 16:09:08.980257 ignition[810]: no configs at "/usr/lib/ignition/base.d"
May 16 16:09:08.980267 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 16:09:08.981600 ignition[810]: kargs: kargs passed
May 16 16:09:08.984532 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 16 16:09:08.981651 ignition[810]: Ignition finished successfully
May 16 16:09:08.986426 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 16 16:09:09.011126 ignition[818]: Ignition 2.21.0
May 16 16:09:09.011141 ignition[818]: Stage: disks
May 16 16:09:09.011257 ignition[818]: no configs at "/usr/lib/ignition/base.d"
May 16 16:09:09.011265 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 16:09:09.012508 ignition[818]: disks: disks passed
May 16 16:09:09.014944 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 16 16:09:09.012560 ignition[818]: Ignition finished successfully
May 16 16:09:09.016634 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 16 16:09:09.018406 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 16 16:09:09.020234 systemd[1]: Reached target local-fs.target - Local File Systems.
May 16 16:09:09.022147 systemd[1]: Reached target sysinit.target - System Initialization.
May 16 16:09:09.024201 systemd[1]: Reached target basic.target - Basic System.
May 16 16:09:09.026731 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 16 16:09:09.050349 systemd-fsck[828]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 16 16:09:09.055140 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 16 16:09:09.058898 systemd[1]: Mounting sysroot.mount - /sysroot...
May 16 16:09:09.133418 systemd[1]: Mounted sysroot.mount - /sysroot.
May 16 16:09:09.134958 kernel: EXT4-fs (vda9): mounted filesystem 0ada590e-bc2d-44be-b1f0-1b069cf0a0c5 r/w with ordered data mode. Quota mode: none.
May 16 16:09:09.134663 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 16 16:09:09.136719 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 16 16:09:09.138241 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 16 16:09:09.139191 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 16 16:09:09.139227 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 16 16:09:09.139262 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 16 16:09:09.146842 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 16 16:09:09.149938 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 16 16:09:09.152298 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (836)
May 16 16:09:09.155002 kernel: BTRFS info (device vda6): first mount of filesystem 2ff0c403-fbe0-45df-941b-f7dd331fa2eb
May 16 16:09:09.155037 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 16 16:09:09.155052 kernel: BTRFS info (device vda6): using free-space-tree
May 16 16:09:09.159288 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 16 16:09:09.188079 initrd-setup-root[861]: cut: /sysroot/etc/passwd: No such file or directory
May 16 16:09:09.191920 initrd-setup-root[868]: cut: /sysroot/etc/group: No such file or directory
May 16 16:09:09.195176 initrd-setup-root[875]: cut: /sysroot/etc/shadow: No such file or directory
May 16 16:09:09.198893 initrd-setup-root[882]: cut: /sysroot/etc/gshadow: No such file or directory
May 16 16:09:09.265708 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 16 16:09:09.267843 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 16 16:09:09.270578 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 16 16:09:09.286838 kernel: BTRFS info (device vda6): last unmount of filesystem 2ff0c403-fbe0-45df-941b-f7dd331fa2eb
May 16 16:09:09.303493 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 16 16:09:09.315192 ignition[951]: INFO : Ignition 2.21.0
May 16 16:09:09.315192 ignition[951]: INFO : Stage: mount
May 16 16:09:09.316730 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 16:09:09.316730 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 16:09:09.318734 ignition[951]: INFO : mount: mount passed
May 16 16:09:09.318734 ignition[951]: INFO : Ignition finished successfully
May 16 16:09:09.318746 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 16 16:09:09.321189 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 16 16:09:09.845386 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 16 16:09:09.846857 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 16 16:09:09.877891 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (962)
May 16 16:09:09.877933 kernel: BTRFS info (device vda6): first mount of filesystem 2ff0c403-fbe0-45df-941b-f7dd331fa2eb
May 16 16:09:09.877945 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 16 16:09:09.878551 kernel: BTRFS info (device vda6): using free-space-tree
May 16 16:09:09.882566 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 16 16:09:09.909328 ignition[979]: INFO : Ignition 2.21.0
May 16 16:09:09.909328 ignition[979]: INFO : Stage: files
May 16 16:09:09.910934 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 16:09:09.910934 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 16:09:09.913030 ignition[979]: DEBUG : files: compiled without relabeling support, skipping
May 16 16:09:09.914263 ignition[979]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 16 16:09:09.914263 ignition[979]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 16 16:09:09.917515 ignition[979]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 16 16:09:09.918796 ignition[979]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 16 16:09:09.918796 ignition[979]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 16 16:09:09.918130 unknown[979]: wrote ssh authorized keys file for user: core
May 16 16:09:09.922593 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 16 16:09:09.924524 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 16 16:09:10.017081 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 16 16:09:10.164916 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 16 16:09:10.164916 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 16 16:09:10.168584 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 16 16:09:10.220583 systemd-networkd[798]: eth0: Gained IPv6LL
May 16 16:09:10.604574 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 16 16:09:10.748825 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 16 16:09:10.750682 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 16 16:09:10.750682 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 16 16:09:10.750682 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 16 16:09:10.750682 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 16 16:09:10.750682 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 16 16:09:10.750682 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 16 16:09:10.750682 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 16 16:09:10.750682 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 16 16:09:10.763866 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 16 16:09:10.763866 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 16 16:09:10.763866 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 16 16:09:10.763866 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 16 16:09:10.763866 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 16 16:09:10.763866 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
May 16 16:09:11.627387 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 16 16:09:12.380577 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 16 16:09:12.380577 ignition[979]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 16 16:09:12.384259 ignition[979]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 16 16:09:12.384259 ignition[979]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 16 16:09:12.384259 ignition[979]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 16 16:09:12.384259 ignition[979]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 16 16:09:12.384259 ignition[979]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 16 16:09:12.384259 ignition[979]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 16 16:09:12.384259 ignition[979]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 16 16:09:12.384259 ignition[979]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 16 16:09:12.401203 ignition[979]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 16 16:09:12.404918 ignition[979]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 16 16:09:12.406447 ignition[979]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 16 16:09:12.406447 ignition[979]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 16 16:09:12.406447 ignition[979]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 16 16:09:12.406447 ignition[979]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 16 16:09:12.406447 ignition[979]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 16 16:09:12.406447 ignition[979]: INFO : files: files passed
May 16 16:09:12.406447 ignition[979]: INFO : Ignition finished successfully
May 16 16:09:12.408288 systemd[1]: Finished ignition-files.service - Ignition (files).
May 16 16:09:12.414680 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 16 16:09:12.417615 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 16 16:09:12.425874 systemd[1]: ignition-quench.service: Deactivated successfully.
May 16 16:09:12.425988 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 16 16:09:12.429056 initrd-setup-root-after-ignition[1008]: grep: /sysroot/oem/oem-release: No such file or directory
May 16 16:09:12.430404 initrd-setup-root-after-ignition[1010]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 16:09:12.430404 initrd-setup-root-after-ignition[1010]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 16 16:09:12.433344 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 16:09:12.432565 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 16 16:09:12.434694 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 16 16:09:12.437547 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 16 16:09:12.475818 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 16 16:09:12.475935 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 16 16:09:12.478809 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 16 16:09:12.480581 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 16 16:09:12.482320 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 16 16:09:12.483153 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 16 16:09:12.513801 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 16 16:09:12.516349 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 16 16:09:12.536401 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 16 16:09:12.537745 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 16:09:12.539574 systemd[1]: Stopped target timers.target - Timer Units.
May 16 16:09:12.541347 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 16 16:09:12.541489 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 16 16:09:12.543642 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 16 16:09:12.545354 systemd[1]: Stopped target basic.target - Basic System.
May 16 16:09:12.546945 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 16 16:09:12.548455 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 16 16:09:12.550374 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 16 16:09:12.552266 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 16 16:09:12.554058 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 16 16:09:12.555659 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 16 16:09:12.557368 systemd[1]: Stopped target sysinit.target - System Initialization.
May 16 16:09:12.559258 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 16 16:09:12.560826 systemd[1]: Stopped target swap.target - Swaps.
May 16 16:09:12.562210 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 16 16:09:12.562347 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 16 16:09:12.564292 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 16 16:09:12.566116 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 16:09:12.567847 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 16 16:09:12.568528 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 16:09:12.570360 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 16 16:09:12.570490 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 16 16:09:12.573133 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 16 16:09:12.573245 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 16 16:09:12.575255 systemd[1]: Stopped target paths.target - Path Units.
May 16 16:09:12.576662 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 16 16:09:12.577538 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 16:09:12.579218 systemd[1]: Stopped target slices.target - Slice Units.
May 16 16:09:12.580680 systemd[1]: Stopped target sockets.target - Socket Units.
May 16 16:09:12.582425 systemd[1]: iscsid.socket: Deactivated successfully.
May 16 16:09:12.582532 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 16 16:09:12.583984 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 16 16:09:12.584065 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 16 16:09:12.585463 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 16 16:09:12.585591 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 16 16:09:12.587221 systemd[1]: ignition-files.service: Deactivated successfully.
May 16 16:09:12.587324 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 16 16:09:12.589568 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 16 16:09:12.590804 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 16 16:09:12.590947 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 16:09:12.600003 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 16 16:09:12.600868 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 16 16:09:12.601011 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 16:09:12.602631 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 16 16:09:12.602732 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 16 16:09:12.609375 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 16 16:09:12.610510 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 16 16:09:12.613595 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 16 16:09:12.615880 ignition[1035]: INFO : Ignition 2.21.0
May 16 16:09:12.615880 ignition[1035]: INFO : Stage: umount
May 16 16:09:12.618610 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 16:09:12.618610 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 16:09:12.618610 ignition[1035]: INFO : umount: umount passed
May 16 16:09:12.618610 ignition[1035]: INFO : Ignition finished successfully
May 16 16:09:12.619167 systemd[1]: ignition-mount.service: Deactivated successfully.
May 16 16:09:12.619281 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 16 16:09:12.620741 systemd[1]: Stopped target network.target - Network.
May 16 16:09:12.622669 systemd[1]: ignition-disks.service: Deactivated successfully.
May 16 16:09:12.622728 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 16 16:09:12.624410 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 16 16:09:12.624454 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 16 16:09:12.626099 systemd[1]: ignition-setup.service: Deactivated successfully.
May 16 16:09:12.626147 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 16 16:09:12.627830 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 16 16:09:12.627872 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 16 16:09:12.629727 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 16 16:09:12.631290 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 16 16:09:12.638353 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 16 16:09:12.638461 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 16 16:09:12.641607 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 16 16:09:12.641800 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 16 16:09:12.641885 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 16 16:09:12.645029 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 16 16:09:12.645545 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 16 16:09:12.647685 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 16 16:09:12.647724 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 16 16:09:12.650228 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 16 16:09:12.651150 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 16 16:09:12.651205 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 16 16:09:12.654387 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 16 16:09:12.654430 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 16 16:09:12.657178 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 16 16:09:12.657219 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 16 16:09:12.659097 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 16 16:09:12.659139 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 16:09:12.662173 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 16:09:12.665276 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 16 16:09:12.665329 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 16 16:09:12.674680 systemd[1]: network-cleanup.service: Deactivated successfully.
May 16 16:09:12.674776 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 16 16:09:12.678117 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 16 16:09:12.678253 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 16:09:12.681257 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 16 16:09:12.681303 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 16 16:09:12.683626 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 16 16:09:12.683661 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 16:09:12.685325 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 16 16:09:12.685375 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 16 16:09:12.688657 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 16 16:09:12.688709 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 16 16:09:12.691536 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 16 16:09:12.691584 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 16:09:12.695022 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 16 16:09:12.696270 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 16 16:09:12.696324 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 16 16:09:12.699083 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 16 16:09:12.699123 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 16:09:12.702252 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 16:09:12.702294 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 16:09:12.706307 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 16 16:09:12.706356 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 16 16:09:12.706389 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 16 16:09:12.706655 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 16 16:09:12.706732 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 16 16:09:12.708065 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 16 16:09:12.708142 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 16 16:09:12.709972 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 16 16:09:12.710046 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 16 16:09:12.712033 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 16 16:09:12.714186 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 16 16:09:12.730207 systemd[1]: Switching root.
May 16 16:09:12.757392 systemd-journald[242]: Journal stopped
May 16 16:09:13.528344 systemd-journald[242]: Received SIGTERM from PID 1 (systemd).
May 16 16:09:13.528393 kernel: SELinux: policy capability network_peer_controls=1
May 16 16:09:13.528405 kernel: SELinux: policy capability open_perms=1
May 16 16:09:13.528414 kernel: SELinux: policy capability extended_socket_class=1
May 16 16:09:13.528431 kernel: SELinux: policy capability always_check_network=0
May 16 16:09:13.528448 kernel: SELinux: policy capability cgroup_seclabel=1
May 16 16:09:13.528459 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 16 16:09:13.528483 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 16 16:09:13.528493 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 16 16:09:13.528502 kernel: SELinux: policy capability userspace_initial_context=0
May 16 16:09:13.528511 systemd[1]: Successfully loaded SELinux policy in 51.160ms.
May 16 16:09:13.528527 kernel: audit: type=1403 audit(1747411752.938:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 16 16:09:13.528537 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.335ms.
May 16 16:09:13.528548 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 16 16:09:13.528559 systemd[1]: Detected virtualization kvm.
May 16 16:09:13.528570 systemd[1]: Detected architecture arm64.
May 16 16:09:13.528581 systemd[1]: Detected first boot.
May 16 16:09:13.528590 systemd[1]: Initializing machine ID from VM UUID.
May 16 16:09:13.528600 zram_generator::config[1082]: No configuration found.
May 16 16:09:13.528611 kernel: NET: Registered PF_VSOCK protocol family
May 16 16:09:13.528620 systemd[1]: Populated /etc with preset unit settings.
May 16 16:09:13.528631 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 16 16:09:13.528641 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 16 16:09:13.528650 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 16 16:09:13.528665 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 16 16:09:13.528675 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 16 16:09:13.528685 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 16 16:09:13.528695 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 16 16:09:13.528705 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 16 16:09:13.528715 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 16 16:09:13.528727 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 16 16:09:13.528737 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 16 16:09:13.528748 systemd[1]: Created slice user.slice - User and Session Slice.
May 16 16:09:13.528757 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 16:09:13.528767 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 16:09:13.528777 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 16 16:09:13.528787 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 16 16:09:13.528797 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 16 16:09:13.528807 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 16 16:09:13.528818 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 16 16:09:13.528828 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 16:09:13.528839 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 16 16:09:13.528849 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 16 16:09:13.528863 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 16 16:09:13.528872 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 16 16:09:13.528884 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 16 16:09:13.528894 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 16:09:13.528904 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 16 16:09:13.528920 systemd[1]: Reached target slices.target - Slice Units.
May 16 16:09:13.528934 systemd[1]: Reached target swap.target - Swaps.
May 16 16:09:13.528944 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 16 16:09:13.528954 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 16 16:09:13.528964 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 16 16:09:13.528974 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 16:09:13.528984 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 16:09:13.528994 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 16:09:13.529004 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 16 16:09:13.529014 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 16 16:09:13.529025 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 16 16:09:13.529035 systemd[1]: Mounting media.mount - External Media Directory...
May 16 16:09:13.529045 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 16 16:09:13.529055 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 16 16:09:13.529064 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 16 16:09:13.529075 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 16 16:09:13.529084 systemd[1]: Reached target machines.target - Containers.
May 16 16:09:13.529094 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 16 16:09:13.529104 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 16:09:13.529116 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 16 16:09:13.529126 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 16 16:09:13.529135 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 16:09:13.529145 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 16:09:13.529156 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 16:09:13.529165 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 16 16:09:13.529175 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 16:09:13.529185 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 16 16:09:13.529196 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 16 16:09:13.529206 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 16 16:09:13.529216 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 16 16:09:13.529225 systemd[1]: Stopped systemd-fsck-usr.service.
May 16 16:09:13.529235 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 16:09:13.529245 systemd[1]: Starting systemd-journald.service - Journal Service...
May 16 16:09:13.529255 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 16 16:09:13.529266 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 16 16:09:13.529275 kernel: ACPI: bus type drm_connector registered
May 16 16:09:13.529286 kernel: fuse: init (API version 7.41)
May 16 16:09:13.529295 kernel: loop: module loaded
May 16 16:09:13.529304 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 16 16:09:13.529314 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 16 16:09:13.529324 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 16 16:09:13.529336 systemd[1]: verity-setup.service: Deactivated successfully.
May 16 16:09:13.529346 systemd[1]: Stopped verity-setup.service.
May 16 16:09:13.529355 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 16 16:09:13.529365 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 16 16:09:13.529375 systemd[1]: Mounted media.mount - External Media Directory.
May 16 16:09:13.529384 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 16 16:09:13.529394 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 16 16:09:13.529404 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 16 16:09:13.529436 systemd-journald[1157]: Collecting audit messages is disabled.
May 16 16:09:13.529456 systemd-journald[1157]: Journal started
May 16 16:09:13.529484 systemd-journald[1157]: Runtime Journal (/run/log/journal/6a2509cc3bf24ab0b9824633c71ac4cd) is 6M, max 48.5M, 42.4M free.
May 16 16:09:13.299433 systemd[1]: Queued start job for default target multi-user.target.
May 16 16:09:13.326320 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 16 16:09:13.326714 systemd[1]: systemd-journald.service: Deactivated successfully.
May 16 16:09:13.532203 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 16 16:09:13.534072 systemd[1]: Started systemd-journald.service - Journal Service.
May 16 16:09:13.534906 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 16:09:13.536367 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 16 16:09:13.536586 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 16 16:09:13.538048 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 16:09:13.538205 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 16:09:13.539592 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 16:09:13.539744 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 16:09:13.541099 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 16:09:13.541267 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 16:09:13.542761 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 16 16:09:13.542928 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 16 16:09:13.544218 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 16:09:13.544364 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 16:09:13.547000 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 16:09:13.548362 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 16 16:09:13.549902 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 16 16:09:13.551645 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 16 16:09:13.563668 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 16 16:09:13.566057 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 16 16:09:13.568091 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 16 16:09:13.569254 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 16 16:09:13.569292 systemd[1]: Reached target local-fs.target - Local File Systems.
May 16 16:09:13.571177 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 16 16:09:13.578203 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 16 16:09:13.579534 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 16:09:13.580751 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 16 16:09:13.583621 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 16 16:09:13.584873 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 16:09:13.589639 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 16 16:09:13.590764 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 16:09:13.592586 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 16:09:13.592946 systemd-journald[1157]: Time spent on flushing to /var/log/journal/6a2509cc3bf24ab0b9824633c71ac4cd is 13.112ms for 887 entries.
May 16 16:09:13.592946 systemd-journald[1157]: System Journal (/var/log/journal/6a2509cc3bf24ab0b9824633c71ac4cd) is 8M, max 195.6M, 187.6M free.
May 16 16:09:13.618243 systemd-journald[1157]: Received client request to flush runtime journal.
May 16 16:09:13.596404 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 16 16:09:13.609814 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 16 16:09:13.612373 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 16:09:13.614552 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 16 16:09:13.615545 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 16 16:09:13.618823 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 16 16:09:13.623481 kernel: loop0: detected capacity change from 0 to 138376
May 16 16:09:13.627759 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 16 16:09:13.629866 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 16 16:09:13.632706 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 16 16:09:13.634262 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 16:09:13.636683 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 16 16:09:13.659511 kernel: loop1: detected capacity change from 0 to 107312
May 16 16:09:13.664850 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 16 16:09:13.672014 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 16 16:09:13.678631 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 16:09:13.696492 kernel: loop2: detected capacity change from 0 to 203944
May 16 16:09:13.711182 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
May 16 16:09:13.711550 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
May 16 16:09:13.715946 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 16:09:13.719558 kernel: loop3: detected capacity change from 0 to 138376
May 16 16:09:13.726606 kernel: loop4: detected capacity change from 0 to 107312
May 16 16:09:13.731607 kernel: loop5: detected capacity change from 0 to 203944
May 16 16:09:13.735684 (sd-merge)[1224]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 16 16:09:13.736068 (sd-merge)[1224]: Merged extensions into '/usr'.
May 16 16:09:13.739287 systemd[1]: Reload requested from client PID 1198 ('systemd-sysext') (unit systemd-sysext.service)...
May 16 16:09:13.739301 systemd[1]: Reloading...
May 16 16:09:13.796915 zram_generator::config[1252]: No configuration found.
May 16 16:09:13.861535 ldconfig[1193]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 16 16:09:13.871917 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 16:09:13.933179 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 16 16:09:13.933515 systemd[1]: Reloading finished in 193 ms.
May 16 16:09:13.971117 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 16 16:09:13.972722 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 16 16:09:13.987669 systemd[1]: Starting ensure-sysext.service...
May 16 16:09:13.989502 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 16:09:13.999742 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)...
May 16 16:09:13.999758 systemd[1]: Reloading...
May 16 16:09:14.006791 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 16 16:09:14.007234 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 16 16:09:14.007580 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 16 16:09:14.007860 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 16 16:09:14.008584 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 16 16:09:14.008878 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
May 16 16:09:14.008995 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
May 16 16:09:14.011519 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot.
May 16 16:09:14.011617 systemd-tmpfiles[1287]: Skipping /boot
May 16 16:09:14.020399 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot.
May 16 16:09:14.020517 systemd-tmpfiles[1287]: Skipping /boot
May 16 16:09:14.048491 zram_generator::config[1314]: No configuration found.
May 16 16:09:14.114258 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 16:09:14.175855 systemd[1]: Reloading finished in 175 ms.
May 16 16:09:14.193886 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 16 16:09:14.199242 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 16:09:14.206364 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 16:09:14.208584 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 16 16:09:14.210698 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 16 16:09:14.213599 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 16 16:09:14.217950 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 16:09:14.223934 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 16 16:09:14.238865 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 16 16:09:14.240633 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 16 16:09:14.247785 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 16:09:14.252722 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 16:09:14.255738 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 16:09:14.257994 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 16:09:14.259049 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 16:09:14.259205 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 16:09:14.262698 systemd-udevd[1356]: Using default interface naming scheme 'v255'.
May 16 16:09:14.269374 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 16 16:09:14.273339 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 16 16:09:14.275019 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 16:09:14.275175 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 16:09:14.277105 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 16:09:14.277254 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 16:09:14.279894 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 16:09:14.281171 augenrules[1382]: No rules
May 16 16:09:14.280050 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 16:09:14.283336 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 16:09:14.284007 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 16:09:14.286180 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 16 16:09:14.289431 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 16 16:09:14.301958 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 16 16:09:14.304061 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 16:09:14.326681 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 16:09:14.327972 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 16:09:14.329252 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 16:09:14.341125 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 16:09:14.344506 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 16:09:14.347939 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 16:09:14.349313 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 16:09:14.349415 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 16:09:14.352214 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 16 16:09:14.354547 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 16:09:14.363169 augenrules[1427]: /sbin/augenrules: No change
May 16 16:09:14.360879 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 16:09:14.361044 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 16:09:14.362797 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 16:09:14.363528 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 16:09:14.365547 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 16:09:14.366372 augenrules[1452]: No rules
May 16 16:09:14.366529 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 16:09:14.368999 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 16:09:14.369813 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 16:09:14.379531 systemd[1]: Finished ensure-sysext.service.
May 16 16:09:14.384233 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 16:09:14.385532 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 16:09:14.394364 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 16 16:09:14.405773 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 16:09:14.405832 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 16:09:14.408607 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 16 16:09:14.433695 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 16 16:09:14.436525 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 16 16:09:14.472820 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 16:09:14.483696 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 16 16:09:14.499758 systemd-resolved[1354]: Positive Trust Anchors:
May 16 16:09:14.499772 systemd-resolved[1354]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 16:09:14.499805 systemd-resolved[1354]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 16 16:09:14.506894 systemd-resolved[1354]: Defaulting to hostname 'linux'.
May 16 16:09:14.509930 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 16:09:14.511352 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 16:09:14.544981 systemd-networkd[1440]: lo: Link UP
May 16 16:09:14.544990 systemd-networkd[1440]: lo: Gained carrier
May 16 16:09:14.546083 systemd-networkd[1440]: Enumeration completed
May 16 16:09:14.546241 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 16 16:09:14.546760 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 16:09:14.546844 systemd-networkd[1440]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 16:09:14.547332 systemd-networkd[1440]: eth0: Link UP
May 16 16:09:14.547438 systemd-networkd[1440]: eth0: Gained carrier
May 16 16:09:14.547452 systemd[1]: Reached target network.target - Network.
May 16 16:09:14.547452 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 16:09:14.549773 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 16 16:09:14.551853 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 16 16:09:14.561701 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 16:09:14.563128 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 16 16:09:14.564299 systemd-networkd[1440]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 16 16:09:14.564576 systemd[1]: Reached target sysinit.target - System Initialization.
May 16 16:09:14.564780 systemd-timesyncd[1468]: Network configuration changed, trying to establish connection.
May 16 16:09:14.565819 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 16 16:09:14.567212 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 16 16:09:14.567575 systemd-timesyncd[1468]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 16 16:09:14.567614 systemd-timesyncd[1468]: Initial clock synchronization to Fri 2025-05-16 16:09:14.712525 UTC.
May 16 16:09:14.568430 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 16 16:09:14.569661 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 16 16:09:14.569702 systemd[1]: Reached target paths.target - Path Units.
May 16 16:09:14.570578 systemd[1]: Reached target time-set.target - System Time Set.
May 16 16:09:14.571656 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 16 16:09:14.572735 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 16 16:09:14.573899 systemd[1]: Reached target timers.target - Timer Units.
May 16 16:09:14.575699 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 16 16:09:14.577842 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 16 16:09:14.580805 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 16 16:09:14.583881 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 16 16:09:14.585110 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 16 16:09:14.594200 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 16 16:09:14.595567 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 16 16:09:14.597400 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 16 16:09:14.598769 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 16 16:09:14.600300 systemd[1]: Reached target sockets.target - Socket Units.
May 16 16:09:14.601289 systemd[1]: Reached target basic.target - Basic System.
May 16 16:09:14.602253 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 16 16:09:14.602290 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 16 16:09:14.603263 systemd[1]: Starting containerd.service - containerd container runtime...
May 16 16:09:14.605170 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 16 16:09:14.607032 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 16 16:09:14.609012 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 16 16:09:14.611078 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 16 16:09:14.612073 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 16 16:09:14.613675 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 16 16:09:14.617567 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 16 16:09:14.619621 jq[1502]: false
May 16 16:09:14.619794 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 16 16:09:14.622549 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 16 16:09:14.625584 systemd[1]: Starting systemd-logind.service - User Login Management...
May 16 16:09:14.627524 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 16 16:09:14.627873 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 16 16:09:14.629367 systemd[1]: Starting update-engine.service - Update Engine...
May 16 16:09:14.631413 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 16 16:09:14.636060 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 16 16:09:14.637602 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 16 16:09:14.637766 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 16 16:09:14.639597 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 16 16:09:14.639766 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 16 16:09:14.642809 systemd[1]: motdgen.service: Deactivated successfully.
May 16 16:09:14.642986 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 16 16:09:14.647502 jq[1513]: true
May 16 16:09:14.653893 extend-filesystems[1503]: Found loop3
May 16 16:09:14.653893 extend-filesystems[1503]: Found loop4
May 16 16:09:14.653893 extend-filesystems[1503]: Found loop5
May 16 16:09:14.653893 extend-filesystems[1503]: Found vda
May 16 16:09:14.653893 extend-filesystems[1503]: Found vda1
May 16 16:09:14.653893 extend-filesystems[1503]: Found vda2
May 16 16:09:14.653893 extend-filesystems[1503]: Found vda3
May 16 16:09:14.653893 extend-filesystems[1503]: Found usr
May 16 16:09:14.653893 extend-filesystems[1503]: Found vda4
May 16 16:09:14.653893 extend-filesystems[1503]: Found vda6
May 16 16:09:14.653893 extend-filesystems[1503]: Found vda7
May 16 16:09:14.653893 extend-filesystems[1503]: Found vda9
May 16 16:09:14.653893 extend-filesystems[1503]: Checking size of /dev/vda9
May 16 16:09:14.668411 (ntainerd)[1531]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 16 16:09:14.671232 jq[1522]: true
May 16 16:09:14.689676 extend-filesystems[1503]: Resized partition /dev/vda9
May 16 16:09:14.694275 extend-filesystems[1544]: resize2fs 1.47.2 (1-Jan-2025)
May 16 16:09:14.696597 update_engine[1512]: I20250516 16:09:14.694257 1512 main.cc:92] Flatcar Update Engine starting
May 16 16:09:14.696772 tar[1520]: linux-arm64/helm
May 16 16:09:14.702670 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 16 16:09:14.714122 dbus-daemon[1500]: [system] SELinux support is enabled
May 16 16:09:14.714675 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 16 16:09:14.722950 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 16 16:09:14.722980 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 16 16:09:14.723498 update_engine[1512]: I20250516 16:09:14.723338 1512 update_check_scheduler.cc:74] Next update check in 4m41s
May 16 16:09:14.725124 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 16 16:09:14.725144 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 16 16:09:14.731247 systemd[1]: Started update-engine.service - Update Engine.
May 16 16:09:14.740852 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 16 16:09:14.735636 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 16 16:09:14.741828 extend-filesystems[1544]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 16 16:09:14.741828 extend-filesystems[1544]: old_desc_blocks = 1, new_desc_blocks = 1
May 16 16:09:14.741828 extend-filesystems[1544]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 16 16:09:14.746328 extend-filesystems[1503]: Resized filesystem in /dev/vda9
May 16 16:09:14.747153 bash[1554]: Updated "/home/core/.ssh/authorized_keys"
May 16 16:09:14.742672 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 16 16:09:14.742856 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 16 16:09:14.749037 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 16 16:09:14.752065 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 16 16:09:14.765732 systemd-logind[1510]: Watching system buttons on /dev/input/event0 (Power Button)
May 16 16:09:14.766298 systemd-logind[1510]: New seat seat0.
May 16 16:09:14.766961 systemd[1]: Started systemd-logind.service - User Login Management.
May 16 16:09:14.841395 locksmithd[1556]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 16 16:09:14.932570 containerd[1531]: time="2025-05-16T16:09:14Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 16 16:09:14.933587 containerd[1531]: time="2025-05-16T16:09:14.933549200Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 16 16:09:14.944486 containerd[1531]: time="2025-05-16T16:09:14.944432560Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.48µs"
May 16 16:09:14.944486 containerd[1531]: time="2025-05-16T16:09:14.944484560Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 16 16:09:14.944599 containerd[1531]: time="2025-05-16T16:09:14.944504840Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 16 16:09:14.944750 containerd[1531]: time="2025-05-16T16:09:14.944712120Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 16 16:09:14.944802 containerd[1531]: time="2025-05-16T16:09:14.944785320Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 16 16:09:14.944835 containerd[1531]: time="2025-05-16T16:09:14.944822760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 16 16:09:14.944911 containerd[1531]: time="2025-05-16T16:09:14.944883880Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 16 16:09:14.944971 containerd[1531]: time="2025-05-16T16:09:14.944954400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 16 16:09:14.945280 containerd[1531]: time="2025-05-16T16:09:14.945249080Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 16 16:09:14.945343 containerd[1531]: time="2025-05-16T16:09:14.945325280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 16 16:09:14.945364 containerd[1531]: time="2025-05-16T16:09:14.945349480Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 16 16:09:14.945364 containerd[1531]: time="2025-05-16T16:09:14.945360240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 16 16:09:14.945461 containerd[1531]: time="2025-05-16T16:09:14.945447040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 16 16:09:14.945795 containerd[1531]: time="2025-05-16T16:09:14.945762560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 16 16:09:14.945820 containerd[1531]: time="2025-05-16T16:09:14.945804960Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 16 16:09:14.945820 containerd[1531]: time="2025-05-16T16:09:14.945816440Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 16 16:09:14.946747 containerd[1531]: time="2025-05-16T16:09:14.946709400Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 16 16:09:14.947306 containerd[1531]: time="2025-05-16T16:09:14.947285320Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 16 16:09:14.947391 containerd[1531]: time="2025-05-16T16:09:14.947374440Z" level=info msg="metadata content store policy set" policy=shared
May 16 16:09:14.996034 containerd[1531]: time="2025-05-16T16:09:14.995974600Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 16 16:09:14.996134 containerd[1531]: time="2025-05-16T16:09:14.996109560Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 16 16:09:14.996255 containerd[1531]: time="2025-05-16T16:09:14.996225480Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 16 16:09:14.996282 containerd[1531]: time="2025-05-16T16:09:14.996253320Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 16 16:09:14.996282 containerd[1531]: time="2025-05-16T16:09:14.996272720Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 16 16:09:14.996315 containerd[1531]: time="2025-05-16T16:09:14.996284480Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 16 16:09:14.996315 containerd[1531]: time="2025-05-16T16:09:14.996301680Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 16 16:09:14.996357 containerd[1531]: time="2025-05-16T16:09:14.996315240Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 16 16:09:14.996357 containerd[1531]: time="2025-05-16T16:09:14.996327400Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 16 16:09:14.996357 containerd[1531]: time="2025-05-16T16:09:14.996337720Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 16 16:09:14.996357 containerd[1531]: time="2025-05-16T16:09:14.996347800Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 16 16:09:14.996418 containerd[1531]: time="2025-05-16T16:09:14.996365560Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 16 16:09:14.996686 containerd[1531]: time="2025-05-16T16:09:14.996658520Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 16 16:09:14.996719 containerd[1531]: time="2025-05-16T16:09:14.996692880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 16 16:09:14.996737 containerd[1531]: time="2025-05-16T16:09:14.996719960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 16 16:09:14.996737 containerd[1531]: time="2025-05-16T16:09:14.996732440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 16 16:09:14.996768 containerd[1531]: time="2025-05-16T16:09:14.996742480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 16 16:09:14.996768 containerd[1531]: time="2025-05-16T16:09:14.996752960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 16 16:09:14.996768 containerd[1531]: time="2025-05-16T16:09:14.996764320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 16 16:09:14.996820 containerd[1531]: time="2025-05-16T16:09:14.996774600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 16 16:09:14.996820 containerd[1531]: time="2025-05-16T16:09:14.996800480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 16 16:09:14.996820 containerd[1531]: time="2025-05-16T16:09:14.996812920Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 16 16:09:14.997021 containerd[1531]: time="2025-05-16T16:09:14.996991240Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 16 16:09:14.997272 containerd[1531]: time="2025-05-16T16:09:14.997246520Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 16 16:09:14.999218 containerd[1531]: time="2025-05-16T16:09:14.997636080Z" level=info msg="Start snapshots syncer"
May 16 16:09:14.999270 containerd[1531]: time="2025-05-16T16:09:14.999244520Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 16 16:09:15.000138 containerd[1531]: time="2025-05-16T16:09:15.000083360Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 16 16:09:15.000250 containerd[1531]: time="2025-05-16T16:09:15.000153880Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 16 16:09:15.000272 containerd[1531]: time="2025-05-16T16:09:15.000245160Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 16 16:09:15.000476 containerd[1531]: time="2025-05-16T16:09:15.000426040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 16 16:09:15.000510 containerd[1531]: time="2025-05-16T16:09:15.000488546Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 16 16:09:15.000510 containerd[1531]: time="2025-05-16T16:09:15.000503614Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 16 16:09:15.000543 containerd[1531]: time="2025-05-16T16:09:15.000515180Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 16 16:09:15.000543 containerd[1531]: time="2025-05-16T16:09:15.000527194Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 16 16:09:15.000543 containerd[1531]: time="2025-05-16T16:09:15.000539127Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 16 16:09:15.000600 containerd[1531]: time="2025-05-16T16:09:15.000549471Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 16 16:09:15.000650 containerd[1531]: time="2025-05-16T16:09:15.000583069Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 16 16:09:15.000673 containerd[1531]: time="2025-05-16T16:09:15.000660202Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 16 16:09:15.000691 containerd[1531]: time="2025-05-16T16:09:15.000675800Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 16 16:09:15.000740 containerd[1531]: time="2025-05-16T16:09:15.000727113Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 16 16:09:15.000761 containerd[1531]: time="2025-05-16T16:09:15.000746010Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 16 16:09:15.000761 containerd[1531]: time="2025-05-16T16:09:15.000754929Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 16 16:09:15.000799 containerd[1531]: time="2025-05-16T16:09:15.000763970Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 16 16:09:15.000877 containerd[1531]: time="2025-05-16T16:09:15.000772074Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 16 16:09:15.000897 containerd[1531]: time="2025-05-16T16:09:15.000881665Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 16 16:09:15.000897 containerd[1531]: time="2025-05-16T16:09:15.000894412Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 16 16:09:15.001026 containerd[1531]: time="2025-05-16T16:09:15.001011456Z" level=info msg="runtime interface created"
May 16 16:09:15.001026 containerd[1531]: time="2025-05-16T16:09:15.001025058Z" level=info msg="created NRI interface"
May 16 16:09:15.001069 containerd[1531]: time="2025-05-16T16:09:15.001035321Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 16 16:09:15.001069 containerd[1531]: time="2025-05-16T16:09:15.001049534Z" level=info msg="Connect containerd service"
May 16 16:09:15.001149 containerd[1531]: time="2025-05-16T16:09:15.001133387Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 16 16:09:15.002192 containerd[1531]: time="2025-05-16T16:09:15.002161736Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 16 16:09:15.039509 sshd_keygen[1521]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 16 16:09:15.062843 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 16 16:09:15.067989 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 16 16:09:15.075157 tar[1520]: linux-arm64/LICENSE
May 16 16:09:15.075157 tar[1520]: linux-arm64/README.md
May 16 16:09:15.088149 systemd[1]: issuegen.service: Deactivated successfully.
May 16 16:09:15.089400 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 16 16:09:15.091201 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 16 16:09:15.094630 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 16 16:09:15.127654 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 16 16:09:15.131092 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 16 16:09:15.132455 containerd[1531]: time="2025-05-16T16:09:15.131292344Z" level=info msg="Start subscribing containerd event"
May 16 16:09:15.132455 containerd[1531]: time="2025-05-16T16:09:15.131376604Z" level=info msg="Start recovering state"
May 16 16:09:15.132455 containerd[1531]: time="2025-05-16T16:09:15.131463267Z" level=info msg="Start event monitor"
May 16 16:09:15.132455 containerd[1531]: time="2025-05-16T16:09:15.131493607Z" level=info msg="Start cni network conf syncer for default"
May 16 16:09:15.132455 containerd[1531]: time="2025-05-16T16:09:15.131503056Z" level=info msg="Start streaming server"
May 16 16:09:15.132455 containerd[1531]: time="2025-05-16T16:09:15.131513440Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 16 16:09:15.132455 containerd[1531]: time="2025-05-16T16:09:15.131521871Z" level=info msg="runtime interface starting up..."
May 16 16:09:15.132455 containerd[1531]: time="2025-05-16T16:09:15.131527694Z" level=info msg="starting plugins..."
May 16 16:09:15.132455 containerd[1531]: time="2025-05-16T16:09:15.131541541Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 16 16:09:15.132455 containerd[1531]: time="2025-05-16T16:09:15.131696296Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 16 16:09:15.132455 containerd[1531]: time="2025-05-16T16:09:15.131754126Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 16 16:09:15.132455 containerd[1531]: time="2025-05-16T16:09:15.131810693Z" level=info msg="containerd successfully booted in 0.199668s"
May 16 16:09:15.133203 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 16 16:09:15.134549 systemd[1]: Reached target getty.target - Login Prompts.
May 16 16:09:15.137024 systemd[1]: Started containerd.service - containerd container runtime.
May 16 16:09:15.917667 systemd-networkd[1440]: eth0: Gained IPv6LL
May 16 16:09:15.921543 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 16 16:09:15.923234 systemd[1]: Reached target network-online.target - Network is Online.
May 16 16:09:15.925710 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 16 16:09:15.927953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 16:09:15.946229 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 16 16:09:15.960372 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 16 16:09:15.960663 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 16 16:09:15.962177 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 16 16:09:15.964431 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 16 16:09:16.495953 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 16:09:16.497527 systemd[1]: Reached target multi-user.target - Multi-User System.
May 16 16:09:16.499968 (kubelet)[1627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 16:09:16.503618 systemd[1]: Startup finished in 2.079s (kernel) + 6.311s (initrd) + 3.618s (userspace) = 12.010s.
May 16 16:09:16.914908 kubelet[1627]: E0516 16:09:16.914791 1627 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 16:09:16.917280 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 16:09:16.917415 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 16:09:16.919560 systemd[1]: kubelet.service: Consumed 814ms CPU time, 258.3M memory peak.
May 16 16:09:19.889989 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 16 16:09:19.891338 systemd[1]: Started sshd@0-10.0.0.48:22-10.0.0.1:60964.service - OpenSSH per-connection server daemon (10.0.0.1:60964).
May 16 16:09:19.956402 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 60964 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI
May 16 16:09:19.959800 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:09:19.965732 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 16 16:09:19.966599 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 16 16:09:19.973047 systemd-logind[1510]: New session 1 of user core.
May 16 16:09:19.983950 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 16 16:09:19.986737 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 16 16:09:20.022661 (systemd)[1644]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 16 16:09:20.024846 systemd-logind[1510]: New session c1 of user core.
May 16 16:09:20.130657 systemd[1644]: Queued start job for default target default.target.
May 16 16:09:20.153448 systemd[1644]: Created slice app.slice - User Application Slice.
May 16 16:09:20.153508 systemd[1644]: Reached target paths.target - Paths.
May 16 16:09:20.153553 systemd[1644]: Reached target timers.target - Timers.
May 16 16:09:20.154894 systemd[1644]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 16 16:09:20.163405 systemd[1644]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 16 16:09:20.163467 systemd[1644]: Reached target sockets.target - Sockets.
May 16 16:09:20.163529 systemd[1644]: Reached target basic.target - Basic System.
May 16 16:09:20.163559 systemd[1644]: Reached target default.target - Main User Target.
May 16 16:09:20.163585 systemd[1644]: Startup finished in 133ms.
May 16 16:09:20.163673 systemd[1]: Started user@500.service - User Manager for UID 500.
May 16 16:09:20.164962 systemd[1]: Started session-1.scope - Session 1 of User core.
May 16 16:09:20.237311 systemd[1]: Started sshd@1-10.0.0.48:22-10.0.0.1:60974.service - OpenSSH per-connection server daemon (10.0.0.1:60974).
May 16 16:09:20.279341 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 60974 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI
May 16 16:09:20.280494 sshd-session[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:09:20.285361 systemd-logind[1510]: New session 2 of user core.
May 16 16:09:20.299629 systemd[1]: Started session-2.scope - Session 2 of User core.
May 16 16:09:20.350268 sshd[1657]: Connection closed by 10.0.0.1 port 60974
May 16 16:09:20.350573 sshd-session[1655]: pam_unix(sshd:session): session closed for user core
May 16 16:09:20.362376 systemd[1]: sshd@1-10.0.0.48:22-10.0.0.1:60974.service: Deactivated successfully.
May 16 16:09:20.363710 systemd[1]: session-2.scope: Deactivated successfully.
May 16 16:09:20.364304 systemd-logind[1510]: Session 2 logged out. Waiting for processes to exit.
May 16 16:09:20.366583 systemd[1]: Started sshd@2-10.0.0.48:22-10.0.0.1:60986.service - OpenSSH per-connection server daemon (10.0.0.1:60986).
May 16 16:09:20.367080 systemd-logind[1510]: Removed session 2.
May 16 16:09:20.413859 sshd[1663]: Accepted publickey for core from 10.0.0.1 port 60986 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI
May 16 16:09:20.414941 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:09:20.419098 systemd-logind[1510]: New session 3 of user core.
May 16 16:09:20.429638 systemd[1]: Started session-3.scope - Session 3 of User core.
May 16 16:09:20.478498 sshd[1665]: Connection closed by 10.0.0.1 port 60986
May 16 16:09:20.478929 sshd-session[1663]: pam_unix(sshd:session): session closed for user core
May 16 16:09:20.487464 systemd[1]: sshd@2-10.0.0.48:22-10.0.0.1:60986.service: Deactivated successfully.
May 16 16:09:20.488834 systemd[1]: session-3.scope: Deactivated successfully.
May 16 16:09:20.489436 systemd-logind[1510]: Session 3 logged out. Waiting for processes to exit.
May 16 16:09:20.491716 systemd[1]: Started sshd@3-10.0.0.48:22-10.0.0.1:60990.service - OpenSSH per-connection server daemon (10.0.0.1:60990).
May 16 16:09:20.492176 systemd-logind[1510]: Removed session 3.
May 16 16:09:20.533745 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 60990 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI
May 16 16:09:20.534868 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:09:20.539088 systemd-logind[1510]: New session 4 of user core.
May 16 16:09:20.558654 systemd[1]: Started session-4.scope - Session 4 of User core.
May 16 16:09:20.609891 sshd[1673]: Connection closed by 10.0.0.1 port 60990
May 16 16:09:20.610241 sshd-session[1671]: pam_unix(sshd:session): session closed for user core
May 16 16:09:20.620461 systemd[1]: sshd@3-10.0.0.48:22-10.0.0.1:60990.service: Deactivated successfully.
May 16 16:09:20.623682 systemd[1]: session-4.scope: Deactivated successfully.
May 16 16:09:20.624486 systemd-logind[1510]: Session 4 logged out. Waiting for processes to exit.
May 16 16:09:20.627071 systemd[1]: Started sshd@4-10.0.0.48:22-10.0.0.1:32770.service - OpenSSH per-connection server daemon (10.0.0.1:32770).
May 16 16:09:20.627554 systemd-logind[1510]: Removed session 4.
May 16 16:09:20.684580 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 32770 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI
May 16 16:09:20.686005 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:09:20.690534 systemd-logind[1510]: New session 5 of user core.
May 16 16:09:20.700634 systemd[1]: Started session-5.scope - Session 5 of User core.
May 16 16:09:20.774210 sudo[1682]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 16 16:09:20.774878 sudo[1682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 16:09:20.792195 sudo[1682]: pam_unix(sudo:session): session closed for user root
May 16 16:09:20.795363 sshd[1681]: Connection closed by 10.0.0.1 port 32770
May 16 16:09:20.795740 sshd-session[1679]: pam_unix(sshd:session): session closed for user core
May 16 16:09:20.806587 systemd[1]: sshd@4-10.0.0.48:22-10.0.0.1:32770.service: Deactivated successfully.
May 16 16:09:20.808712 systemd[1]: session-5.scope: Deactivated successfully.
May 16 16:09:20.809437 systemd-logind[1510]: Session 5 logged out. Waiting for processes to exit.
May 16 16:09:20.811893 systemd[1]: Started sshd@5-10.0.0.48:22-10.0.0.1:32776.service - OpenSSH per-connection server daemon (10.0.0.1:32776).
May 16 16:09:20.812551 systemd-logind[1510]: Removed session 5.
May 16 16:09:20.868656 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 32776 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI
May 16 16:09:20.869909 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:09:20.874512 systemd-logind[1510]: New session 6 of user core.
May 16 16:09:20.890652 systemd[1]: Started session-6.scope - Session 6 of User core.
May 16 16:09:20.942579 sudo[1692]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 16 16:09:20.943284 sudo[1692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 16:09:20.948114 sudo[1692]: pam_unix(sudo:session): session closed for user root
May 16 16:09:20.952883 sudo[1691]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 16 16:09:20.953181 sudo[1691]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 16:09:20.961695 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 16:09:20.994402 augenrules[1714]: No rules
May 16 16:09:20.995014 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 16:09:20.995188 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 16:09:20.997869 sudo[1691]: pam_unix(sudo:session): session closed for user root
May 16 16:09:20.998952 sshd[1690]: Connection closed by 10.0.0.1 port 32776
May 16 16:09:20.999383 sshd-session[1688]: pam_unix(sshd:session): session closed for user core
May 16 16:09:21.008373 systemd[1]: sshd@5-10.0.0.48:22-10.0.0.1:32776.service: Deactivated successfully.
May 16 16:09:21.009776 systemd[1]: session-6.scope: Deactivated successfully.
May 16 16:09:21.011219 systemd-logind[1510]: Session 6 logged out. Waiting for processes to exit.
May 16 16:09:21.012439 systemd[1]: Started sshd@6-10.0.0.48:22-10.0.0.1:32790.service - OpenSSH per-connection server daemon (10.0.0.1:32790).
May 16 16:09:21.013258 systemd-logind[1510]: Removed session 6.
May 16 16:09:21.062087 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 32790 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI
May 16 16:09:21.063196 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:09:21.067506 systemd-logind[1510]: New session 7 of user core.
May 16 16:09:21.077648 systemd[1]: Started session-7.scope - Session 7 of User core.
May 16 16:09:21.127861 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 16 16:09:21.128415 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 16:09:21.486508 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 16 16:09:21.507784 (dockerd)[1746]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 16 16:09:21.764299 dockerd[1746]: time="2025-05-16T16:09:21.764171413Z" level=info msg="Starting up"
May 16 16:09:21.766015 dockerd[1746]: time="2025-05-16T16:09:21.765991541Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 16 16:09:21.874789 dockerd[1746]: time="2025-05-16T16:09:21.874730780Z" level=info msg="Loading containers: start."
May 16 16:09:21.882502 kernel: Initializing XFRM netlink socket
May 16 16:09:22.070603 systemd-networkd[1440]: docker0: Link UP
May 16 16:09:22.073414 dockerd[1746]: time="2025-05-16T16:09:22.073370671Z" level=info msg="Loading containers: done."
May 16 16:09:22.084531 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck909172508-merged.mount: Deactivated successfully.
May 16 16:09:22.085374 dockerd[1746]: time="2025-05-16T16:09:22.085101867Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 16 16:09:22.085374 dockerd[1746]: time="2025-05-16T16:09:22.085177280Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 16 16:09:22.085374 dockerd[1746]: time="2025-05-16T16:09:22.085272272Z" level=info msg="Initializing buildkit"
May 16 16:09:22.105563 dockerd[1746]: time="2025-05-16T16:09:22.105517767Z" level=info msg="Completed buildkit initialization"
May 16 16:09:22.111541 dockerd[1746]: time="2025-05-16T16:09:22.111482768Z" level=info msg="Daemon has completed initialization"
May 16 16:09:22.111739 dockerd[1746]: time="2025-05-16T16:09:22.111556972Z" level=info msg="API listen on /run/docker.sock"
May 16 16:09:22.111819 systemd[1]: Started docker.service - Docker Application Container Engine.
May 16 16:09:22.972055 containerd[1531]: time="2025-05-16T16:09:22.972009613Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\""
May 16 16:09:23.771077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount334158538.mount: Deactivated successfully.
May 16 16:09:24.699259 containerd[1531]: time="2025-05-16T16:09:24.699203655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:09:24.699735 containerd[1531]: time="2025-05-16T16:09:24.699687116Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=25651976"
May 16 16:09:24.700617 containerd[1531]: time="2025-05-16T16:09:24.700582449Z" level=info msg="ImageCreate event name:\"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:09:24.703433 containerd[1531]: time="2025-05-16T16:09:24.703398875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:09:24.704271 containerd[1531]: time="2025-05-16T16:09:24.704232192Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"25648774\" in 1.732177314s"
May 16 16:09:24.704304 containerd[1531]: time="2025-05-16T16:09:24.704274542Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\""
May 16 16:09:24.707239 containerd[1531]: time="2025-05-16T16:09:24.707110795Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\""
May 16 16:09:25.996243 containerd[1531]: time="2025-05-16T16:09:25.996197573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:09:25.997215 containerd[1531]: time="2025-05-16T16:09:25.997149250Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=22459530"
May 16 16:09:25.997773 containerd[1531]: time="2025-05-16T16:09:25.997724741Z" level=info msg="ImageCreate event name:\"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:09:26.000501 containerd[1531]: time="2025-05-16T16:09:26.000459601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:09:26.001403 containerd[1531]: time="2025-05-16T16:09:26.001362644Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"23995294\" in 1.294219845s"
May 16 16:09:26.001450 containerd[1531]: time="2025-05-16T16:09:26.001402047Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\""
May 16 16:09:26.002032 containerd[1531]: time="2025-05-16T16:09:26.001827575Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\""
May 16 16:09:27.015300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 16 16:09:27.017888 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 16:09:27.188851 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 16:09:27.192849 (kubelet)[2027]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 16:09:27.316378 containerd[1531]: time="2025-05-16T16:09:27.316238148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:09:27.316903 containerd[1531]: time="2025-05-16T16:09:27.316872456Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=17125281"
May 16 16:09:27.318363 containerd[1531]: time="2025-05-16T16:09:27.318281461Z" level=info msg="ImageCreate event name:\"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:09:27.320611 containerd[1531]: time="2025-05-16T16:09:27.320582954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:09:27.321566 containerd[1531]: time="2025-05-16T16:09:27.321543086Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"18661063\" in 1.319682941s"
May 16 16:09:27.321653 containerd[1531]: time="2025-05-16T16:09:27.321640280Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\""
May 16 16:09:27.322134 containerd[1531]: time="2025-05-16T16:09:27.322104208Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\""
May 16 16:09:27.329120 kubelet[2027]: E0516 16:09:27.329079 2027 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 16:09:27.332008 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 16:09:27.332144 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 16:09:27.333576 systemd[1]: kubelet.service: Consumed 150ms CPU time, 107.3M memory peak.
May 16 16:09:28.459413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount922195352.mount: Deactivated successfully.
May 16 16:09:28.767508 containerd[1531]: time="2025-05-16T16:09:28.767439131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:09:28.768119 containerd[1531]: time="2025-05-16T16:09:28.768070381Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=26871377"
May 16 16:09:28.768722 containerd[1531]: time="2025-05-16T16:09:28.768686583Z" level=info msg="ImageCreate event name:\"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:09:28.770577 containerd[1531]: time="2025-05-16T16:09:28.770543416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:09:28.771655 containerd[1531]: time="2025-05-16T16:09:28.771619122Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"26870394\" in 1.449464013s"
May 16 16:09:28.771690 containerd[1531]: time="2025-05-16T16:09:28.771657243Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\""
May 16 16:09:28.772105 containerd[1531]: time="2025-05-16T16:09:28.772076298Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 16 16:09:29.545734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2975266212.mount: Deactivated successfully.
May 16 16:09:30.237821 containerd[1531]: time="2025-05-16T16:09:30.237654576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:09:30.238479 containerd[1531]: time="2025-05-16T16:09:30.238436283Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
May 16 16:09:30.239400 containerd[1531]: time="2025-05-16T16:09:30.239363506Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:09:30.242091 containerd[1531]: time="2025-05-16T16:09:30.242039275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:09:30.244066 containerd[1531]: time="2025-05-16T16:09:30.243980411Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.471852917s"
May 16 16:09:30.244066 containerd[1531]: time="2025-05-16T16:09:30.244015937Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
May 16 16:09:30.244599 containerd[1531]: time="2025-05-16T16:09:30.244575824Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 16 16:09:30.908611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2989443163.mount: Deactivated successfully.
May 16 16:09:30.913420 containerd[1531]: time="2025-05-16T16:09:30.913371290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 16:09:30.914300 containerd[1531]: time="2025-05-16T16:09:30.914264509Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
May 16 16:09:30.914890 containerd[1531]: time="2025-05-16T16:09:30.914858198Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 16:09:30.916891 containerd[1531]: time="2025-05-16T16:09:30.916857596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 16:09:30.917436 containerd[1531]: time="2025-05-16T16:09:30.917384723Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 672.782234ms"
May 16 16:09:30.917436 containerd[1531]: time="2025-05-16T16:09:30.917422014Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 16 16:09:30.917919 containerd[1531]: time="2025-05-16T16:09:30.917867180Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 16 16:09:31.508820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3909137580.mount: Deactivated successfully.
May 16 16:09:33.442358 containerd[1531]: time="2025-05-16T16:09:33.442306718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:09:33.443042 containerd[1531]: time="2025-05-16T16:09:33.443007184Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467"
May 16 16:09:33.444088 containerd[1531]: time="2025-05-16T16:09:33.444053896Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:09:33.447628 containerd[1531]: time="2025-05-16T16:09:33.447594568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:09:33.448749 containerd[1531]: time="2025-05-16T16:09:33.448721251Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.530828095s"
May 16 16:09:33.448805 containerd[1531]: time="2025-05-16T16:09:33.448756549Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
May 16 16:09:37.516043 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 16 16:09:37.517819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 16:09:37.639607 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 16:09:37.642683 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 16:09:37.675627 kubelet[2186]: E0516 16:09:37.675563 2186 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 16:09:37.677960 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 16:09:37.678073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 16:09:37.678720 systemd[1]: kubelet.service: Consumed 123ms CPU time, 106.9M memory peak.
May 16 16:09:38.691718 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 16:09:38.691862 systemd[1]: kubelet.service: Consumed 123ms CPU time, 106.9M memory peak.
May 16 16:09:38.693848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 16:09:38.712402 systemd[1]: Reload requested from client PID 2199 ('systemctl') (unit session-7.scope)...
May 16 16:09:38.712418 systemd[1]: Reloading...
May 16 16:09:38.786785 zram_generator::config[2243]: No configuration found.
May 16 16:09:38.870345 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 16:09:38.953816 systemd[1]: Reloading finished in 241 ms.
May 16 16:09:38.985356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 16:09:38.987929 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 16:09:38.988588 systemd[1]: kubelet.service: Deactivated successfully.
May 16 16:09:38.988805 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 16:09:38.988841 systemd[1]: kubelet.service: Consumed 89ms CPU time, 95.3M memory peak.
May 16 16:09:38.990387 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 16:09:39.105931 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 16:09:39.108987 (kubelet)[2290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 16 16:09:39.142329 kubelet[2290]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 16:09:39.142329 kubelet[2290]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 16 16:09:39.142329 kubelet[2290]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 16:09:39.142640 kubelet[2290]: I0516 16:09:39.142386 2290 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 16 16:09:39.595346 kubelet[2290]: I0516 16:09:39.595311 2290 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
May 16 16:09:39.595346 kubelet[2290]: I0516 16:09:39.595341 2290 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 16 16:09:39.595648 kubelet[2290]: I0516 16:09:39.595633 2290 server.go:934] "Client rotation is on, will bootstrap in background"
May 16 16:09:39.635270 kubelet[2290]: E0516 16:09:39.635246 2290 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError"
May 16 16:09:39.636370 kubelet[2290]: I0516 16:09:39.636349 2290 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 16 16:09:39.645666 kubelet[2290]: I0516 16:09:39.645589 2290 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 16 16:09:39.649043 kubelet[2290]: I0516 16:09:39.649025 2290 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 16 16:09:39.649802 kubelet[2290]: I0516 16:09:39.649780 2290 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 16 16:09:39.651779 kubelet[2290]: I0516 16:09:39.651750 2290 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 16 16:09:39.652129 kubelet[2290]: I0516 16:09:39.651776 2290 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 16 16:09:39.652691 kubelet[2290]: I0516 16:09:39.652225 2290 topology_manager.go:138] "Creating topology manager with none policy"
May 16 16:09:39.652691 kubelet[2290]: I0516 16:09:39.652252 2290 container_manager_linux.go:300] "Creating device plugin manager"
May 16 16:09:39.652691 kubelet[2290]: I0516 16:09:39.652495 2290 state_mem.go:36] "Initialized new in-memory state store"
May 16 16:09:39.655446 kubelet[2290]: I0516 16:09:39.655398 2290 kubelet.go:408] "Attempting to sync node with API server"
May 16 16:09:39.655446 kubelet[2290]: I0516 16:09:39.655429 2290 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 16 16:09:39.655446 kubelet[2290]: I0516 16:09:39.655447 2290 kubelet.go:314] "Adding apiserver pod source"
May 16 16:09:39.655570 kubelet[2290]: I0516 16:09:39.655457 2290 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 16 16:09:39.658110 kubelet[2290]: W0516 16:09:39.658065 2290 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
May 16 16:09:39.658235 kubelet[2290]: E0516 16:09:39.658216 2290 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError"
May 16 16:09:39.658951 kubelet[2290]: W0516 16:09:39.658918 2290 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
May 16 16:09:39.659066 kubelet[2290]: E0516 16:09:39.659040 2290 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError"
May 16 16:09:39.659185 kubelet[2290]: I0516 16:09:39.659160 2290 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 16 16:09:39.659920 kubelet[2290]: I0516 16:09:39.659907 2290 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 16 16:09:39.660137 kubelet[2290]: W0516 16:09:39.660122 2290 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 16 16:09:39.661212 kubelet[2290]: I0516 16:09:39.661192 2290 server.go:1274] "Started kubelet"
May 16 16:09:39.661599 kubelet[2290]: I0516 16:09:39.661572 2290 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 16 16:09:39.662881 kubelet[2290]: I0516 16:09:39.662859 2290 server.go:449] "Adding debug handlers to kubelet server"
May 16 16:09:39.664320 kubelet[2290]: I0516 16:09:39.664265 2290 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 16 16:09:39.664869 kubelet[2290]: I0516 16:09:39.664553 2290 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 16 16:09:39.665837 kubelet[2290]: E0516 16:09:39.664683 2290 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.48:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.48:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18400dc163096fd4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 16:09:39.6611645 +0000 UTC m=+0.549367260,LastTimestamp:2025-05-16 16:09:39.6611645 +0000 UTC m=+0.549367260,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 16 16:09:39.667091 kubelet[2290]: I0516 16:09:39.666268 2290 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 16 16:09:39.667091 kubelet[2290]: I0516 16:09:39.666273 2290 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 16 16:09:39.667091 kubelet[2290]: I0516 16:09:39.666353 2290 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 16 16:09:39.667091 kubelet[2290]: E0516 16:09:39.666422 2290 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 16:09:39.667091 kubelet[2290]: I0516 16:09:39.666675 2290 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
May 16 16:09:39.667091 kubelet[2290]: W0516 16:09:39.666846 2290 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
May 16 16:09:39.667091 kubelet[2290]: E0516 16:09:39.666880 2290 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError"
May 16 16:09:39.667091
kubelet[2290]: E0516 16:09:39.667021 2290 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="200ms" May 16 16:09:39.667091 kubelet[2290]: E0516 16:09:39.667043 2290 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 16:09:39.667091 kubelet[2290]: I0516 16:09:39.667099 2290 reconciler.go:26] "Reconciler: start to sync state" May 16 16:09:39.667630 kubelet[2290]: I0516 16:09:39.667611 2290 factory.go:221] Registration of the systemd container factory successfully May 16 16:09:39.667768 kubelet[2290]: I0516 16:09:39.667750 2290 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 16:09:39.668923 kubelet[2290]: I0516 16:09:39.668907 2290 factory.go:221] Registration of the containerd container factory successfully May 16 16:09:39.681919 kubelet[2290]: I0516 16:09:39.681896 2290 cpu_manager.go:214] "Starting CPU manager" policy="none" May 16 16:09:39.681919 kubelet[2290]: I0516 16:09:39.681914 2290 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 16 16:09:39.682006 kubelet[2290]: I0516 16:09:39.681930 2290 state_mem.go:36] "Initialized new in-memory state store" May 16 16:09:39.682668 kubelet[2290]: I0516 16:09:39.682630 2290 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 16:09:39.683570 kubelet[2290]: I0516 16:09:39.683548 2290 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 16 16:09:39.683570 kubelet[2290]: I0516 16:09:39.683571 2290 status_manager.go:217] "Starting to sync pod status with apiserver" May 16 16:09:39.683773 kubelet[2290]: I0516 16:09:39.683586 2290 kubelet.go:2321] "Starting kubelet main sync loop" May 16 16:09:39.683773 kubelet[2290]: E0516 16:09:39.683622 2290 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 16:09:39.766587 kubelet[2290]: E0516 16:09:39.766542 2290 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:09:39.783920 kubelet[2290]: E0516 16:09:39.783867 2290 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 16 16:09:39.804687 kubelet[2290]: I0516 16:09:39.804659 2290 policy_none.go:49] "None policy: Start" May 16 16:09:39.805257 kubelet[2290]: W0516 16:09:39.805187 2290 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused May 16 16:09:39.805302 kubelet[2290]: E0516 16:09:39.805273 2290 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" May 16 16:09:39.805649 kubelet[2290]: I0516 16:09:39.805633 2290 memory_manager.go:170] "Starting memorymanager" policy="None" May 16 16:09:39.805680 kubelet[2290]: I0516 16:09:39.805660 2290 state_mem.go:35] "Initializing new in-memory state store" May 16 16:09:39.811372 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
May 16 16:09:39.830230 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 16 16:09:39.832848 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 16 16:09:39.849330 kubelet[2290]: I0516 16:09:39.849137 2290 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 16:09:39.849518 kubelet[2290]: I0516 16:09:39.849345 2290 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 16:09:39.849518 kubelet[2290]: I0516 16:09:39.849358 2290 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 16:09:39.849624 kubelet[2290]: I0516 16:09:39.849588 2290 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 16:09:39.850722 kubelet[2290]: E0516 16:09:39.850685 2290 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 16 16:09:39.867586 kubelet[2290]: E0516 16:09:39.867554 2290 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="400ms" May 16 16:09:39.950809 kubelet[2290]: I0516 16:09:39.950757 2290 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 16:09:39.951208 kubelet[2290]: E0516 16:09:39.951168 2290 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" May 16 16:09:39.991338 systemd[1]: Created slice kubepods-burstable-podeba576150e6d915208731a254b2fda06.slice - libcontainer container kubepods-burstable-podeba576150e6d915208731a254b2fda06.slice. 
May 16 16:09:40.010024 systemd[1]: Created slice kubepods-burstable-poda3416600bab1918b24583836301c9096.slice - libcontainer container kubepods-burstable-poda3416600bab1918b24583836301c9096.slice. May 16 16:09:40.032948 systemd[1]: Created slice kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice - libcontainer container kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice. May 16 16:09:40.068664 kubelet[2290]: I0516 16:09:40.068623 2290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 16 16:09:40.068664 kubelet[2290]: I0516 16:09:40.068656 2290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eba576150e6d915208731a254b2fda06-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"eba576150e6d915208731a254b2fda06\") " pod="kube-system/kube-apiserver-localhost" May 16 16:09:40.068791 kubelet[2290]: I0516 16:09:40.068676 2290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:09:40.068791 kubelet[2290]: I0516 16:09:40.068699 2290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 
16 16:09:40.068791 kubelet[2290]: I0516 16:09:40.068714 2290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:09:40.068791 kubelet[2290]: I0516 16:09:40.068729 2290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:09:40.068791 kubelet[2290]: I0516 16:09:40.068744 2290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:09:40.068949 kubelet[2290]: I0516 16:09:40.068758 2290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eba576150e6d915208731a254b2fda06-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"eba576150e6d915208731a254b2fda06\") " pod="kube-system/kube-apiserver-localhost" May 16 16:09:40.068949 kubelet[2290]: I0516 16:09:40.068772 2290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eba576150e6d915208731a254b2fda06-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"eba576150e6d915208731a254b2fda06\") " pod="kube-system/kube-apiserver-localhost" May 16 
16:09:40.153110 kubelet[2290]: I0516 16:09:40.153027 2290 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 16:09:40.153539 kubelet[2290]: E0516 16:09:40.153507 2290 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" May 16 16:09:40.268654 kubelet[2290]: E0516 16:09:40.268595 2290 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="800ms" May 16 16:09:40.308973 kubelet[2290]: E0516 16:09:40.308887 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:40.309554 containerd[1531]: time="2025-05-16T16:09:40.309517140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:eba576150e6d915208731a254b2fda06,Namespace:kube-system,Attempt:0,}" May 16 16:09:40.325410 containerd[1531]: time="2025-05-16T16:09:40.325367213Z" level=info msg="connecting to shim 4999b36c004582a036a8a6a0f3fc108250996796e0af90c362d007b6ad67afc1" address="unix:///run/containerd/s/5f00ce9b610dbe55b9bc33a848797138b45d722754846fa69fe183f838ac2bbf" namespace=k8s.io protocol=ttrpc version=3 May 16 16:09:40.331316 kubelet[2290]: E0516 16:09:40.331290 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:40.331936 containerd[1531]: time="2025-05-16T16:09:40.331890207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,}" May 16 
16:09:40.335213 kubelet[2290]: E0516 16:09:40.335187 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:40.335712 containerd[1531]: time="2025-05-16T16:09:40.335677362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,}" May 16 16:09:40.347964 systemd[1]: Started cri-containerd-4999b36c004582a036a8a6a0f3fc108250996796e0af90c362d007b6ad67afc1.scope - libcontainer container 4999b36c004582a036a8a6a0f3fc108250996796e0af90c362d007b6ad67afc1. May 16 16:09:40.354081 containerd[1531]: time="2025-05-16T16:09:40.354000625Z" level=info msg="connecting to shim 90f58e3ce3b0ab571ebba388632fb115721661599682c393f662400a5e8575d3" address="unix:///run/containerd/s/20ab117943cb680c0027dcafb0e22acb050af7ea4ce5598c5ea3cba4b877cc2d" namespace=k8s.io protocol=ttrpc version=3 May 16 16:09:40.366204 containerd[1531]: time="2025-05-16T16:09:40.366164447Z" level=info msg="connecting to shim d6ef59ad2147f894311be38173825486f2d3299f902996230bfa15e2b6ec0f31" address="unix:///run/containerd/s/f86031fb1ef8301f0449a6616a361fd1ea7fc0feb81a8742a75941d8ebf287bc" namespace=k8s.io protocol=ttrpc version=3 May 16 16:09:40.377792 systemd[1]: Started cri-containerd-90f58e3ce3b0ab571ebba388632fb115721661599682c393f662400a5e8575d3.scope - libcontainer container 90f58e3ce3b0ab571ebba388632fb115721661599682c393f662400a5e8575d3. May 16 16:09:40.385201 systemd[1]: Started cri-containerd-d6ef59ad2147f894311be38173825486f2d3299f902996230bfa15e2b6ec0f31.scope - libcontainer container d6ef59ad2147f894311be38173825486f2d3299f902996230bfa15e2b6ec0f31. 
May 16 16:09:40.391719 containerd[1531]: time="2025-05-16T16:09:40.391664125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:eba576150e6d915208731a254b2fda06,Namespace:kube-system,Attempt:0,} returns sandbox id \"4999b36c004582a036a8a6a0f3fc108250996796e0af90c362d007b6ad67afc1\"" May 16 16:09:40.396960 kubelet[2290]: E0516 16:09:40.396929 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:40.399498 containerd[1531]: time="2025-05-16T16:09:40.399125403Z" level=info msg="CreateContainer within sandbox \"4999b36c004582a036a8a6a0f3fc108250996796e0af90c362d007b6ad67afc1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 16 16:09:40.406698 containerd[1531]: time="2025-05-16T16:09:40.406610416Z" level=info msg="Container 74b3a64555d0186f143e340afcbb3b6151a5960d701622802005aae497263aab: CDI devices from CRI Config.CDIDevices: []" May 16 16:09:40.414376 containerd[1531]: time="2025-05-16T16:09:40.414243004Z" level=info msg="CreateContainer within sandbox \"4999b36c004582a036a8a6a0f3fc108250996796e0af90c362d007b6ad67afc1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"74b3a64555d0186f143e340afcbb3b6151a5960d701622802005aae497263aab\"" May 16 16:09:40.414911 containerd[1531]: time="2025-05-16T16:09:40.414886978Z" level=info msg="StartContainer for \"74b3a64555d0186f143e340afcbb3b6151a5960d701622802005aae497263aab\"" May 16 16:09:40.416038 containerd[1531]: time="2025-05-16T16:09:40.416009900Z" level=info msg="connecting to shim 74b3a64555d0186f143e340afcbb3b6151a5960d701622802005aae497263aab" address="unix:///run/containerd/s/5f00ce9b610dbe55b9bc33a848797138b45d722754846fa69fe183f838ac2bbf" protocol=ttrpc version=3 May 16 16:09:40.419961 containerd[1531]: time="2025-05-16T16:09:40.419850530Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,} returns sandbox id \"90f58e3ce3b0ab571ebba388632fb115721661599682c393f662400a5e8575d3\"" May 16 16:09:40.421349 kubelet[2290]: E0516 16:09:40.421322 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:40.423352 containerd[1531]: time="2025-05-16T16:09:40.423316038Z" level=info msg="CreateContainer within sandbox \"90f58e3ce3b0ab571ebba388632fb115721661599682c393f662400a5e8575d3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 16 16:09:40.423792 containerd[1531]: time="2025-05-16T16:09:40.423736509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6ef59ad2147f894311be38173825486f2d3299f902996230bfa15e2b6ec0f31\"" May 16 16:09:40.424648 kubelet[2290]: E0516 16:09:40.424623 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:40.426580 containerd[1531]: time="2025-05-16T16:09:40.426545155Z" level=info msg="CreateContainer within sandbox \"d6ef59ad2147f894311be38173825486f2d3299f902996230bfa15e2b6ec0f31\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 16 16:09:40.430931 containerd[1531]: time="2025-05-16T16:09:40.430895432Z" level=info msg="Container ed354a00ca8bb59c757334d5854d0e92ca3cf6f000952f5e43c8d3a1e512f8f7: CDI devices from CRI Config.CDIDevices: []" May 16 16:09:40.434647 containerd[1531]: time="2025-05-16T16:09:40.434615584Z" level=info msg="Container 4584cf96d980c0b5f22685347350530551bda0a93e74a9ea000790265af95175: CDI devices from CRI Config.CDIDevices: []" May 16 16:09:40.438546 
containerd[1531]: time="2025-05-16T16:09:40.438503404Z" level=info msg="CreateContainer within sandbox \"90f58e3ce3b0ab571ebba388632fb115721661599682c393f662400a5e8575d3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ed354a00ca8bb59c757334d5854d0e92ca3cf6f000952f5e43c8d3a1e512f8f7\"" May 16 16:09:40.439148 containerd[1531]: time="2025-05-16T16:09:40.438861275Z" level=info msg="StartContainer for \"ed354a00ca8bb59c757334d5854d0e92ca3cf6f000952f5e43c8d3a1e512f8f7\"" May 16 16:09:40.439927 containerd[1531]: time="2025-05-16T16:09:40.439888175Z" level=info msg="connecting to shim ed354a00ca8bb59c757334d5854d0e92ca3cf6f000952f5e43c8d3a1e512f8f7" address="unix:///run/containerd/s/20ab117943cb680c0027dcafb0e22acb050af7ea4ce5598c5ea3cba4b877cc2d" protocol=ttrpc version=3 May 16 16:09:40.440177 systemd[1]: Started cri-containerd-74b3a64555d0186f143e340afcbb3b6151a5960d701622802005aae497263aab.scope - libcontainer container 74b3a64555d0186f143e340afcbb3b6151a5960d701622802005aae497263aab. 
May 16 16:09:40.441714 containerd[1531]: time="2025-05-16T16:09:40.441656512Z" level=info msg="CreateContainer within sandbox \"d6ef59ad2147f894311be38173825486f2d3299f902996230bfa15e2b6ec0f31\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4584cf96d980c0b5f22685347350530551bda0a93e74a9ea000790265af95175\"" May 16 16:09:40.442067 containerd[1531]: time="2025-05-16T16:09:40.441971274Z" level=info msg="StartContainer for \"4584cf96d980c0b5f22685347350530551bda0a93e74a9ea000790265af95175\"" May 16 16:09:40.442923 containerd[1531]: time="2025-05-16T16:09:40.442889905Z" level=info msg="connecting to shim 4584cf96d980c0b5f22685347350530551bda0a93e74a9ea000790265af95175" address="unix:///run/containerd/s/f86031fb1ef8301f0449a6616a361fd1ea7fc0feb81a8742a75941d8ebf287bc" protocol=ttrpc version=3 May 16 16:09:40.466643 systemd[1]: Started cri-containerd-4584cf96d980c0b5f22685347350530551bda0a93e74a9ea000790265af95175.scope - libcontainer container 4584cf96d980c0b5f22685347350530551bda0a93e74a9ea000790265af95175. May 16 16:09:40.468291 systemd[1]: Started cri-containerd-ed354a00ca8bb59c757334d5854d0e92ca3cf6f000952f5e43c8d3a1e512f8f7.scope - libcontainer container ed354a00ca8bb59c757334d5854d0e92ca3cf6f000952f5e43c8d3a1e512f8f7. 
May 16 16:09:40.483438 containerd[1531]: time="2025-05-16T16:09:40.483375259Z" level=info msg="StartContainer for \"74b3a64555d0186f143e340afcbb3b6151a5960d701622802005aae497263aab\" returns successfully" May 16 16:09:40.522224 containerd[1531]: time="2025-05-16T16:09:40.522190979Z" level=info msg="StartContainer for \"4584cf96d980c0b5f22685347350530551bda0a93e74a9ea000790265af95175\" returns successfully" May 16 16:09:40.522329 containerd[1531]: time="2025-05-16T16:09:40.522303692Z" level=info msg="StartContainer for \"ed354a00ca8bb59c757334d5854d0e92ca3cf6f000952f5e43c8d3a1e512f8f7\" returns successfully" May 16 16:09:40.555040 kubelet[2290]: I0516 16:09:40.554992 2290 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 16:09:40.555350 kubelet[2290]: E0516 16:09:40.555320 2290 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" May 16 16:09:40.692057 kubelet[2290]: E0516 16:09:40.691227 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:40.694759 kubelet[2290]: E0516 16:09:40.694705 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:40.697433 kubelet[2290]: E0516 16:09:40.697412 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:41.357486 kubelet[2290]: I0516 16:09:41.357362 2290 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 16:09:41.663314 kubelet[2290]: I0516 16:09:41.663186 2290 apiserver.go:52] "Watching apiserver" May 16 16:09:41.694612 
kubelet[2290]: E0516 16:09:41.694559 2290 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 16 16:09:41.699691 kubelet[2290]: E0516 16:09:41.699669 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:41.724978 kubelet[2290]: I0516 16:09:41.724946 2290 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 16 16:09:41.724978 kubelet[2290]: E0516 16:09:41.724977 2290 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 16 16:09:41.767311 kubelet[2290]: I0516 16:09:41.767270 2290 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 16 16:09:43.574255 systemd[1]: Reload requested from client PID 2566 ('systemctl') (unit session-7.scope)... May 16 16:09:43.574274 systemd[1]: Reloading... May 16 16:09:43.659516 zram_generator::config[2612]: No configuration found. May 16 16:09:43.725758 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 16:09:43.821700 systemd[1]: Reloading finished in 247 ms. May 16 16:09:43.848118 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 16 16:09:43.866429 systemd[1]: kubelet.service: Deactivated successfully. May 16 16:09:43.867555 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 16:09:43.867615 systemd[1]: kubelet.service: Consumed 914ms CPU time, 128.3M memory peak. May 16 16:09:43.869238 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 16 16:09:43.992212 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 16:09:43.995500 (kubelet)[2651]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 16:09:44.036392 kubelet[2651]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 16:09:44.036392 kubelet[2651]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 16 16:09:44.036392 kubelet[2651]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 16:09:44.036727 kubelet[2651]: I0516 16:09:44.036450 2651 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 16:09:44.042491 kubelet[2651]: I0516 16:09:44.041569 2651 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 16 16:09:44.042491 kubelet[2651]: I0516 16:09:44.041595 2651 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 16:09:44.042491 kubelet[2651]: I0516 16:09:44.041799 2651 server.go:934] "Client rotation is on, will bootstrap in background" May 16 16:09:44.043280 kubelet[2651]: I0516 16:09:44.043242 2651 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 16 16:09:44.045271 kubelet[2651]: I0516 16:09:44.045234 2651 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 16:09:44.048514 kubelet[2651]: I0516 16:09:44.048493 2651 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 16 16:09:44.051166 kubelet[2651]: I0516 16:09:44.051142 2651 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 16 16:09:44.051943 kubelet[2651]: I0516 16:09:44.051924 2651 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 16 16:09:44.052144 kubelet[2651]: I0516 16:09:44.052122 2651 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 16:09:44.052379 kubelet[2651]: I0516 16:09:44.052202 2651 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagef
s.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 16:09:44.052553 kubelet[2651]: I0516 16:09:44.052540 2651 topology_manager.go:138] "Creating topology manager with none policy" May 16 16:09:44.052610 kubelet[2651]: I0516 16:09:44.052603 2651 container_manager_linux.go:300] "Creating device plugin manager" May 16 16:09:44.052693 kubelet[2651]: I0516 16:09:44.052684 2651 state_mem.go:36] "Initialized new in-memory state store" May 16 16:09:44.052848 kubelet[2651]: I0516 16:09:44.052835 2651 kubelet.go:408] "Attempting to sync node with API server" May 16 16:09:44.052914 kubelet[2651]: I0516 16:09:44.052904 2651 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 16:09:44.052973 kubelet[2651]: I0516 16:09:44.052964 2651 kubelet.go:314] "Adding apiserver pod source" May 16 16:09:44.053025 kubelet[2651]: I0516 16:09:44.053015 2651 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 16:09:44.053573 kubelet[2651]: I0516 16:09:44.053539 2651 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 16 16:09:44.054480 kubelet[2651]: I0516 16:09:44.054038 2651 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 16:09:44.054710 kubelet[2651]: I0516 16:09:44.054689 2651 server.go:1274] "Started kubelet" May 16 16:09:44.056022 kubelet[2651]: I0516 16:09:44.056001 2651 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 
16:09:44.056869 kubelet[2651]: I0516 16:09:44.056842 2651 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 16:09:44.056955 kubelet[2651]: I0516 16:09:44.056928 2651 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 16 16:09:44.057434 kubelet[2651]: I0516 16:09:44.057413 2651 volume_manager.go:289] "Starting Kubelet Volume Manager" May 16 16:09:44.057745 kubelet[2651]: I0516 16:09:44.057726 2651 server.go:449] "Adding debug handlers to kubelet server" May 16 16:09:44.058266 kubelet[2651]: I0516 16:09:44.058222 2651 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 16:09:44.058421 kubelet[2651]: I0516 16:09:44.058400 2651 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 16:09:44.059011 kubelet[2651]: I0516 16:09:44.058984 2651 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 16 16:09:44.059113 kubelet[2651]: I0516 16:09:44.059097 2651 reconciler.go:26] "Reconciler: start to sync state" May 16 16:09:44.059713 kubelet[2651]: E0516 16:09:44.059691 2651 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:09:44.061068 kubelet[2651]: E0516 16:09:44.061047 2651 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 16:09:44.061815 kubelet[2651]: I0516 16:09:44.061791 2651 factory.go:221] Registration of the containerd container factory successfully May 16 16:09:44.061815 kubelet[2651]: I0516 16:09:44.061810 2651 factory.go:221] Registration of the systemd container factory successfully May 16 16:09:44.063540 kubelet[2651]: I0516 16:09:44.061879 2651 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 16:09:44.071991 kubelet[2651]: I0516 16:09:44.071953 2651 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 16:09:44.073179 kubelet[2651]: I0516 16:09:44.073159 2651 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 16 16:09:44.073179 kubelet[2651]: I0516 16:09:44.073181 2651 status_manager.go:217] "Starting to sync pod status with apiserver" May 16 16:09:44.073296 kubelet[2651]: I0516 16:09:44.073196 2651 kubelet.go:2321] "Starting kubelet main sync loop" May 16 16:09:44.073296 kubelet[2651]: E0516 16:09:44.073250 2651 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 16:09:44.109863 kubelet[2651]: I0516 16:09:44.109771 2651 cpu_manager.go:214] "Starting CPU manager" policy="none" May 16 16:09:44.109863 kubelet[2651]: I0516 16:09:44.109787 2651 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 16 16:09:44.109863 kubelet[2651]: I0516 16:09:44.109805 2651 state_mem.go:36] "Initialized new in-memory state store" May 16 16:09:44.109988 kubelet[2651]: I0516 16:09:44.109936 2651 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 16 16:09:44.109988 kubelet[2651]: I0516 16:09:44.109947 2651 state_mem.go:96] "Updated CPUSet assignments" assignments={} 
May 16 16:09:44.109988 kubelet[2651]: I0516 16:09:44.109966 2651 policy_none.go:49] "None policy: Start" May 16 16:09:44.110479 kubelet[2651]: I0516 16:09:44.110445 2651 memory_manager.go:170] "Starting memorymanager" policy="None" May 16 16:09:44.110651 kubelet[2651]: I0516 16:09:44.110633 2651 state_mem.go:35] "Initializing new in-memory state store" May 16 16:09:44.111109 kubelet[2651]: I0516 16:09:44.111071 2651 state_mem.go:75] "Updated machine memory state" May 16 16:09:44.115309 kubelet[2651]: I0516 16:09:44.115283 2651 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 16:09:44.115461 kubelet[2651]: I0516 16:09:44.115434 2651 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 16:09:44.115523 kubelet[2651]: I0516 16:09:44.115461 2651 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 16:09:44.115669 kubelet[2651]: I0516 16:09:44.115650 2651 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 16:09:44.218715 kubelet[2651]: I0516 16:09:44.218689 2651 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 16:09:44.224174 kubelet[2651]: I0516 16:09:44.223958 2651 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 16 16:09:44.224174 kubelet[2651]: I0516 16:09:44.224024 2651 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 16 16:09:44.260816 kubelet[2651]: I0516 16:09:44.260790 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:09:44.260882 kubelet[2651]: I0516 16:09:44.260821 2651 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:09:44.260882 kubelet[2651]: I0516 16:09:44.260839 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 16 16:09:44.260882 kubelet[2651]: I0516 16:09:44.260853 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eba576150e6d915208731a254b2fda06-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"eba576150e6d915208731a254b2fda06\") " pod="kube-system/kube-apiserver-localhost" May 16 16:09:44.260882 kubelet[2651]: I0516 16:09:44.260868 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eba576150e6d915208731a254b2fda06-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"eba576150e6d915208731a254b2fda06\") " pod="kube-system/kube-apiserver-localhost" May 16 16:09:44.260986 kubelet[2651]: I0516 16:09:44.260886 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:09:44.260986 kubelet[2651]: I0516 16:09:44.260900 2651 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:09:44.260986 kubelet[2651]: I0516 16:09:44.260915 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:09:44.260986 kubelet[2651]: I0516 16:09:44.260930 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eba576150e6d915208731a254b2fda06-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"eba576150e6d915208731a254b2fda06\") " pod="kube-system/kube-apiserver-localhost" May 16 16:09:44.483632 kubelet[2651]: E0516 16:09:44.483524 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:44.483632 kubelet[2651]: E0516 16:09:44.483532 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:44.483725 kubelet[2651]: E0516 16:09:44.483645 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:44.575160 sudo[2687]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 16 16:09:44.575423 sudo[2687]: 
pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 16 16:09:45.019645 sudo[2687]: pam_unix(sudo:session): session closed for user root May 16 16:09:45.053793 kubelet[2651]: I0516 16:09:45.053754 2651 apiserver.go:52] "Watching apiserver" May 16 16:09:45.059251 kubelet[2651]: I0516 16:09:45.059197 2651 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 16 16:09:45.097691 kubelet[2651]: E0516 16:09:45.097657 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:45.098686 kubelet[2651]: E0516 16:09:45.098652 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:45.103548 kubelet[2651]: E0516 16:09:45.103520 2651 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 16 16:09:45.103761 kubelet[2651]: E0516 16:09:45.103654 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:45.137136 kubelet[2651]: I0516 16:09:45.136979 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.136962616 podStartE2EDuration="1.136962616s" podCreationTimestamp="2025-05-16 16:09:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:09:45.136281551 +0000 UTC m=+1.136715094" watchObservedRunningTime="2025-05-16 16:09:45.136962616 +0000 UTC m=+1.137396119" May 16 16:09:45.151155 kubelet[2651]: I0516 16:09:45.150922 2651 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.150905936 podStartE2EDuration="1.150905936s" podCreationTimestamp="2025-05-16 16:09:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:09:45.143518339 +0000 UTC m=+1.143951882" watchObservedRunningTime="2025-05-16 16:09:45.150905936 +0000 UTC m=+1.151339439" May 16 16:09:45.151319 kubelet[2651]: I0516 16:09:45.151221 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.151215238 podStartE2EDuration="1.151215238s" podCreationTimestamp="2025-05-16 16:09:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:09:45.150842956 +0000 UTC m=+1.151276459" watchObservedRunningTime="2025-05-16 16:09:45.151215238 +0000 UTC m=+1.151648701" May 16 16:09:46.098635 kubelet[2651]: E0516 16:09:46.098604 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:46.838712 sudo[1726]: pam_unix(sudo:session): session closed for user root May 16 16:09:46.839692 sshd[1725]: Connection closed by 10.0.0.1 port 32790 May 16 16:09:46.840076 sshd-session[1723]: pam_unix(sshd:session): session closed for user core May 16 16:09:46.843172 systemd[1]: sshd@6-10.0.0.48:22-10.0.0.1:32790.service: Deactivated successfully. May 16 16:09:46.845062 systemd[1]: session-7.scope: Deactivated successfully. May 16 16:09:46.845965 systemd[1]: session-7.scope: Consumed 7.576s CPU time, 263.4M memory peak. May 16 16:09:46.847190 systemd-logind[1510]: Session 7 logged out. Waiting for processes to exit. 
May 16 16:09:46.848587 systemd-logind[1510]: Removed session 7. May 16 16:09:46.894713 kubelet[2651]: E0516 16:09:46.894685 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:49.548447 kubelet[2651]: I0516 16:09:49.548416 2651 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 16 16:09:49.548973 containerd[1531]: time="2025-05-16T16:09:49.548764799Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 16 16:09:49.549300 kubelet[2651]: I0516 16:09:49.548979 2651 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 16 16:09:50.479150 systemd[1]: Created slice kubepods-besteffort-pod61c1f5ff_7bdd_44fd_9e9d_dffea60bed12.slice - libcontainer container kubepods-besteffort-pod61c1f5ff_7bdd_44fd_9e9d_dffea60bed12.slice. May 16 16:09:50.493786 systemd[1]: Created slice kubepods-burstable-pod553e625f_6202_4540_bd11_79ca63c5dc58.slice - libcontainer container kubepods-burstable-pod553e625f_6202_4540_bd11_79ca63c5dc58.slice. 
May 16 16:09:50.505390 kubelet[2651]: I0516 16:09:50.505032 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/553e625f-6202-4540-bd11-79ca63c5dc58-hubble-tls\") pod \"cilium-pgxc7\" (UID: \"553e625f-6202-4540-bd11-79ca63c5dc58\") " pod="kube-system/cilium-pgxc7" May 16 16:09:50.505390 kubelet[2651]: I0516 16:09:50.505391 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-cilium-run\") pod \"cilium-pgxc7\" (UID: \"553e625f-6202-4540-bd11-79ca63c5dc58\") " pod="kube-system/cilium-pgxc7" May 16 16:09:50.505752 kubelet[2651]: I0516 16:09:50.505420 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-cni-path\") pod \"cilium-pgxc7\" (UID: \"553e625f-6202-4540-bd11-79ca63c5dc58\") " pod="kube-system/cilium-pgxc7" May 16 16:09:50.505752 kubelet[2651]: I0516 16:09:50.505442 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdxlb\" (UniqueName: \"kubernetes.io/projected/553e625f-6202-4540-bd11-79ca63c5dc58-kube-api-access-fdxlb\") pod \"cilium-pgxc7\" (UID: \"553e625f-6202-4540-bd11-79ca63c5dc58\") " pod="kube-system/cilium-pgxc7" May 16 16:09:50.505752 kubelet[2651]: I0516 16:09:50.505463 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/61c1f5ff-7bdd-44fd-9e9d-dffea60bed12-kube-proxy\") pod \"kube-proxy-mxn4j\" (UID: \"61c1f5ff-7bdd-44fd-9e9d-dffea60bed12\") " pod="kube-system/kube-proxy-mxn4j" May 16 16:09:50.505752 kubelet[2651]: I0516 16:09:50.505492 2651 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61c1f5ff-7bdd-44fd-9e9d-dffea60bed12-lib-modules\") pod \"kube-proxy-mxn4j\" (UID: \"61c1f5ff-7bdd-44fd-9e9d-dffea60bed12\") " pod="kube-system/kube-proxy-mxn4j" May 16 16:09:50.505752 kubelet[2651]: I0516 16:09:50.505511 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/553e625f-6202-4540-bd11-79ca63c5dc58-clustermesh-secrets\") pod \"cilium-pgxc7\" (UID: \"553e625f-6202-4540-bd11-79ca63c5dc58\") " pod="kube-system/cilium-pgxc7" May 16 16:09:50.505752 kubelet[2651]: I0516 16:09:50.505537 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-xtables-lock\") pod \"cilium-pgxc7\" (UID: \"553e625f-6202-4540-bd11-79ca63c5dc58\") " pod="kube-system/cilium-pgxc7" May 16 16:09:50.506234 kubelet[2651]: I0516 16:09:50.505555 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwklc\" (UniqueName: \"kubernetes.io/projected/61c1f5ff-7bdd-44fd-9e9d-dffea60bed12-kube-api-access-gwklc\") pod \"kube-proxy-mxn4j\" (UID: \"61c1f5ff-7bdd-44fd-9e9d-dffea60bed12\") " pod="kube-system/kube-proxy-mxn4j" May 16 16:09:50.506234 kubelet[2651]: I0516 16:09:50.505576 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-lib-modules\") pod \"cilium-pgxc7\" (UID: \"553e625f-6202-4540-bd11-79ca63c5dc58\") " pod="kube-system/cilium-pgxc7" May 16 16:09:50.506234 kubelet[2651]: I0516 16:09:50.505594 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-host-proc-sys-net\") pod \"cilium-pgxc7\" (UID: \"553e625f-6202-4540-bd11-79ca63c5dc58\") " pod="kube-system/cilium-pgxc7" May 16 16:09:50.506234 kubelet[2651]: I0516 16:09:50.505612 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-host-proc-sys-kernel\") pod \"cilium-pgxc7\" (UID: \"553e625f-6202-4540-bd11-79ca63c5dc58\") " pod="kube-system/cilium-pgxc7" May 16 16:09:50.506234 kubelet[2651]: I0516 16:09:50.505632 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-etc-cni-netd\") pod \"cilium-pgxc7\" (UID: \"553e625f-6202-4540-bd11-79ca63c5dc58\") " pod="kube-system/cilium-pgxc7" May 16 16:09:50.506780 kubelet[2651]: I0516 16:09:50.505651 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/553e625f-6202-4540-bd11-79ca63c5dc58-cilium-config-path\") pod \"cilium-pgxc7\" (UID: \"553e625f-6202-4540-bd11-79ca63c5dc58\") " pod="kube-system/cilium-pgxc7" May 16 16:09:50.506780 kubelet[2651]: I0516 16:09:50.506457 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/61c1f5ff-7bdd-44fd-9e9d-dffea60bed12-xtables-lock\") pod \"kube-proxy-mxn4j\" (UID: \"61c1f5ff-7bdd-44fd-9e9d-dffea60bed12\") " pod="kube-system/kube-proxy-mxn4j" May 16 16:09:50.506780 kubelet[2651]: I0516 16:09:50.506504 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-bpf-maps\") pod \"cilium-pgxc7\" (UID: 
\"553e625f-6202-4540-bd11-79ca63c5dc58\") " pod="kube-system/cilium-pgxc7" May 16 16:09:50.506780 kubelet[2651]: I0516 16:09:50.506522 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-hostproc\") pod \"cilium-pgxc7\" (UID: \"553e625f-6202-4540-bd11-79ca63c5dc58\") " pod="kube-system/cilium-pgxc7" May 16 16:09:50.506780 kubelet[2651]: I0516 16:09:50.506553 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-cilium-cgroup\") pod \"cilium-pgxc7\" (UID: \"553e625f-6202-4540-bd11-79ca63c5dc58\") " pod="kube-system/cilium-pgxc7" May 16 16:09:50.575496 systemd[1]: Created slice kubepods-besteffort-pod1cb098ea_319d_4585_9fa7_59eeb96761ce.slice - libcontainer container kubepods-besteffort-pod1cb098ea_319d_4585_9fa7_59eeb96761ce.slice. 
May 16 16:09:50.607010 kubelet[2651]: I0516 16:09:50.606968 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1cb098ea-319d-4585-9fa7-59eeb96761ce-cilium-config-path\") pod \"cilium-operator-5d85765b45-2cn6p\" (UID: \"1cb098ea-319d-4585-9fa7-59eeb96761ce\") " pod="kube-system/cilium-operator-5d85765b45-2cn6p" May 16 16:09:50.608200 kubelet[2651]: I0516 16:09:50.607412 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nws8k\" (UniqueName: \"kubernetes.io/projected/1cb098ea-319d-4585-9fa7-59eeb96761ce-kube-api-access-nws8k\") pod \"cilium-operator-5d85765b45-2cn6p\" (UID: \"1cb098ea-319d-4585-9fa7-59eeb96761ce\") " pod="kube-system/cilium-operator-5d85765b45-2cn6p" May 16 16:09:50.790053 kubelet[2651]: E0516 16:09:50.789998 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:50.790842 containerd[1531]: time="2025-05-16T16:09:50.790805847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mxn4j,Uid:61c1f5ff-7bdd-44fd-9e9d-dffea60bed12,Namespace:kube-system,Attempt:0,}" May 16 16:09:50.798088 kubelet[2651]: E0516 16:09:50.798056 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:50.798422 containerd[1531]: time="2025-05-16T16:09:50.798361499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pgxc7,Uid:553e625f-6202-4540-bd11-79ca63c5dc58,Namespace:kube-system,Attempt:0,}" May 16 16:09:50.821822 containerd[1531]: time="2025-05-16T16:09:50.821759074Z" level=info msg="connecting to shim 56d67e096a36e1786b8a30635c6af137958ce3af234679c5a13b75dc7102c1fa" 
address="unix:///run/containerd/s/a2feeca8d46e15293667bb8bf11704f1ccc08ad1f34c10a46c7e6313cecbdccc" namespace=k8s.io protocol=ttrpc version=3 May 16 16:09:50.824365 containerd[1531]: time="2025-05-16T16:09:50.824330705Z" level=info msg="connecting to shim 6f7b9c6e78b848c68dd4c237dfd83d838fa26032bd7f482289ef849ba8480ae5" address="unix:///run/containerd/s/8359c9f257d0081c34ae0861974c7007ed8429fb0961b96ff704a09c2c2b6860" namespace=k8s.io protocol=ttrpc version=3 May 16 16:09:50.847626 systemd[1]: Started cri-containerd-56d67e096a36e1786b8a30635c6af137958ce3af234679c5a13b75dc7102c1fa.scope - libcontainer container 56d67e096a36e1786b8a30635c6af137958ce3af234679c5a13b75dc7102c1fa. May 16 16:09:50.850234 systemd[1]: Started cri-containerd-6f7b9c6e78b848c68dd4c237dfd83d838fa26032bd7f482289ef849ba8480ae5.scope - libcontainer container 6f7b9c6e78b848c68dd4c237dfd83d838fa26032bd7f482289ef849ba8480ae5. May 16 16:09:50.873679 containerd[1531]: time="2025-05-16T16:09:50.873645633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mxn4j,Uid:61c1f5ff-7bdd-44fd-9e9d-dffea60bed12,Namespace:kube-system,Attempt:0,} returns sandbox id \"56d67e096a36e1786b8a30635c6af137958ce3af234679c5a13b75dc7102c1fa\"" May 16 16:09:50.874526 kubelet[2651]: E0516 16:09:50.874502 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:50.878545 containerd[1531]: time="2025-05-16T16:09:50.878500703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pgxc7,Uid:553e625f-6202-4540-bd11-79ca63c5dc58,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f7b9c6e78b848c68dd4c237dfd83d838fa26032bd7f482289ef849ba8480ae5\"" May 16 16:09:50.878811 containerd[1531]: time="2025-05-16T16:09:50.878784212Z" level=info msg="CreateContainer within sandbox \"56d67e096a36e1786b8a30635c6af137958ce3af234679c5a13b75dc7102c1fa\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 16:09:50.879294 kubelet[2651]: E0516 16:09:50.879276 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:50.880239 kubelet[2651]: E0516 16:09:50.880018 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:50.880299 containerd[1531]: time="2025-05-16T16:09:50.880043841Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 16 16:09:50.880909 containerd[1531]: time="2025-05-16T16:09:50.880886687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-2cn6p,Uid:1cb098ea-319d-4585-9fa7-59eeb96761ce,Namespace:kube-system,Attempt:0,}" May 16 16:09:50.894032 containerd[1531]: time="2025-05-16T16:09:50.894002222Z" level=info msg="Container 851229e4b8cf55cca83a44588541ea73a610910778f714bddf0d0cfde0e49804: CDI devices from CRI Config.CDIDevices: []" May 16 16:09:50.901277 containerd[1531]: time="2025-05-16T16:09:50.901221192Z" level=info msg="CreateContainer within sandbox \"56d67e096a36e1786b8a30635c6af137958ce3af234679c5a13b75dc7102c1fa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"851229e4b8cf55cca83a44588541ea73a610910778f714bddf0d0cfde0e49804\"" May 16 16:09:50.902006 containerd[1531]: time="2025-05-16T16:09:50.901984259Z" level=info msg="StartContainer for \"851229e4b8cf55cca83a44588541ea73a610910778f714bddf0d0cfde0e49804\"" May 16 16:09:50.903566 containerd[1531]: time="2025-05-16T16:09:50.903533278Z" level=info msg="connecting to shim 76f3968e0e78305f1eee73383c76db8665f9b1f0aa09ace9b919e9ef2117a368" address="unix:///run/containerd/s/11bd50ac2efc0b37a7fb38179459327e55adc98d0ff4e7f057e03955fbf96192" 
namespace=k8s.io protocol=ttrpc version=3 May 16 16:09:50.904746 containerd[1531]: time="2025-05-16T16:09:50.904711247Z" level=info msg="connecting to shim 851229e4b8cf55cca83a44588541ea73a610910778f714bddf0d0cfde0e49804" address="unix:///run/containerd/s/a2feeca8d46e15293667bb8bf11704f1ccc08ad1f34c10a46c7e6313cecbdccc" protocol=ttrpc version=3 May 16 16:09:50.928619 systemd[1]: Started cri-containerd-76f3968e0e78305f1eee73383c76db8665f9b1f0aa09ace9b919e9ef2117a368.scope - libcontainer container 76f3968e0e78305f1eee73383c76db8665f9b1f0aa09ace9b919e9ef2117a368. May 16 16:09:50.929498 systemd[1]: Started cri-containerd-851229e4b8cf55cca83a44588541ea73a610910778f714bddf0d0cfde0e49804.scope - libcontainer container 851229e4b8cf55cca83a44588541ea73a610910778f714bddf0d0cfde0e49804. May 16 16:09:50.964238 containerd[1531]: time="2025-05-16T16:09:50.964195708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-2cn6p,Uid:1cb098ea-319d-4585-9fa7-59eeb96761ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"76f3968e0e78305f1eee73383c76db8665f9b1f0aa09ace9b919e9ef2117a368\"" May 16 16:09:50.965390 kubelet[2651]: E0516 16:09:50.965287 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:50.974775 containerd[1531]: time="2025-05-16T16:09:50.974722208Z" level=info msg="StartContainer for \"851229e4b8cf55cca83a44588541ea73a610910778f714bddf0d0cfde0e49804\" returns successfully" May 16 16:09:50.986093 kubelet[2651]: E0516 16:09:50.986054 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:51.109210 kubelet[2651]: E0516 16:09:51.108985 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:51.114873 kubelet[2651]: E0516 16:09:51.114850 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:51.119496 kubelet[2651]: I0516 16:09:51.118863 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mxn4j" podStartSLOduration=1.118849654 podStartE2EDuration="1.118849654s" podCreationTimestamp="2025-05-16 16:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:09:51.118565709 +0000 UTC m=+7.118999212" watchObservedRunningTime="2025-05-16 16:09:51.118849654 +0000 UTC m=+7.119283117" May 16 16:09:54.176571 kubelet[2651]: E0516 16:09:54.176530 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:55.124129 kubelet[2651]: E0516 16:09:55.124083 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:56.798446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2867083834.mount: Deactivated successfully. 
May 16 16:09:56.949812 kubelet[2651]: E0516 16:09:56.949645 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:58.192083 containerd[1531]: time="2025-05-16T16:09:58.192031235Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:09:58.192936 containerd[1531]: time="2025-05-16T16:09:58.192906895Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 16 16:09:58.193883 containerd[1531]: time="2025-05-16T16:09:58.193829722Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:09:58.195874 containerd[1531]: time="2025-05-16T16:09:58.195827760Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.315755312s" May 16 16:09:58.195874 containerd[1531]: time="2025-05-16T16:09:58.195872727Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 16 16:09:58.206072 containerd[1531]: time="2025-05-16T16:09:58.206037906Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 16 16:09:58.219034 containerd[1531]: time="2025-05-16T16:09:58.218972767Z" level=info msg="CreateContainer within sandbox \"6f7b9c6e78b848c68dd4c237dfd83d838fa26032bd7f482289ef849ba8480ae5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 16:09:58.246616 containerd[1531]: time="2025-05-16T16:09:58.246228589Z" level=info msg="Container 5a71904058e31868cb13d026193b5818f0fad683d6ea889353c0af62cf2033fc: CDI devices from CRI Config.CDIDevices: []" May 16 16:09:58.249364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4145707724.mount: Deactivated successfully. May 16 16:09:58.251568 containerd[1531]: time="2025-05-16T16:09:58.251528113Z" level=info msg="CreateContainer within sandbox \"6f7b9c6e78b848c68dd4c237dfd83d838fa26032bd7f482289ef849ba8480ae5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5a71904058e31868cb13d026193b5818f0fad683d6ea889353c0af62cf2033fc\"" May 16 16:09:58.252078 containerd[1531]: time="2025-05-16T16:09:58.252054277Z" level=info msg="StartContainer for \"5a71904058e31868cb13d026193b5818f0fad683d6ea889353c0af62cf2033fc\"" May 16 16:09:58.253153 containerd[1531]: time="2025-05-16T16:09:58.253118726Z" level=info msg="connecting to shim 5a71904058e31868cb13d026193b5818f0fad683d6ea889353c0af62cf2033fc" address="unix:///run/containerd/s/8359c9f257d0081c34ae0861974c7007ed8429fb0961b96ff704a09c2c2b6860" protocol=ttrpc version=3 May 16 16:09:58.297626 systemd[1]: Started cri-containerd-5a71904058e31868cb13d026193b5818f0fad683d6ea889353c0af62cf2033fc.scope - libcontainer container 5a71904058e31868cb13d026193b5818f0fad683d6ea889353c0af62cf2033fc. 
May 16 16:09:58.327369 containerd[1531]: time="2025-05-16T16:09:58.327332108Z" level=info msg="StartContainer for \"5a71904058e31868cb13d026193b5818f0fad683d6ea889353c0af62cf2033fc\" returns successfully" May 16 16:09:58.372721 systemd[1]: cri-containerd-5a71904058e31868cb13d026193b5818f0fad683d6ea889353c0af62cf2033fc.scope: Deactivated successfully. May 16 16:09:58.373032 systemd[1]: cri-containerd-5a71904058e31868cb13d026193b5818f0fad683d6ea889353c0af62cf2033fc.scope: Consumed 55ms CPU time, 6.5M memory peak, 3.1M written to disk. May 16 16:09:58.434247 containerd[1531]: time="2025-05-16T16:09:58.434189050Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a71904058e31868cb13d026193b5818f0fad683d6ea889353c0af62cf2033fc\" id:\"5a71904058e31868cb13d026193b5818f0fad683d6ea889353c0af62cf2033fc\" pid:3076 exited_at:{seconds:1747411798 nanos:425130647}" May 16 16:09:58.434988 containerd[1531]: time="2025-05-16T16:09:58.434945931Z" level=info msg="received exit event container_id:\"5a71904058e31868cb13d026193b5818f0fad683d6ea889353c0af62cf2033fc\" id:\"5a71904058e31868cb13d026193b5818f0fad683d6ea889353c0af62cf2033fc\" pid:3076 exited_at:{seconds:1747411798 nanos:425130647}" May 16 16:09:58.464965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a71904058e31868cb13d026193b5818f0fad683d6ea889353c0af62cf2033fc-rootfs.mount: Deactivated successfully. 
May 16 16:09:59.135524 kubelet[2651]: E0516 16:09:59.135483 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:09:59.139680 containerd[1531]: time="2025-05-16T16:09:59.139641484Z" level=info msg="CreateContainer within sandbox \"6f7b9c6e78b848c68dd4c237dfd83d838fa26032bd7f482289ef849ba8480ae5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 16:09:59.175974 containerd[1531]: time="2025-05-16T16:09:59.175926896Z" level=info msg="Container 91d7547abd697978d8eb7bb27f954cd155221a7efdd234a8e797a1438239bb3b: CDI devices from CRI Config.CDIDevices: []" May 16 16:09:59.185017 containerd[1531]: time="2025-05-16T16:09:59.184970065Z" level=info msg="CreateContainer within sandbox \"6f7b9c6e78b848c68dd4c237dfd83d838fa26032bd7f482289ef849ba8480ae5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"91d7547abd697978d8eb7bb27f954cd155221a7efdd234a8e797a1438239bb3b\"" May 16 16:09:59.185483 containerd[1531]: time="2025-05-16T16:09:59.185438415Z" level=info msg="StartContainer for \"91d7547abd697978d8eb7bb27f954cd155221a7efdd234a8e797a1438239bb3b\"" May 16 16:09:59.186375 containerd[1531]: time="2025-05-16T16:09:59.186351754Z" level=info msg="connecting to shim 91d7547abd697978d8eb7bb27f954cd155221a7efdd234a8e797a1438239bb3b" address="unix:///run/containerd/s/8359c9f257d0081c34ae0861974c7007ed8429fb0961b96ff704a09c2c2b6860" protocol=ttrpc version=3 May 16 16:09:59.211624 systemd[1]: Started cri-containerd-91d7547abd697978d8eb7bb27f954cd155221a7efdd234a8e797a1438239bb3b.scope - libcontainer container 91d7547abd697978d8eb7bb27f954cd155221a7efdd234a8e797a1438239bb3b. 
May 16 16:09:59.248769 containerd[1531]: time="2025-05-16T16:09:59.248735156Z" level=info msg="StartContainer for \"91d7547abd697978d8eb7bb27f954cd155221a7efdd234a8e797a1438239bb3b\" returns successfully" May 16 16:09:59.272159 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 16:09:59.272808 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 16 16:09:59.274623 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 16 16:09:59.277270 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 16:09:59.279347 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 16 16:09:59.280185 systemd[1]: cri-containerd-91d7547abd697978d8eb7bb27f954cd155221a7efdd234a8e797a1438239bb3b.scope: Deactivated successfully. May 16 16:09:59.286407 containerd[1531]: time="2025-05-16T16:09:59.283722452Z" level=info msg="received exit event container_id:\"91d7547abd697978d8eb7bb27f954cd155221a7efdd234a8e797a1438239bb3b\" id:\"91d7547abd697978d8eb7bb27f954cd155221a7efdd234a8e797a1438239bb3b\" pid:3123 exited_at:{seconds:1747411799 nanos:283498178}" May 16 16:09:59.286407 containerd[1531]: time="2025-05-16T16:09:59.283752536Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91d7547abd697978d8eb7bb27f954cd155221a7efdd234a8e797a1438239bb3b\" id:\"91d7547abd697978d8eb7bb27f954cd155221a7efdd234a8e797a1438239bb3b\" pid:3123 exited_at:{seconds:1747411799 nanos:283498178}" May 16 16:09:59.307580 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91d7547abd697978d8eb7bb27f954cd155221a7efdd234a8e797a1438239bb3b-rootfs.mount: Deactivated successfully. May 16 16:09:59.308432 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 16:09:59.493915 update_engine[1512]: I20250516 16:09:59.493690 1512 update_attempter.cc:509] Updating boot flags... 
May 16 16:10:00.140062 kubelet[2651]: E0516 16:10:00.140030 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:00.143867 containerd[1531]: time="2025-05-16T16:10:00.143814530Z" level=info msg="CreateContainer within sandbox \"6f7b9c6e78b848c68dd4c237dfd83d838fa26032bd7f482289ef849ba8480ae5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 16:10:00.171183 containerd[1531]: time="2025-05-16T16:10:00.171070893Z" level=info msg="Container 7f4afa6eed5caa0905b089927f87e61b085d2a8238abc25c9dc8a86cb47b99ed: CDI devices from CRI Config.CDIDevices: []" May 16 16:10:00.180871 containerd[1531]: time="2025-05-16T16:10:00.180804934Z" level=info msg="CreateContainer within sandbox \"6f7b9c6e78b848c68dd4c237dfd83d838fa26032bd7f482289ef849ba8480ae5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7f4afa6eed5caa0905b089927f87e61b085d2a8238abc25c9dc8a86cb47b99ed\"" May 16 16:10:00.182239 containerd[1531]: time="2025-05-16T16:10:00.182209216Z" level=info msg="StartContainer for \"7f4afa6eed5caa0905b089927f87e61b085d2a8238abc25c9dc8a86cb47b99ed\"" May 16 16:10:00.184773 containerd[1531]: time="2025-05-16T16:10:00.184698774Z" level=info msg="connecting to shim 7f4afa6eed5caa0905b089927f87e61b085d2a8238abc25c9dc8a86cb47b99ed" address="unix:///run/containerd/s/8359c9f257d0081c34ae0861974c7007ed8429fb0961b96ff704a09c2c2b6860" protocol=ttrpc version=3 May 16 16:10:00.207723 systemd[1]: Started cri-containerd-7f4afa6eed5caa0905b089927f87e61b085d2a8238abc25c9dc8a86cb47b99ed.scope - libcontainer container 7f4afa6eed5caa0905b089927f87e61b085d2a8238abc25c9dc8a86cb47b99ed. May 16 16:10:00.248632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount46694696.mount: Deactivated successfully. 
May 16 16:10:00.259438 containerd[1531]: time="2025-05-16T16:10:00.259397445Z" level=info msg="StartContainer for \"7f4afa6eed5caa0905b089927f87e61b085d2a8238abc25c9dc8a86cb47b99ed\" returns successfully" May 16 16:10:00.271345 systemd[1]: cri-containerd-7f4afa6eed5caa0905b089927f87e61b085d2a8238abc25c9dc8a86cb47b99ed.scope: Deactivated successfully. May 16 16:10:00.272670 containerd[1531]: time="2025-05-16T16:10:00.272590223Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7f4afa6eed5caa0905b089927f87e61b085d2a8238abc25c9dc8a86cb47b99ed\" id:\"7f4afa6eed5caa0905b089927f87e61b085d2a8238abc25c9dc8a86cb47b99ed\" pid:3196 exited_at:{seconds:1747411800 nanos:272245534}" May 16 16:10:00.272670 containerd[1531]: time="2025-05-16T16:10:00.272633230Z" level=info msg="received exit event container_id:\"7f4afa6eed5caa0905b089927f87e61b085d2a8238abc25c9dc8a86cb47b99ed\" id:\"7f4afa6eed5caa0905b089927f87e61b085d2a8238abc25c9dc8a86cb47b99ed\" pid:3196 exited_at:{seconds:1747411800 nanos:272245534}" May 16 16:10:00.292483 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f4afa6eed5caa0905b089927f87e61b085d2a8238abc25c9dc8a86cb47b99ed-rootfs.mount: Deactivated successfully. 
May 16 16:10:00.450434 containerd[1531]: time="2025-05-16T16:10:00.450313601Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:10:00.451056 containerd[1531]: time="2025-05-16T16:10:00.450995540Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 16 16:10:00.451652 containerd[1531]: time="2025-05-16T16:10:00.451616909Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:10:00.452998 containerd[1531]: time="2025-05-16T16:10:00.452960502Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.24688455s" May 16 16:10:00.452998 containerd[1531]: time="2025-05-16T16:10:00.452995507Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 16 16:10:00.456111 containerd[1531]: time="2025-05-16T16:10:00.456063149Z" level=info msg="CreateContainer within sandbox \"76f3968e0e78305f1eee73383c76db8665f9b1f0aa09ace9b919e9ef2117a368\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 16 16:10:00.462345 containerd[1531]: time="2025-05-16T16:10:00.461781252Z" level=info msg="Container 
933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b: CDI devices from CRI Config.CDIDevices: []" May 16 16:10:00.467916 containerd[1531]: time="2025-05-16T16:10:00.467880930Z" level=info msg="CreateContainer within sandbox \"76f3968e0e78305f1eee73383c76db8665f9b1f0aa09ace9b919e9ef2117a368\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b\"" May 16 16:10:00.468503 containerd[1531]: time="2025-05-16T16:10:00.468463094Z" level=info msg="StartContainer for \"933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b\"" May 16 16:10:00.469420 containerd[1531]: time="2025-05-16T16:10:00.469349381Z" level=info msg="connecting to shim 933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b" address="unix:///run/containerd/s/11bd50ac2efc0b37a7fb38179459327e55adc98d0ff4e7f057e03955fbf96192" protocol=ttrpc version=3 May 16 16:10:00.499684 systemd[1]: Started cri-containerd-933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b.scope - libcontainer container 933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b. 
May 16 16:10:00.527652 containerd[1531]: time="2025-05-16T16:10:00.527614767Z" level=info msg="StartContainer for \"933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b\" returns successfully" May 16 16:10:01.147329 kubelet[2651]: E0516 16:10:01.146978 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:01.150248 containerd[1531]: time="2025-05-16T16:10:01.150209853Z" level=info msg="CreateContainer within sandbox \"6f7b9c6e78b848c68dd4c237dfd83d838fa26032bd7f482289ef849ba8480ae5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 16 16:10:01.153674 kubelet[2651]: E0516 16:10:01.153643 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:01.191493 containerd[1531]: time="2025-05-16T16:10:01.191194746Z" level=info msg="Container 0561da43c4e024bc1a15d6aa050bdb3275598b35fce6ae5d8f1619403814b78d: CDI devices from CRI Config.CDIDevices: []" May 16 16:10:01.197518 containerd[1531]: time="2025-05-16T16:10:01.197435321Z" level=info msg="CreateContainer within sandbox \"6f7b9c6e78b848c68dd4c237dfd83d838fa26032bd7f482289ef849ba8480ae5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0561da43c4e024bc1a15d6aa050bdb3275598b35fce6ae5d8f1619403814b78d\"" May 16 16:10:01.203810 containerd[1531]: time="2025-05-16T16:10:01.203774909Z" level=info msg="StartContainer for \"0561da43c4e024bc1a15d6aa050bdb3275598b35fce6ae5d8f1619403814b78d\"" May 16 16:10:01.207579 containerd[1531]: time="2025-05-16T16:10:01.207531303Z" level=info msg="connecting to shim 0561da43c4e024bc1a15d6aa050bdb3275598b35fce6ae5d8f1619403814b78d" address="unix:///run/containerd/s/8359c9f257d0081c34ae0861974c7007ed8429fb0961b96ff704a09c2c2b6860" protocol=ttrpc version=3 May 16 
16:10:01.232699 systemd[1]: Started cri-containerd-0561da43c4e024bc1a15d6aa050bdb3275598b35fce6ae5d8f1619403814b78d.scope - libcontainer container 0561da43c4e024bc1a15d6aa050bdb3275598b35fce6ae5d8f1619403814b78d. May 16 16:10:01.262136 systemd[1]: cri-containerd-0561da43c4e024bc1a15d6aa050bdb3275598b35fce6ae5d8f1619403814b78d.scope: Deactivated successfully. May 16 16:10:01.267684 containerd[1531]: time="2025-05-16T16:10:01.267612251Z" level=info msg="received exit event container_id:\"0561da43c4e024bc1a15d6aa050bdb3275598b35fce6ae5d8f1619403814b78d\" id:\"0561da43c4e024bc1a15d6aa050bdb3275598b35fce6ae5d8f1619403814b78d\" pid:3274 exited_at:{seconds:1747411801 nanos:265628059}" May 16 16:10:01.267684 containerd[1531]: time="2025-05-16T16:10:01.267676540Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0561da43c4e024bc1a15d6aa050bdb3275598b35fce6ae5d8f1619403814b78d\" id:\"0561da43c4e024bc1a15d6aa050bdb3275598b35fce6ae5d8f1619403814b78d\" pid:3274 exited_at:{seconds:1747411801 nanos:265628059}" May 16 16:10:01.273181 containerd[1531]: time="2025-05-16T16:10:01.273132607Z" level=info msg="StartContainer for \"0561da43c4e024bc1a15d6aa050bdb3275598b35fce6ae5d8f1619403814b78d\" returns successfully" May 16 16:10:01.290658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0561da43c4e024bc1a15d6aa050bdb3275598b35fce6ae5d8f1619403814b78d-rootfs.mount: Deactivated successfully. 
May 16 16:10:02.159682 kubelet[2651]: E0516 16:10:02.159442 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:02.159682 kubelet[2651]: E0516 16:10:02.159511 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:02.163535 containerd[1531]: time="2025-05-16T16:10:02.163486476Z" level=info msg="CreateContainer within sandbox \"6f7b9c6e78b848c68dd4c237dfd83d838fa26032bd7f482289ef849ba8480ae5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 16 16:10:02.193691 kubelet[2651]: I0516 16:10:02.193610 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-2cn6p" podStartSLOduration=2.705573933 podStartE2EDuration="12.19357316s" podCreationTimestamp="2025-05-16 16:09:50 +0000 UTC" firstStartedPulling="2025-05-16 16:09:50.965949818 +0000 UTC m=+6.966383321" lastFinishedPulling="2025-05-16 16:10:00.453949045 +0000 UTC m=+16.454382548" observedRunningTime="2025-05-16 16:10:01.211979352 +0000 UTC m=+17.212412855" watchObservedRunningTime="2025-05-16 16:10:02.19357316 +0000 UTC m=+18.194006743" May 16 16:10:02.199279 containerd[1531]: time="2025-05-16T16:10:02.198702949Z" level=info msg="Container 7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102: CDI devices from CRI Config.CDIDevices: []" May 16 16:10:02.204627 containerd[1531]: time="2025-05-16T16:10:02.204585156Z" level=info msg="CreateContainer within sandbox \"6f7b9c6e78b848c68dd4c237dfd83d838fa26032bd7f482289ef849ba8480ae5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102\"" May 16 16:10:02.205207 containerd[1531]: time="2025-05-16T16:10:02.205178753Z" level=info 
msg="StartContainer for \"7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102\"" May 16 16:10:02.207435 containerd[1531]: time="2025-05-16T16:10:02.207044476Z" level=info msg="connecting to shim 7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102" address="unix:///run/containerd/s/8359c9f257d0081c34ae0861974c7007ed8429fb0961b96ff704a09c2c2b6860" protocol=ttrpc version=3 May 16 16:10:02.233694 systemd[1]: Started cri-containerd-7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102.scope - libcontainer container 7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102. May 16 16:10:02.267275 containerd[1531]: time="2025-05-16T16:10:02.267239686Z" level=info msg="StartContainer for \"7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102\" returns successfully" May 16 16:10:02.378262 containerd[1531]: time="2025-05-16T16:10:02.378200196Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102\" id:\"42d018620103d1ef04a600a7b741f7348460d852437cc169bb4ca55d51e85c01\" pid:3341 exited_at:{seconds:1747411802 nanos:377923520}" May 16 16:10:02.398609 kubelet[2651]: I0516 16:10:02.398572 2651 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 16 16:10:02.451734 systemd[1]: Created slice kubepods-burstable-pod937c1364_2cbe_4799_af83_6d6e74ec6956.slice - libcontainer container kubepods-burstable-pod937c1364_2cbe_4799_af83_6d6e74ec6956.slice. May 16 16:10:02.461507 systemd[1]: Created slice kubepods-burstable-pod2c9f74a3_f2e4_4cab_81fa_c04996004d5b.slice - libcontainer container kubepods-burstable-pod2c9f74a3_f2e4_4cab_81fa_c04996004d5b.slice. 
May 16 16:10:02.490368 kubelet[2651]: I0516 16:10:02.490245 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/937c1364-2cbe-4799-af83-6d6e74ec6956-config-volume\") pod \"coredns-7c65d6cfc9-sk522\" (UID: \"937c1364-2cbe-4799-af83-6d6e74ec6956\") " pod="kube-system/coredns-7c65d6cfc9-sk522" May 16 16:10:02.490368 kubelet[2651]: I0516 16:10:02.490291 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnq92\" (UniqueName: \"kubernetes.io/projected/2c9f74a3-f2e4-4cab-81fa-c04996004d5b-kube-api-access-cnq92\") pod \"coredns-7c65d6cfc9-jn52h\" (UID: \"2c9f74a3-f2e4-4cab-81fa-c04996004d5b\") " pod="kube-system/coredns-7c65d6cfc9-jn52h" May 16 16:10:02.490368 kubelet[2651]: I0516 16:10:02.490314 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-664p6\" (UniqueName: \"kubernetes.io/projected/937c1364-2cbe-4799-af83-6d6e74ec6956-kube-api-access-664p6\") pod \"coredns-7c65d6cfc9-sk522\" (UID: \"937c1364-2cbe-4799-af83-6d6e74ec6956\") " pod="kube-system/coredns-7c65d6cfc9-sk522" May 16 16:10:02.490368 kubelet[2651]: I0516 16:10:02.490331 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c9f74a3-f2e4-4cab-81fa-c04996004d5b-config-volume\") pod \"coredns-7c65d6cfc9-jn52h\" (UID: \"2c9f74a3-f2e4-4cab-81fa-c04996004d5b\") " pod="kube-system/coredns-7c65d6cfc9-jn52h" May 16 16:10:02.758542 kubelet[2651]: E0516 16:10:02.758419 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:02.759785 containerd[1531]: time="2025-05-16T16:10:02.759743551Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-sk522,Uid:937c1364-2cbe-4799-af83-6d6e74ec6956,Namespace:kube-system,Attempt:0,}" May 16 16:10:02.769705 kubelet[2651]: E0516 16:10:02.769607 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:02.771208 containerd[1531]: time="2025-05-16T16:10:02.771156079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jn52h,Uid:2c9f74a3-f2e4-4cab-81fa-c04996004d5b,Namespace:kube-system,Attempt:0,}" May 16 16:10:03.165977 kubelet[2651]: E0516 16:10:03.165922 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:03.180640 kubelet[2651]: I0516 16:10:03.180540 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pgxc7" podStartSLOduration=5.854617487 podStartE2EDuration="13.180516604s" podCreationTimestamp="2025-05-16 16:09:50 +0000 UTC" firstStartedPulling="2025-05-16 16:09:50.879739366 +0000 UTC m=+6.880172829" lastFinishedPulling="2025-05-16 16:09:58.205638443 +0000 UTC m=+14.206071946" observedRunningTime="2025-05-16 16:10:03.179876124 +0000 UTC m=+19.180309627" watchObservedRunningTime="2025-05-16 16:10:03.180516604 +0000 UTC m=+19.180950507" May 16 16:10:04.167212 kubelet[2651]: E0516 16:10:04.167156 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:04.536876 systemd-networkd[1440]: cilium_host: Link UP May 16 16:10:04.537115 systemd-networkd[1440]: cilium_net: Link UP May 16 16:10:04.537269 systemd-networkd[1440]: cilium_net: Gained carrier May 16 16:10:04.537378 systemd-networkd[1440]: cilium_host: Gained carrier May 16 16:10:04.578565 
systemd-networkd[1440]: cilium_host: Gained IPv6LL May 16 16:10:04.638964 systemd-networkd[1440]: cilium_vxlan: Link UP May 16 16:10:04.638979 systemd-networkd[1440]: cilium_vxlan: Gained carrier May 16 16:10:04.939076 kernel: NET: Registered PF_ALG protocol family May 16 16:10:05.171282 kubelet[2651]: E0516 16:10:05.171234 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:05.260774 systemd-networkd[1440]: cilium_net: Gained IPv6LL May 16 16:10:05.501428 systemd-networkd[1440]: lxc_health: Link UP May 16 16:10:05.502195 systemd-networkd[1440]: lxc_health: Gained carrier May 16 16:10:05.836705 systemd-networkd[1440]: cilium_vxlan: Gained IPv6LL May 16 16:10:05.859655 systemd-networkd[1440]: lxc763941cb4f99: Link UP May 16 16:10:05.867495 kernel: eth0: renamed from tmp3fd8b May 16 16:10:05.874848 kernel: eth0: renamed from tmpea67d May 16 16:10:05.876652 systemd-networkd[1440]: lxc8b9f7e0b1865: Link UP May 16 16:10:05.877049 systemd-networkd[1440]: lxc8b9f7e0b1865: Gained carrier May 16 16:10:05.877190 systemd-networkd[1440]: lxc763941cb4f99: Gained carrier May 16 16:10:06.811059 kubelet[2651]: E0516 16:10:06.811000 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:06.988723 systemd-networkd[1440]: lxc_health: Gained IPv6LL May 16 16:10:07.116740 systemd-networkd[1440]: lxc8b9f7e0b1865: Gained IPv6LL May 16 16:10:07.308620 systemd-networkd[1440]: lxc763941cb4f99: Gained IPv6LL May 16 16:10:09.424488 containerd[1531]: time="2025-05-16T16:10:09.424347669Z" level=info msg="connecting to shim ea67def393968ef2e5e43c9100351a70962faa315b43d081f47da3f6349ce84e" address="unix:///run/containerd/s/c4f2b4ccd8f0771f8171daf160efa0e39ec8df13e4b454b003e318ae35f4dd6d" namespace=k8s.io protocol=ttrpc version=3 
May 16 16:10:09.424488 containerd[1531]: time="2025-05-16T16:10:09.424439358Z" level=info msg="connecting to shim 3fd8bdd40d028993ec19fd45d0be42b2d45da13ec94bbfceb9f7c5f5994b91a4" address="unix:///run/containerd/s/25b1283e027c2f05f56b55c4618a8ced612d5126da9d214f14cbe01a1bd808df" namespace=k8s.io protocol=ttrpc version=3 May 16 16:10:09.445670 systemd[1]: Started cri-containerd-ea67def393968ef2e5e43c9100351a70962faa315b43d081f47da3f6349ce84e.scope - libcontainer container ea67def393968ef2e5e43c9100351a70962faa315b43d081f47da3f6349ce84e. May 16 16:10:09.448872 systemd[1]: Started cri-containerd-3fd8bdd40d028993ec19fd45d0be42b2d45da13ec94bbfceb9f7c5f5994b91a4.scope - libcontainer container 3fd8bdd40d028993ec19fd45d0be42b2d45da13ec94bbfceb9f7c5f5994b91a4. May 16 16:10:09.459444 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 16:10:09.461597 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 16:10:09.484226 containerd[1531]: time="2025-05-16T16:10:09.484181258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sk522,Uid:937c1364-2cbe-4799-af83-6d6e74ec6956,Namespace:kube-system,Attempt:0,} returns sandbox id \"3fd8bdd40d028993ec19fd45d0be42b2d45da13ec94bbfceb9f7c5f5994b91a4\"" May 16 16:10:09.485501 kubelet[2651]: E0516 16:10:09.485129 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:09.488198 containerd[1531]: time="2025-05-16T16:10:09.488162715Z" level=info msg="CreateContainer within sandbox \"3fd8bdd40d028993ec19fd45d0be42b2d45da13ec94bbfceb9f7c5f5994b91a4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 16:10:09.489724 containerd[1531]: time="2025-05-16T16:10:09.489679819Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-jn52h,Uid:2c9f74a3-f2e4-4cab-81fa-c04996004d5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea67def393968ef2e5e43c9100351a70962faa315b43d081f47da3f6349ce84e\"" May 16 16:10:09.491094 kubelet[2651]: E0516 16:10:09.491076 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:09.492790 containerd[1531]: time="2025-05-16T16:10:09.492754911Z" level=info msg="CreateContainer within sandbox \"ea67def393968ef2e5e43c9100351a70962faa315b43d081f47da3f6349ce84e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 16:10:09.497933 containerd[1531]: time="2025-05-16T16:10:09.497885517Z" level=info msg="Container 34c6b9ad6dfe5e7c47270c95ab8e80efe41ea7b697852185cecf94ac8960ca69: CDI devices from CRI Config.CDIDevices: []" May 16 16:10:09.504081 containerd[1531]: time="2025-05-16T16:10:09.503551814Z" level=info msg="Container 5379719c1c26ba33d79982ba1c2cdf6ec0e9e6402823610d22f0dbc14dfe7ca9: CDI devices from CRI Config.CDIDevices: []" May 16 16:10:09.514793 containerd[1531]: time="2025-05-16T16:10:09.514756715Z" level=info msg="CreateContainer within sandbox \"3fd8bdd40d028993ec19fd45d0be42b2d45da13ec94bbfceb9f7c5f5994b91a4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"34c6b9ad6dfe5e7c47270c95ab8e80efe41ea7b697852185cecf94ac8960ca69\"" May 16 16:10:09.515444 containerd[1531]: time="2025-05-16T16:10:09.515414057Z" level=info msg="StartContainer for \"34c6b9ad6dfe5e7c47270c95ab8e80efe41ea7b697852185cecf94ac8960ca69\"" May 16 16:10:09.516273 containerd[1531]: time="2025-05-16T16:10:09.516244576Z" level=info msg="connecting to shim 34c6b9ad6dfe5e7c47270c95ab8e80efe41ea7b697852185cecf94ac8960ca69" address="unix:///run/containerd/s/25b1283e027c2f05f56b55c4618a8ced612d5126da9d214f14cbe01a1bd808df" protocol=ttrpc version=3 May 16 16:10:09.517930 containerd[1531]: 
time="2025-05-16T16:10:09.517838367Z" level=info msg="CreateContainer within sandbox \"ea67def393968ef2e5e43c9100351a70962faa315b43d081f47da3f6349ce84e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5379719c1c26ba33d79982ba1c2cdf6ec0e9e6402823610d22f0dbc14dfe7ca9\"" May 16 16:10:09.518737 containerd[1531]: time="2025-05-16T16:10:09.518708650Z" level=info msg="StartContainer for \"5379719c1c26ba33d79982ba1c2cdf6ec0e9e6402823610d22f0dbc14dfe7ca9\"" May 16 16:10:09.520454 containerd[1531]: time="2025-05-16T16:10:09.520419692Z" level=info msg="connecting to shim 5379719c1c26ba33d79982ba1c2cdf6ec0e9e6402823610d22f0dbc14dfe7ca9" address="unix:///run/containerd/s/c4f2b4ccd8f0771f8171daf160efa0e39ec8df13e4b454b003e318ae35f4dd6d" protocol=ttrpc version=3 May 16 16:10:09.536647 systemd[1]: Started cri-containerd-34c6b9ad6dfe5e7c47270c95ab8e80efe41ea7b697852185cecf94ac8960ca69.scope - libcontainer container 34c6b9ad6dfe5e7c47270c95ab8e80efe41ea7b697852185cecf94ac8960ca69. May 16 16:10:09.539511 systemd[1]: Started cri-containerd-5379719c1c26ba33d79982ba1c2cdf6ec0e9e6402823610d22f0dbc14dfe7ca9.scope - libcontainer container 5379719c1c26ba33d79982ba1c2cdf6ec0e9e6402823610d22f0dbc14dfe7ca9. 
May 16 16:10:09.581384 containerd[1531]: time="2025-05-16T16:10:09.579244025Z" level=info msg="StartContainer for \"5379719c1c26ba33d79982ba1c2cdf6ec0e9e6402823610d22f0dbc14dfe7ca9\" returns successfully" May 16 16:10:09.581729 containerd[1531]: time="2025-05-16T16:10:09.581690657Z" level=info msg="StartContainer for \"34c6b9ad6dfe5e7c47270c95ab8e80efe41ea7b697852185cecf94ac8960ca69\" returns successfully" May 16 16:10:10.188166 kubelet[2651]: E0516 16:10:10.187656 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:10.192692 kubelet[2651]: E0516 16:10:10.192600 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:10.210005 kubelet[2651]: I0516 16:10:10.209944 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-jn52h" podStartSLOduration=20.209929805 podStartE2EDuration="20.209929805s" podCreationTimestamp="2025-05-16 16:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:10:10.209589414 +0000 UTC m=+26.210022917" watchObservedRunningTime="2025-05-16 16:10:10.209929805 +0000 UTC m=+26.210363308" May 16 16:10:10.243122 kubelet[2651]: I0516 16:10:10.242914 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-sk522" podStartSLOduration=20.242894759 podStartE2EDuration="20.242894759s" podCreationTimestamp="2025-05-16 16:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:10:10.242637176 +0000 UTC m=+26.243070679" watchObservedRunningTime="2025-05-16 16:10:10.242894759 +0000 UTC 
m=+26.243328262" May 16 16:10:12.759811 kubelet[2651]: E0516 16:10:12.759781 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:12.770666 kubelet[2651]: E0516 16:10:12.770218 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:12.917700 systemd[1]: Started sshd@7-10.0.0.48:22-10.0.0.1:32792.service - OpenSSH per-connection server daemon (10.0.0.1:32792). May 16 16:10:12.971161 sshd[3998]: Accepted publickey for core from 10.0.0.1 port 32792 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:10:12.972414 sshd-session[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:10:12.976829 systemd-logind[1510]: New session 8 of user core. May 16 16:10:12.990859 systemd[1]: Started session-8.scope - Session 8 of User core. May 16 16:10:13.112946 sshd[4000]: Connection closed by 10.0.0.1 port 32792 May 16 16:10:13.113279 sshd-session[3998]: pam_unix(sshd:session): session closed for user core May 16 16:10:13.116116 systemd[1]: sshd@7-10.0.0.48:22-10.0.0.1:32792.service: Deactivated successfully. May 16 16:10:13.119092 systemd[1]: session-8.scope: Deactivated successfully. May 16 16:10:13.120538 systemd-logind[1510]: Session 8 logged out. Waiting for processes to exit. May 16 16:10:13.122255 systemd-logind[1510]: Removed session 8. 
May 16 16:10:13.194056 kubelet[2651]: E0516 16:10:13.193714 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:13.194975 kubelet[2651]: E0516 16:10:13.194948 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:15.728006 kubelet[2651]: I0516 16:10:15.727963 2651 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 16:10:15.728619 kubelet[2651]: E0516 16:10:15.728395 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:16.196779 kubelet[2651]: E0516 16:10:16.196695 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:18.130053 systemd[1]: Started sshd@8-10.0.0.48:22-10.0.0.1:32806.service - OpenSSH per-connection server daemon (10.0.0.1:32806). May 16 16:10:18.190805 sshd[4018]: Accepted publickey for core from 10.0.0.1 port 32806 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:10:18.191860 sshd-session[4018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:10:18.196893 systemd-logind[1510]: New session 9 of user core. May 16 16:10:18.207668 systemd[1]: Started session-9.scope - Session 9 of User core. May 16 16:10:18.323144 sshd[4020]: Connection closed by 10.0.0.1 port 32806 May 16 16:10:18.323445 sshd-session[4018]: pam_unix(sshd:session): session closed for user core May 16 16:10:18.326771 systemd-logind[1510]: Session 9 logged out. Waiting for processes to exit. 
May 16 16:10:18.326988 systemd[1]: sshd@8-10.0.0.48:22-10.0.0.1:32806.service: Deactivated successfully. May 16 16:10:18.328531 systemd[1]: session-9.scope: Deactivated successfully. May 16 16:10:18.330017 systemd-logind[1510]: Removed session 9. May 16 16:10:23.338615 systemd[1]: Started sshd@9-10.0.0.48:22-10.0.0.1:56214.service - OpenSSH per-connection server daemon (10.0.0.1:56214). May 16 16:10:23.406318 sshd[4039]: Accepted publickey for core from 10.0.0.1 port 56214 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:10:23.407439 sshd-session[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:10:23.411588 systemd-logind[1510]: New session 10 of user core. May 16 16:10:23.422643 systemd[1]: Started session-10.scope - Session 10 of User core. May 16 16:10:23.534513 sshd[4041]: Connection closed by 10.0.0.1 port 56214 May 16 16:10:23.534632 sshd-session[4039]: pam_unix(sshd:session): session closed for user core May 16 16:10:23.549260 systemd[1]: sshd@9-10.0.0.48:22-10.0.0.1:56214.service: Deactivated successfully. May 16 16:10:23.550873 systemd[1]: session-10.scope: Deactivated successfully. May 16 16:10:23.552674 systemd-logind[1510]: Session 10 logged out. Waiting for processes to exit. May 16 16:10:23.554655 systemd[1]: Started sshd@10-10.0.0.48:22-10.0.0.1:56222.service - OpenSSH per-connection server daemon (10.0.0.1:56222). May 16 16:10:23.556171 systemd-logind[1510]: Removed session 10. May 16 16:10:23.611658 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 56222 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:10:23.612734 sshd-session[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:10:23.616992 systemd-logind[1510]: New session 11 of user core. May 16 16:10:23.626680 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 16 16:10:23.780822 sshd[4057]: Connection closed by 10.0.0.1 port 56222 May 16 16:10:23.781775 sshd-session[4055]: pam_unix(sshd:session): session closed for user core May 16 16:10:23.792143 systemd[1]: sshd@10-10.0.0.48:22-10.0.0.1:56222.service: Deactivated successfully. May 16 16:10:23.796629 systemd[1]: session-11.scope: Deactivated successfully. May 16 16:10:23.801686 systemd-logind[1510]: Session 11 logged out. Waiting for processes to exit. May 16 16:10:23.805771 systemd[1]: Started sshd@11-10.0.0.48:22-10.0.0.1:56238.service - OpenSSH per-connection server daemon (10.0.0.1:56238). May 16 16:10:23.810092 systemd-logind[1510]: Removed session 11. May 16 16:10:23.864344 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 56238 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:10:23.865653 sshd-session[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:10:23.869812 systemd-logind[1510]: New session 12 of user core. May 16 16:10:23.878608 systemd[1]: Started session-12.scope - Session 12 of User core. May 16 16:10:23.995573 sshd[4070]: Connection closed by 10.0.0.1 port 56238 May 16 16:10:23.996086 sshd-session[4068]: pam_unix(sshd:session): session closed for user core May 16 16:10:23.998915 systemd[1]: sshd@11-10.0.0.48:22-10.0.0.1:56238.service: Deactivated successfully. May 16 16:10:24.000582 systemd[1]: session-12.scope: Deactivated successfully. May 16 16:10:24.001813 systemd-logind[1510]: Session 12 logged out. Waiting for processes to exit. May 16 16:10:24.003926 systemd-logind[1510]: Removed session 12. May 16 16:10:29.009923 systemd[1]: Started sshd@12-10.0.0.48:22-10.0.0.1:56242.service - OpenSSH per-connection server daemon (10.0.0.1:56242). 
May 16 16:10:29.068142 sshd[4084]: Accepted publickey for core from 10.0.0.1 port 56242 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:10:29.069560 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:10:29.074203 systemd-logind[1510]: New session 13 of user core. May 16 16:10:29.089688 systemd[1]: Started session-13.scope - Session 13 of User core. May 16 16:10:29.201939 sshd[4086]: Connection closed by 10.0.0.1 port 56242 May 16 16:10:29.202250 sshd-session[4084]: pam_unix(sshd:session): session closed for user core May 16 16:10:29.205572 systemd-logind[1510]: Session 13 logged out. Waiting for processes to exit. May 16 16:10:29.205660 systemd[1]: sshd@12-10.0.0.48:22-10.0.0.1:56242.service: Deactivated successfully. May 16 16:10:29.207210 systemd[1]: session-13.scope: Deactivated successfully. May 16 16:10:29.209358 systemd-logind[1510]: Removed session 13. May 16 16:10:34.220713 systemd[1]: Started sshd@13-10.0.0.48:22-10.0.0.1:46784.service - OpenSSH per-connection server daemon (10.0.0.1:46784). May 16 16:10:34.273061 sshd[4100]: Accepted publickey for core from 10.0.0.1 port 46784 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:10:34.274358 sshd-session[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:10:34.278959 systemd-logind[1510]: New session 14 of user core. May 16 16:10:34.289670 systemd[1]: Started session-14.scope - Session 14 of User core. May 16 16:10:34.403954 sshd[4102]: Connection closed by 10.0.0.1 port 46784 May 16 16:10:34.404700 sshd-session[4100]: pam_unix(sshd:session): session closed for user core May 16 16:10:34.414275 systemd[1]: sshd@13-10.0.0.48:22-10.0.0.1:46784.service: Deactivated successfully. May 16 16:10:34.416082 systemd[1]: session-14.scope: Deactivated successfully. May 16 16:10:34.418144 systemd-logind[1510]: Session 14 logged out. Waiting for processes to exit. 
May 16 16:10:34.420622 systemd[1]: Started sshd@14-10.0.0.48:22-10.0.0.1:46792.service - OpenSSH per-connection server daemon (10.0.0.1:46792). May 16 16:10:34.421228 systemd-logind[1510]: Removed session 14. May 16 16:10:34.483152 sshd[4115]: Accepted publickey for core from 10.0.0.1 port 46792 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:10:34.484727 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:10:34.489351 systemd-logind[1510]: New session 15 of user core. May 16 16:10:34.504637 systemd[1]: Started session-15.scope - Session 15 of User core. May 16 16:10:34.701023 sshd[4117]: Connection closed by 10.0.0.1 port 46792 May 16 16:10:34.701714 sshd-session[4115]: pam_unix(sshd:session): session closed for user core May 16 16:10:34.713763 systemd[1]: sshd@14-10.0.0.48:22-10.0.0.1:46792.service: Deactivated successfully. May 16 16:10:34.715300 systemd[1]: session-15.scope: Deactivated successfully. May 16 16:10:34.715959 systemd-logind[1510]: Session 15 logged out. Waiting for processes to exit. May 16 16:10:34.718327 systemd[1]: Started sshd@15-10.0.0.48:22-10.0.0.1:46806.service - OpenSSH per-connection server daemon (10.0.0.1:46806). May 16 16:10:34.719000 systemd-logind[1510]: Removed session 15. May 16 16:10:34.766540 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 46806 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:10:34.767696 sshd-session[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:10:34.771504 systemd-logind[1510]: New session 16 of user core. May 16 16:10:34.780714 systemd[1]: Started session-16.scope - Session 16 of User core. 
May 16 16:10:36.067563 sshd[4131]: Connection closed by 10.0.0.1 port 46806 May 16 16:10:36.068552 sshd-session[4129]: pam_unix(sshd:session): session closed for user core May 16 16:10:36.080349 systemd[1]: sshd@15-10.0.0.48:22-10.0.0.1:46806.service: Deactivated successfully. May 16 16:10:36.082879 systemd[1]: session-16.scope: Deactivated successfully. May 16 16:10:36.084979 systemd-logind[1510]: Session 16 logged out. Waiting for processes to exit. May 16 16:10:36.089744 systemd[1]: Started sshd@16-10.0.0.48:22-10.0.0.1:46816.service - OpenSSH per-connection server daemon (10.0.0.1:46816). May 16 16:10:36.093138 systemd-logind[1510]: Removed session 16. May 16 16:10:36.137920 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 46816 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:10:36.139113 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:10:36.142941 systemd-logind[1510]: New session 17 of user core. May 16 16:10:36.157632 systemd[1]: Started session-17.scope - Session 17 of User core. May 16 16:10:36.373152 sshd[4153]: Connection closed by 10.0.0.1 port 46816 May 16 16:10:36.373832 sshd-session[4151]: pam_unix(sshd:session): session closed for user core May 16 16:10:36.384765 systemd[1]: sshd@16-10.0.0.48:22-10.0.0.1:46816.service: Deactivated successfully. May 16 16:10:36.386448 systemd[1]: session-17.scope: Deactivated successfully. May 16 16:10:36.387223 systemd-logind[1510]: Session 17 logged out. Waiting for processes to exit. May 16 16:10:36.389923 systemd[1]: Started sshd@17-10.0.0.48:22-10.0.0.1:46826.service - OpenSSH per-connection server daemon (10.0.0.1:46826). May 16 16:10:36.390644 systemd-logind[1510]: Removed session 17. 
May 16 16:10:36.447491 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 46826 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:10:36.447374 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:10:36.451636 systemd-logind[1510]: New session 18 of user core. May 16 16:10:36.465607 systemd[1]: Started session-18.scope - Session 18 of User core. May 16 16:10:36.575699 sshd[4166]: Connection closed by 10.0.0.1 port 46826 May 16 16:10:36.576024 sshd-session[4164]: pam_unix(sshd:session): session closed for user core May 16 16:10:36.579364 systemd[1]: sshd@17-10.0.0.48:22-10.0.0.1:46826.service: Deactivated successfully. May 16 16:10:36.581064 systemd[1]: session-18.scope: Deactivated successfully. May 16 16:10:36.581730 systemd-logind[1510]: Session 18 logged out. Waiting for processes to exit. May 16 16:10:36.583061 systemd-logind[1510]: Removed session 18. May 16 16:10:41.587622 systemd[1]: Started sshd@18-10.0.0.48:22-10.0.0.1:46828.service - OpenSSH per-connection server daemon (10.0.0.1:46828). May 16 16:10:41.636940 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 46828 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:10:41.638032 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:10:41.642298 systemd-logind[1510]: New session 19 of user core. May 16 16:10:41.645611 systemd[1]: Started session-19.scope - Session 19 of User core. May 16 16:10:41.755131 sshd[4184]: Connection closed by 10.0.0.1 port 46828 May 16 16:10:41.755732 sshd-session[4182]: pam_unix(sshd:session): session closed for user core May 16 16:10:41.759218 systemd[1]: sshd@18-10.0.0.48:22-10.0.0.1:46828.service: Deactivated successfully. May 16 16:10:41.761918 systemd[1]: session-19.scope: Deactivated successfully. May 16 16:10:41.762644 systemd-logind[1510]: Session 19 logged out. Waiting for processes to exit. 
May 16 16:10:41.764012 systemd-logind[1510]: Removed session 19. May 16 16:10:46.772333 systemd[1]: Started sshd@19-10.0.0.48:22-10.0.0.1:33758.service - OpenSSH per-connection server daemon (10.0.0.1:33758). May 16 16:10:46.833837 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 33758 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:10:46.835231 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:10:46.839346 systemd-logind[1510]: New session 20 of user core. May 16 16:10:46.849646 systemd[1]: Started session-20.scope - Session 20 of User core. May 16 16:10:46.956918 sshd[4201]: Connection closed by 10.0.0.1 port 33758 May 16 16:10:46.957037 sshd-session[4199]: pam_unix(sshd:session): session closed for user core May 16 16:10:46.960714 systemd[1]: sshd@19-10.0.0.48:22-10.0.0.1:33758.service: Deactivated successfully. May 16 16:10:46.962989 systemd[1]: session-20.scope: Deactivated successfully. May 16 16:10:46.963831 systemd-logind[1510]: Session 20 logged out. Waiting for processes to exit. May 16 16:10:46.966017 systemd-logind[1510]: Removed session 20. May 16 16:10:51.974186 systemd[1]: Started sshd@20-10.0.0.48:22-10.0.0.1:33768.service - OpenSSH per-connection server daemon (10.0.0.1:33768). May 16 16:10:52.034428 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 33768 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:10:52.032274 sshd-session[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:10:52.037550 systemd-logind[1510]: New session 21 of user core. May 16 16:10:52.046618 systemd[1]: Started session-21.scope - Session 21 of User core. 
May 16 16:10:52.163932 sshd[4218]: Connection closed by 10.0.0.1 port 33768 May 16 16:10:52.164463 sshd-session[4216]: pam_unix(sshd:session): session closed for user core May 16 16:10:52.174587 systemd[1]: sshd@20-10.0.0.48:22-10.0.0.1:33768.service: Deactivated successfully. May 16 16:10:52.176650 systemd[1]: session-21.scope: Deactivated successfully. May 16 16:10:52.178694 systemd-logind[1510]: Session 21 logged out. Waiting for processes to exit. May 16 16:10:52.180219 systemd[1]: Started sshd@21-10.0.0.48:22-10.0.0.1:33774.service - OpenSSH per-connection server daemon (10.0.0.1:33774). May 16 16:10:52.184299 systemd-logind[1510]: Removed session 21. May 16 16:10:52.259962 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 33774 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:10:52.260804 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:10:52.264677 systemd-logind[1510]: New session 22 of user core. May 16 16:10:52.279655 systemd[1]: Started session-22.scope - Session 22 of User core. May 16 16:10:54.260921 containerd[1531]: time="2025-05-16T16:10:54.260873761Z" level=info msg="StopContainer for \"933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b\" with timeout 30 (s)" May 16 16:10:54.261273 containerd[1531]: time="2025-05-16T16:10:54.261256591Z" level=info msg="Stop container \"933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b\" with signal terminated" May 16 16:10:54.274423 systemd[1]: cri-containerd-933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b.scope: Deactivated successfully. 
May 16 16:10:54.276360 containerd[1531]: time="2025-05-16T16:10:54.276329227Z" level=info msg="received exit event container_id:\"933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b\" id:\"933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b\" pid:3239 exited_at:{seconds:1747411854 nanos:276062114}" May 16 16:10:54.276594 containerd[1531]: time="2025-05-16T16:10:54.276418025Z" level=info msg="TaskExit event in podsandbox handler container_id:\"933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b\" id:\"933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b\" pid:3239 exited_at:{seconds:1747411854 nanos:276062114}" May 16 16:10:54.296511 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b-rootfs.mount: Deactivated successfully. May 16 16:10:54.309863 containerd[1531]: time="2025-05-16T16:10:54.309782651Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 16:10:54.312007 containerd[1531]: time="2025-05-16T16:10:54.311967913Z" level=info msg="StopContainer for \"933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b\" returns successfully" May 16 16:10:54.315598 containerd[1531]: time="2025-05-16T16:10:54.315557337Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102\" id:\"d61e93d37b041ca6c4378a318490d66ca61b85325677e2e9999146fcc8e7d85f\" pid:4261 exited_at:{seconds:1747411854 nanos:315041110}" May 16 16:10:54.315685 containerd[1531]: time="2025-05-16T16:10:54.315631415Z" level=info msg="StopPodSandbox for \"76f3968e0e78305f1eee73383c76db8665f9b1f0aa09ace9b919e9ef2117a368\"" May 16 16:10:54.317150 containerd[1531]: 
time="2025-05-16T16:10:54.317105175Z" level=info msg="StopContainer for \"7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102\" with timeout 2 (s)" May 16 16:10:54.317435 containerd[1531]: time="2025-05-16T16:10:54.317410367Z" level=info msg="Stop container \"7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102\" with signal terminated" May 16 16:10:54.323925 systemd-networkd[1440]: lxc_health: Link DOWN May 16 16:10:54.323932 systemd-networkd[1440]: lxc_health: Lost carrier May 16 16:10:54.328578 containerd[1531]: time="2025-05-16T16:10:54.328524149Z" level=info msg="Container to stop \"933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 16:10:54.336763 systemd[1]: cri-containerd-76f3968e0e78305f1eee73383c76db8665f9b1f0aa09ace9b919e9ef2117a368.scope: Deactivated successfully. May 16 16:10:54.342969 containerd[1531]: time="2025-05-16T16:10:54.342927004Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76f3968e0e78305f1eee73383c76db8665f9b1f0aa09ace9b919e9ef2117a368\" id:\"76f3968e0e78305f1eee73383c76db8665f9b1f0aa09ace9b919e9ef2117a368\" pid:2870 exit_status:137 exited_at:{seconds:1747411854 nanos:342675970}" May 16 16:10:54.348577 systemd[1]: cri-containerd-7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102.scope: Deactivated successfully. May 16 16:10:54.349559 systemd[1]: cri-containerd-7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102.scope: Consumed 6.467s CPU time, 122.5M memory peak, 2.6M read from disk, 12.9M written to disk. 
May 16 16:10:54.350877 containerd[1531]: time="2025-05-16T16:10:54.350819312Z" level=info msg="received exit event container_id:\"7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102\" id:\"7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102\" pid:3310 exited_at:{seconds:1747411854 nanos:350568039}" May 16 16:10:54.371817 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76f3968e0e78305f1eee73383c76db8665f9b1f0aa09ace9b919e9ef2117a368-rootfs.mount: Deactivated successfully. May 16 16:10:54.377042 containerd[1531]: time="2025-05-16T16:10:54.377007611Z" level=info msg="shim disconnected" id=76f3968e0e78305f1eee73383c76db8665f9b1f0aa09ace9b919e9ef2117a368 namespace=k8s.io May 16 16:10:54.377204 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102-rootfs.mount: Deactivated successfully. May 16 16:10:54.396796 containerd[1531]: time="2025-05-16T16:10:54.377041130Z" level=warning msg="cleaning up after shim disconnected" id=76f3968e0e78305f1eee73383c76db8665f9b1f0aa09ace9b919e9ef2117a368 namespace=k8s.io May 16 16:10:54.396796 containerd[1531]: time="2025-05-16T16:10:54.396787321Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 16:10:54.397013 containerd[1531]: time="2025-05-16T16:10:54.393710323Z" level=info msg="StopContainer for \"7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102\" returns successfully" May 16 16:10:54.397338 containerd[1531]: time="2025-05-16T16:10:54.397310707Z" level=info msg="StopPodSandbox for \"6f7b9c6e78b848c68dd4c237dfd83d838fa26032bd7f482289ef849ba8480ae5\"" May 16 16:10:54.397388 containerd[1531]: time="2025-05-16T16:10:54.397373225Z" level=info msg="Container to stop \"5a71904058e31868cb13d026193b5818f0fad683d6ea889353c0af62cf2033fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 16:10:54.397412 containerd[1531]: time="2025-05-16T16:10:54.397385665Z" level=info 
msg="Container to stop \"91d7547abd697978d8eb7bb27f954cd155221a7efdd234a8e797a1438239bb3b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 16:10:54.397412 containerd[1531]: time="2025-05-16T16:10:54.397394185Z" level=info msg="Container to stop \"7f4afa6eed5caa0905b089927f87e61b085d2a8238abc25c9dc8a86cb47b99ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 16:10:54.397412 containerd[1531]: time="2025-05-16T16:10:54.397402345Z" level=info msg="Container to stop \"0561da43c4e024bc1a15d6aa050bdb3275598b35fce6ae5d8f1619403814b78d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 16:10:54.397492 containerd[1531]: time="2025-05-16T16:10:54.397411584Z" level=info msg="Container to stop \"7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 16:10:54.403804 systemd[1]: cri-containerd-6f7b9c6e78b848c68dd4c237dfd83d838fa26032bd7f482289ef849ba8480ae5.scope: Deactivated successfully. 
May 16 16:10:54.418364 containerd[1531]: time="2025-05-16T16:10:54.418321904Z" level=info msg="received exit event sandbox_id:\"76f3968e0e78305f1eee73383c76db8665f9b1f0aa09ace9b919e9ef2117a368\" exit_status:137 exited_at:{seconds:1747411854 nanos:342675970}" May 16 16:10:54.419691 containerd[1531]: time="2025-05-16T16:10:54.419652109Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102\" id:\"7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102\" pid:3310 exited_at:{seconds:1747411854 nanos:350568039}" May 16 16:10:54.419768 containerd[1531]: time="2025-05-16T16:10:54.419700387Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6f7b9c6e78b848c68dd4c237dfd83d838fa26032bd7f482289ef849ba8480ae5\" id:\"6f7b9c6e78b848c68dd4c237dfd83d838fa26032bd7f482289ef849ba8480ae5\" pid:2807 exit_status:137 exited_at:{seconds:1747411854 nanos:404908064}" May 16 16:10:54.419768 containerd[1531]: time="2025-05-16T16:10:54.419739786Z" level=info msg="TearDown network for sandbox \"76f3968e0e78305f1eee73383c76db8665f9b1f0aa09ace9b919e9ef2117a368\" successfully" May 16 16:10:54.419768 containerd[1531]: time="2025-05-16T16:10:54.419761506Z" level=info msg="StopPodSandbox for \"76f3968e0e78305f1eee73383c76db8665f9b1f0aa09ace9b919e9ef2117a368\" returns successfully" May 16 16:10:54.420971 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-76f3968e0e78305f1eee73383c76db8665f9b1f0aa09ace9b919e9ef2117a368-shm.mount: Deactivated successfully. May 16 16:10:54.430335 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f7b9c6e78b848c68dd4c237dfd83d838fa26032bd7f482289ef849ba8480ae5-rootfs.mount: Deactivated successfully. 
May 16 16:10:54.450558 containerd[1531]: time="2025-05-16T16:10:54.450517482Z" level=info msg="received exit event sandbox_id:\"6f7b9c6e78b848c68dd4c237dfd83d838fa26032bd7f482289ef849ba8480ae5\" exit_status:137 exited_at:{seconds:1747411854 nanos:404908064}" May 16 16:10:54.450750 containerd[1531]: time="2025-05-16T16:10:54.450722396Z" level=info msg="TearDown network for sandbox \"6f7b9c6e78b848c68dd4c237dfd83d838fa26032bd7f482289ef849ba8480ae5\" successfully" May 16 16:10:54.450804 containerd[1531]: time="2025-05-16T16:10:54.450751356Z" level=info msg="StopPodSandbox for \"6f7b9c6e78b848c68dd4c237dfd83d838fa26032bd7f482289ef849ba8480ae5\" returns successfully" May 16 16:10:54.451393 containerd[1531]: time="2025-05-16T16:10:54.451364139Z" level=info msg="shim disconnected" id=6f7b9c6e78b848c68dd4c237dfd83d838fa26032bd7f482289ef849ba8480ae5 namespace=k8s.io May 16 16:10:54.451532 containerd[1531]: time="2025-05-16T16:10:54.451390699Z" level=warning msg="cleaning up after shim disconnected" id=6f7b9c6e78b848c68dd4c237dfd83d838fa26032bd7f482289ef849ba8480ae5 namespace=k8s.io May 16 16:10:54.451532 containerd[1531]: time="2025-05-16T16:10:54.451418338Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 16:10:54.603301 kubelet[2651]: I0516 16:10:54.603253 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-bpf-maps\") pod \"553e625f-6202-4540-bd11-79ca63c5dc58\" (UID: \"553e625f-6202-4540-bd11-79ca63c5dc58\") " May 16 16:10:54.603301 kubelet[2651]: I0516 16:10:54.603305 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-hostproc\") pod \"553e625f-6202-4540-bd11-79ca63c5dc58\" (UID: \"553e625f-6202-4540-bd11-79ca63c5dc58\") " May 16 16:10:54.603720 kubelet[2651]: I0516 16:10:54.603329 2651 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1cb098ea-319d-4585-9fa7-59eeb96761ce-cilium-config-path\") pod \"1cb098ea-319d-4585-9fa7-59eeb96761ce\" (UID: \"1cb098ea-319d-4585-9fa7-59eeb96761ce\") " May 16 16:10:54.603720 kubelet[2651]: I0516 16:10:54.603350 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/553e625f-6202-4540-bd11-79ca63c5dc58-hubble-tls\") pod \"553e625f-6202-4540-bd11-79ca63c5dc58\" (UID: \"553e625f-6202-4540-bd11-79ca63c5dc58\") " May 16 16:10:54.603720 kubelet[2651]: I0516 16:10:54.603366 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/553e625f-6202-4540-bd11-79ca63c5dc58-clustermesh-secrets\") pod \"553e625f-6202-4540-bd11-79ca63c5dc58\" (UID: \"553e625f-6202-4540-bd11-79ca63c5dc58\") " May 16 16:10:54.603720 kubelet[2651]: I0516 16:10:54.603380 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-cilium-cgroup\") pod \"553e625f-6202-4540-bd11-79ca63c5dc58\" (UID: \"553e625f-6202-4540-bd11-79ca63c5dc58\") " May 16 16:10:54.603720 kubelet[2651]: I0516 16:10:54.603396 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/553e625f-6202-4540-bd11-79ca63c5dc58-cilium-config-path\") pod \"553e625f-6202-4540-bd11-79ca63c5dc58\" (UID: \"553e625f-6202-4540-bd11-79ca63c5dc58\") " May 16 16:10:54.603720 kubelet[2651]: I0516 16:10:54.603410 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-xtables-lock\") pod \"553e625f-6202-4540-bd11-79ca63c5dc58\" (UID: 
\"553e625f-6202-4540-bd11-79ca63c5dc58\") " May 16 16:10:54.603851 kubelet[2651]: I0516 16:10:54.603436 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-cni-path\") pod \"553e625f-6202-4540-bd11-79ca63c5dc58\" (UID: \"553e625f-6202-4540-bd11-79ca63c5dc58\") " May 16 16:10:54.603851 kubelet[2651]: I0516 16:10:54.603453 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-cilium-run\") pod \"553e625f-6202-4540-bd11-79ca63c5dc58\" (UID: \"553e625f-6202-4540-bd11-79ca63c5dc58\") " May 16 16:10:54.603851 kubelet[2651]: I0516 16:10:54.603525 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-etc-cni-netd\") pod \"553e625f-6202-4540-bd11-79ca63c5dc58\" (UID: \"553e625f-6202-4540-bd11-79ca63c5dc58\") " May 16 16:10:54.603851 kubelet[2651]: I0516 16:10:54.603543 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-host-proc-sys-net\") pod \"553e625f-6202-4540-bd11-79ca63c5dc58\" (UID: \"553e625f-6202-4540-bd11-79ca63c5dc58\") " May 16 16:10:54.603851 kubelet[2651]: I0516 16:10:54.603557 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-host-proc-sys-kernel\") pod \"553e625f-6202-4540-bd11-79ca63c5dc58\" (UID: \"553e625f-6202-4540-bd11-79ca63c5dc58\") " May 16 16:10:54.603851 kubelet[2651]: I0516 16:10:54.603574 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdxlb\" (UniqueName: 
\"kubernetes.io/projected/553e625f-6202-4540-bd11-79ca63c5dc58-kube-api-access-fdxlb\") pod \"553e625f-6202-4540-bd11-79ca63c5dc58\" (UID: \"553e625f-6202-4540-bd11-79ca63c5dc58\") " May 16 16:10:54.603976 kubelet[2651]: I0516 16:10:54.603589 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-lib-modules\") pod \"553e625f-6202-4540-bd11-79ca63c5dc58\" (UID: \"553e625f-6202-4540-bd11-79ca63c5dc58\") " May 16 16:10:54.603976 kubelet[2651]: I0516 16:10:54.603605 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nws8k\" (UniqueName: \"kubernetes.io/projected/1cb098ea-319d-4585-9fa7-59eeb96761ce-kube-api-access-nws8k\") pod \"1cb098ea-319d-4585-9fa7-59eeb96761ce\" (UID: \"1cb098ea-319d-4585-9fa7-59eeb96761ce\") " May 16 16:10:54.608487 kubelet[2651]: I0516 16:10:54.606906 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-hostproc" (OuterVolumeSpecName: "hostproc") pod "553e625f-6202-4540-bd11-79ca63c5dc58" (UID: "553e625f-6202-4540-bd11-79ca63c5dc58"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 16:10:54.608487 kubelet[2651]: I0516 16:10:54.606976 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "553e625f-6202-4540-bd11-79ca63c5dc58" (UID: "553e625f-6202-4540-bd11-79ca63c5dc58"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 16:10:54.608487 kubelet[2651]: I0516 16:10:54.606996 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "553e625f-6202-4540-bd11-79ca63c5dc58" (UID: "553e625f-6202-4540-bd11-79ca63c5dc58"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 16:10:54.608487 kubelet[2651]: I0516 16:10:54.607013 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "553e625f-6202-4540-bd11-79ca63c5dc58" (UID: "553e625f-6202-4540-bd11-79ca63c5dc58"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 16:10:54.608487 kubelet[2651]: I0516 16:10:54.607163 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-cni-path" (OuterVolumeSpecName: "cni-path") pod "553e625f-6202-4540-bd11-79ca63c5dc58" (UID: "553e625f-6202-4540-bd11-79ca63c5dc58"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 16:10:54.608657 kubelet[2651]: I0516 16:10:54.607454 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "553e625f-6202-4540-bd11-79ca63c5dc58" (UID: "553e625f-6202-4540-bd11-79ca63c5dc58"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 16:10:54.608657 kubelet[2651]: I0516 16:10:54.607531 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "553e625f-6202-4540-bd11-79ca63c5dc58" (UID: "553e625f-6202-4540-bd11-79ca63c5dc58"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 16:10:54.608657 kubelet[2651]: I0516 16:10:54.607552 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "553e625f-6202-4540-bd11-79ca63c5dc58" (UID: "553e625f-6202-4540-bd11-79ca63c5dc58"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 16:10:54.608657 kubelet[2651]: I0516 16:10:54.607569 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "553e625f-6202-4540-bd11-79ca63c5dc58" (UID: "553e625f-6202-4540-bd11-79ca63c5dc58"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 16:10:54.609780 kubelet[2651]: I0516 16:10:54.608910 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cb098ea-319d-4585-9fa7-59eeb96761ce-kube-api-access-nws8k" (OuterVolumeSpecName: "kube-api-access-nws8k") pod "1cb098ea-319d-4585-9fa7-59eeb96761ce" (UID: "1cb098ea-319d-4585-9fa7-59eeb96761ce"). InnerVolumeSpecName "kube-api-access-nws8k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 16:10:54.609952 kubelet[2651]: I0516 16:10:54.609906 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/553e625f-6202-4540-bd11-79ca63c5dc58-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "553e625f-6202-4540-bd11-79ca63c5dc58" (UID: "553e625f-6202-4540-bd11-79ca63c5dc58"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 16:10:54.610293 kubelet[2651]: I0516 16:10:54.610269 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cb098ea-319d-4585-9fa7-59eeb96761ce-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1cb098ea-319d-4585-9fa7-59eeb96761ce" (UID: "1cb098ea-319d-4585-9fa7-59eeb96761ce"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 16 16:10:54.610341 kubelet[2651]: I0516 16:10:54.610313 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "553e625f-6202-4540-bd11-79ca63c5dc58" (UID: "553e625f-6202-4540-bd11-79ca63c5dc58"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 16:10:54.611539 kubelet[2651]: I0516 16:10:54.610879 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/553e625f-6202-4540-bd11-79ca63c5dc58-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "553e625f-6202-4540-bd11-79ca63c5dc58" (UID: "553e625f-6202-4540-bd11-79ca63c5dc58"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 16 16:10:54.611539 kubelet[2651]: I0516 16:10:54.610986 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/553e625f-6202-4540-bd11-79ca63c5dc58-kube-api-access-fdxlb" (OuterVolumeSpecName: "kube-api-access-fdxlb") pod "553e625f-6202-4540-bd11-79ca63c5dc58" (UID: "553e625f-6202-4540-bd11-79ca63c5dc58"). InnerVolumeSpecName "kube-api-access-fdxlb". PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 16:10:54.612115 kubelet[2651]: I0516 16:10:54.612078 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/553e625f-6202-4540-bd11-79ca63c5dc58-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "553e625f-6202-4540-bd11-79ca63c5dc58" (UID: "553e625f-6202-4540-bd11-79ca63c5dc58"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 16 16:10:54.704392 kubelet[2651]: I0516 16:10:54.704347 2651 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/553e625f-6202-4540-bd11-79ca63c5dc58-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 16:10:54.704392 kubelet[2651]: I0516 16:10:54.704381 2651 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 16 16:10:54.704392 kubelet[2651]: I0516 16:10:54.704392 2651 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-cilium-run\") on node \"localhost\" DevicePath \"\"" May 16 16:10:54.704392 kubelet[2651]: I0516 16:10:54.704401 2651 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-cni-path\") on 
node \"localhost\" DevicePath \"\"" May 16 16:10:54.704604 kubelet[2651]: I0516 16:10:54.704410 2651 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 16 16:10:54.704604 kubelet[2651]: I0516 16:10:54.704418 2651 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 16 16:10:54.704604 kubelet[2651]: I0516 16:10:54.704426 2651 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 16 16:10:54.704604 kubelet[2651]: I0516 16:10:54.704433 2651 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdxlb\" (UniqueName: \"kubernetes.io/projected/553e625f-6202-4540-bd11-79ca63c5dc58-kube-api-access-fdxlb\") on node \"localhost\" DevicePath \"\"" May 16 16:10:54.704604 kubelet[2651]: I0516 16:10:54.704441 2651 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-lib-modules\") on node \"localhost\" DevicePath \"\"" May 16 16:10:54.704604 kubelet[2651]: I0516 16:10:54.704450 2651 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nws8k\" (UniqueName: \"kubernetes.io/projected/1cb098ea-319d-4585-9fa7-59eeb96761ce-kube-api-access-nws8k\") on node \"localhost\" DevicePath \"\"" May 16 16:10:54.704604 kubelet[2651]: I0516 16:10:54.704458 2651 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-hostproc\") on node \"localhost\" DevicePath \"\"" May 16 16:10:54.704604 kubelet[2651]: 
I0516 16:10:54.704487 2651 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1cb098ea-319d-4585-9fa7-59eeb96761ce-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 16:10:54.704756 kubelet[2651]: I0516 16:10:54.704496 2651 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 16 16:10:54.704756 kubelet[2651]: I0516 16:10:54.704504 2651 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/553e625f-6202-4540-bd11-79ca63c5dc58-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 16 16:10:54.704756 kubelet[2651]: I0516 16:10:54.704512 2651 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/553e625f-6202-4540-bd11-79ca63c5dc58-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 16 16:10:54.704756 kubelet[2651]: I0516 16:10:54.704521 2651 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/553e625f-6202-4540-bd11-79ca63c5dc58-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 16 16:10:55.281992 kubelet[2651]: I0516 16:10:55.281925 2651 scope.go:117] "RemoveContainer" containerID="7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102" May 16 16:10:55.284862 containerd[1531]: time="2025-05-16T16:10:55.284816718Z" level=info msg="RemoveContainer for \"7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102\"" May 16 16:10:55.288942 systemd[1]: Removed slice kubepods-burstable-pod553e625f_6202_4540_bd11_79ca63c5dc58.slice - libcontainer container kubepods-burstable-pod553e625f_6202_4540_bd11_79ca63c5dc58.slice. 
May 16 16:10:55.289055 systemd[1]: kubepods-burstable-pod553e625f_6202_4540_bd11_79ca63c5dc58.slice: Consumed 6.615s CPU time, 122.8M memory peak, 2.6M read from disk, 16.1M written to disk. May 16 16:10:55.293358 systemd[1]: Removed slice kubepods-besteffort-pod1cb098ea_319d_4585_9fa7_59eeb96761ce.slice - libcontainer container kubepods-besteffort-pod1cb098ea_319d_4585_9fa7_59eeb96761ce.slice. May 16 16:10:55.297882 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6f7b9c6e78b848c68dd4c237dfd83d838fa26032bd7f482289ef849ba8480ae5-shm.mount: Deactivated successfully. May 16 16:10:55.297978 systemd[1]: var-lib-kubelet-pods-1cb098ea\x2d319d\x2d4585\x2d9fa7\x2d59eeb96761ce-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnws8k.mount: Deactivated successfully. May 16 16:10:55.298031 systemd[1]: var-lib-kubelet-pods-553e625f\x2d6202\x2d4540\x2dbd11\x2d79ca63c5dc58-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfdxlb.mount: Deactivated successfully. May 16 16:10:55.298602 systemd[1]: var-lib-kubelet-pods-553e625f\x2d6202\x2d4540\x2dbd11\x2d79ca63c5dc58-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 16 16:10:55.298810 systemd[1]: var-lib-kubelet-pods-553e625f\x2d6202\x2d4540\x2dbd11\x2d79ca63c5dc58-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 16 16:10:55.303189 containerd[1531]: time="2025-05-16T16:10:55.303137017Z" level=info msg="RemoveContainer for \"7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102\" returns successfully" May 16 16:10:55.303489 kubelet[2651]: I0516 16:10:55.303458 2651 scope.go:117] "RemoveContainer" containerID="0561da43c4e024bc1a15d6aa050bdb3275598b35fce6ae5d8f1619403814b78d" May 16 16:10:55.306525 containerd[1531]: time="2025-05-16T16:10:55.305022729Z" level=info msg="RemoveContainer for \"0561da43c4e024bc1a15d6aa050bdb3275598b35fce6ae5d8f1619403814b78d\"" May 16 16:10:55.311751 containerd[1531]: time="2025-05-16T16:10:55.311714001Z" level=info msg="RemoveContainer for \"0561da43c4e024bc1a15d6aa050bdb3275598b35fce6ae5d8f1619403814b78d\" returns successfully" May 16 16:10:55.311902 kubelet[2651]: I0516 16:10:55.311877 2651 scope.go:117] "RemoveContainer" containerID="7f4afa6eed5caa0905b089927f87e61b085d2a8238abc25c9dc8a86cb47b99ed" May 16 16:10:55.315638 containerd[1531]: time="2025-05-16T16:10:55.315603983Z" level=info msg="RemoveContainer for \"7f4afa6eed5caa0905b089927f87e61b085d2a8238abc25c9dc8a86cb47b99ed\"" May 16 16:10:55.338434 containerd[1531]: time="2025-05-16T16:10:55.338385010Z" level=info msg="RemoveContainer for \"7f4afa6eed5caa0905b089927f87e61b085d2a8238abc25c9dc8a86cb47b99ed\" returns successfully" May 16 16:10:55.338730 kubelet[2651]: I0516 16:10:55.338686 2651 scope.go:117] "RemoveContainer" containerID="91d7547abd697978d8eb7bb27f954cd155221a7efdd234a8e797a1438239bb3b" May 16 16:10:55.341956 containerd[1531]: time="2025-05-16T16:10:55.341923481Z" level=info msg="RemoveContainer for \"91d7547abd697978d8eb7bb27f954cd155221a7efdd234a8e797a1438239bb3b\"" May 16 16:10:55.345271 containerd[1531]: time="2025-05-16T16:10:55.345175799Z" level=info msg="RemoveContainer for \"91d7547abd697978d8eb7bb27f954cd155221a7efdd234a8e797a1438239bb3b\" returns successfully" May 16 16:10:55.345463 kubelet[2651]: I0516 16:10:55.345445 2651 scope.go:117] 
"RemoveContainer" containerID="5a71904058e31868cb13d026193b5818f0fad683d6ea889353c0af62cf2033fc" May 16 16:10:55.347782 containerd[1531]: time="2025-05-16T16:10:55.347756935Z" level=info msg="RemoveContainer for \"5a71904058e31868cb13d026193b5818f0fad683d6ea889353c0af62cf2033fc\"" May 16 16:10:55.350196 containerd[1531]: time="2025-05-16T16:10:55.350167554Z" level=info msg="RemoveContainer for \"5a71904058e31868cb13d026193b5818f0fad683d6ea889353c0af62cf2033fc\" returns successfully" May 16 16:10:55.350359 kubelet[2651]: I0516 16:10:55.350323 2651 scope.go:117] "RemoveContainer" containerID="7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102" May 16 16:10:55.350610 containerd[1531]: time="2025-05-16T16:10:55.350548624Z" level=error msg="ContainerStatus for \"7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102\": not found" May 16 16:10:55.352425 kubelet[2651]: E0516 16:10:55.352391 2651 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102\": not found" containerID="7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102" May 16 16:10:55.352563 kubelet[2651]: I0516 16:10:55.352438 2651 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102"} err="failed to get container status \"7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e55e113736989236eb78aa015106872f76aec438cfc2bb0ade44b13d2241102\": not found" May 16 16:10:55.352616 kubelet[2651]: I0516 16:10:55.352564 2651 scope.go:117] "RemoveContainer" 
containerID="0561da43c4e024bc1a15d6aa050bdb3275598b35fce6ae5d8f1619403814b78d" May 16 16:10:55.352815 containerd[1531]: time="2025-05-16T16:10:55.352783288Z" level=error msg="ContainerStatus for \"0561da43c4e024bc1a15d6aa050bdb3275598b35fce6ae5d8f1619403814b78d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0561da43c4e024bc1a15d6aa050bdb3275598b35fce6ae5d8f1619403814b78d\": not found" May 16 16:10:55.352998 kubelet[2651]: E0516 16:10:55.352973 2651 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0561da43c4e024bc1a15d6aa050bdb3275598b35fce6ae5d8f1619403814b78d\": not found" containerID="0561da43c4e024bc1a15d6aa050bdb3275598b35fce6ae5d8f1619403814b78d" May 16 16:10:55.353042 kubelet[2651]: I0516 16:10:55.353009 2651 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0561da43c4e024bc1a15d6aa050bdb3275598b35fce6ae5d8f1619403814b78d"} err="failed to get container status \"0561da43c4e024bc1a15d6aa050bdb3275598b35fce6ae5d8f1619403814b78d\": rpc error: code = NotFound desc = an error occurred when try to find container \"0561da43c4e024bc1a15d6aa050bdb3275598b35fce6ae5d8f1619403814b78d\": not found" May 16 16:10:55.353042 kubelet[2651]: I0516 16:10:55.353029 2651 scope.go:117] "RemoveContainer" containerID="7f4afa6eed5caa0905b089927f87e61b085d2a8238abc25c9dc8a86cb47b99ed" May 16 16:10:55.353213 containerd[1531]: time="2025-05-16T16:10:55.353183278Z" level=error msg="ContainerStatus for \"7f4afa6eed5caa0905b089927f87e61b085d2a8238abc25c9dc8a86cb47b99ed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f4afa6eed5caa0905b089927f87e61b085d2a8238abc25c9dc8a86cb47b99ed\": not found" May 16 16:10:55.353295 kubelet[2651]: E0516 16:10:55.353278 2651 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"7f4afa6eed5caa0905b089927f87e61b085d2a8238abc25c9dc8a86cb47b99ed\": not found" containerID="7f4afa6eed5caa0905b089927f87e61b085d2a8238abc25c9dc8a86cb47b99ed" May 16 16:10:55.353334 kubelet[2651]: I0516 16:10:55.353297 2651 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f4afa6eed5caa0905b089927f87e61b085d2a8238abc25c9dc8a86cb47b99ed"} err="failed to get container status \"7f4afa6eed5caa0905b089927f87e61b085d2a8238abc25c9dc8a86cb47b99ed\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f4afa6eed5caa0905b089927f87e61b085d2a8238abc25c9dc8a86cb47b99ed\": not found" May 16 16:10:55.353334 kubelet[2651]: I0516 16:10:55.353311 2651 scope.go:117] "RemoveContainer" containerID="91d7547abd697978d8eb7bb27f954cd155221a7efdd234a8e797a1438239bb3b" May 16 16:10:55.353572 containerd[1531]: time="2025-05-16T16:10:55.353462791Z" level=error msg="ContainerStatus for \"91d7547abd697978d8eb7bb27f954cd155221a7efdd234a8e797a1438239bb3b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"91d7547abd697978d8eb7bb27f954cd155221a7efdd234a8e797a1438239bb3b\": not found" May 16 16:10:55.353773 kubelet[2651]: E0516 16:10:55.353753 2651 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"91d7547abd697978d8eb7bb27f954cd155221a7efdd234a8e797a1438239bb3b\": not found" containerID="91d7547abd697978d8eb7bb27f954cd155221a7efdd234a8e797a1438239bb3b" May 16 16:10:55.353817 kubelet[2651]: I0516 16:10:55.353777 2651 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"91d7547abd697978d8eb7bb27f954cd155221a7efdd234a8e797a1438239bb3b"} err="failed to get container status \"91d7547abd697978d8eb7bb27f954cd155221a7efdd234a8e797a1438239bb3b\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"91d7547abd697978d8eb7bb27f954cd155221a7efdd234a8e797a1438239bb3b\": not found" May 16 16:10:55.353817 kubelet[2651]: I0516 16:10:55.353791 2651 scope.go:117] "RemoveContainer" containerID="5a71904058e31868cb13d026193b5818f0fad683d6ea889353c0af62cf2033fc" May 16 16:10:55.354013 containerd[1531]: time="2025-05-16T16:10:55.353984578Z" level=error msg="ContainerStatus for \"5a71904058e31868cb13d026193b5818f0fad683d6ea889353c0af62cf2033fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5a71904058e31868cb13d026193b5818f0fad683d6ea889353c0af62cf2033fc\": not found" May 16 16:10:55.354116 kubelet[2651]: E0516 16:10:55.354097 2651 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5a71904058e31868cb13d026193b5818f0fad683d6ea889353c0af62cf2033fc\": not found" containerID="5a71904058e31868cb13d026193b5818f0fad683d6ea889353c0af62cf2033fc" May 16 16:10:55.354274 kubelet[2651]: I0516 16:10:55.354122 2651 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5a71904058e31868cb13d026193b5818f0fad683d6ea889353c0af62cf2033fc"} err="failed to get container status \"5a71904058e31868cb13d026193b5818f0fad683d6ea889353c0af62cf2033fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"5a71904058e31868cb13d026193b5818f0fad683d6ea889353c0af62cf2033fc\": not found" May 16 16:10:55.354274 kubelet[2651]: I0516 16:10:55.354147 2651 scope.go:117] "RemoveContainer" containerID="933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b" May 16 16:10:55.355500 containerd[1531]: time="2025-05-16T16:10:55.355439061Z" level=info msg="RemoveContainer for \"933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b\"" May 16 16:10:55.364874 containerd[1531]: time="2025-05-16T16:10:55.364828985Z" level=info msg="RemoveContainer for 
\"933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b\" returns successfully" May 16 16:10:55.365043 kubelet[2651]: I0516 16:10:55.365000 2651 scope.go:117] "RemoveContainer" containerID="933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b" May 16 16:10:55.365242 containerd[1531]: time="2025-05-16T16:10:55.365203296Z" level=error msg="ContainerStatus for \"933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b\": not found" May 16 16:10:55.365340 kubelet[2651]: E0516 16:10:55.365306 2651 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b\": not found" containerID="933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b" May 16 16:10:55.365340 kubelet[2651]: I0516 16:10:55.365326 2651 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b"} err="failed to get container status \"933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b\": rpc error: code = NotFound desc = an error occurred when try to find container \"933c3ecfe22a6b1540ee37179485e7043e3fd4fde2201ecbd16f980cc036441b\": not found" May 16 16:10:56.076166 kubelet[2651]: I0516 16:10:56.076085 2651 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cb098ea-319d-4585-9fa7-59eeb96761ce" path="/var/lib/kubelet/pods/1cb098ea-319d-4585-9fa7-59eeb96761ce/volumes" May 16 16:10:56.076937 kubelet[2651]: I0516 16:10:56.076911 2651 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="553e625f-6202-4540-bd11-79ca63c5dc58" path="/var/lib/kubelet/pods/553e625f-6202-4540-bd11-79ca63c5dc58/volumes" May 16 
16:10:56.214035 sshd[4234]: Connection closed by 10.0.0.1 port 33774 May 16 16:10:56.214674 sshd-session[4232]: pam_unix(sshd:session): session closed for user core May 16 16:10:56.227725 systemd[1]: sshd@21-10.0.0.48:22-10.0.0.1:33774.service: Deactivated successfully. May 16 16:10:56.229769 systemd[1]: session-22.scope: Deactivated successfully. May 16 16:10:56.230571 systemd[1]: session-22.scope: Consumed 1.310s CPU time, 24.8M memory peak. May 16 16:10:56.231127 systemd-logind[1510]: Session 22 logged out. Waiting for processes to exit. May 16 16:10:56.233981 systemd[1]: Started sshd@22-10.0.0.48:22-10.0.0.1:40560.service - OpenSSH per-connection server daemon (10.0.0.1:40560). May 16 16:10:56.234514 systemd-logind[1510]: Removed session 22. May 16 16:10:56.293082 sshd[4385]: Accepted publickey for core from 10.0.0.1 port 40560 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:10:56.294521 sshd-session[4385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:10:56.298699 systemd-logind[1510]: New session 23 of user core. May 16 16:10:56.306610 systemd[1]: Started session-23.scope - Session 23 of User core. May 16 16:10:57.255296 sshd[4387]: Connection closed by 10.0.0.1 port 40560 May 16 16:10:57.255635 sshd-session[4385]: pam_unix(sshd:session): session closed for user core May 16 16:10:57.266245 systemd[1]: sshd@22-10.0.0.48:22-10.0.0.1:40560.service: Deactivated successfully. May 16 16:10:57.269002 systemd[1]: session-23.scope: Deactivated successfully. May 16 16:10:57.269904 systemd-logind[1510]: Session 23 logged out. Waiting for processes to exit. 
May 16 16:10:57.274627 kubelet[2651]: E0516 16:10:57.274581 2651 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="553e625f-6202-4540-bd11-79ca63c5dc58" containerName="mount-bpf-fs" May 16 16:10:57.274627 kubelet[2651]: E0516 16:10:57.274611 2651 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="553e625f-6202-4540-bd11-79ca63c5dc58" containerName="cilium-agent" May 16 16:10:57.274627 kubelet[2651]: E0516 16:10:57.274618 2651 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="553e625f-6202-4540-bd11-79ca63c5dc58" containerName="mount-cgroup" May 16 16:10:57.274627 kubelet[2651]: E0516 16:10:57.274624 2651 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="553e625f-6202-4540-bd11-79ca63c5dc58" containerName="apply-sysctl-overwrites" May 16 16:10:57.274627 kubelet[2651]: E0516 16:10:57.274631 2651 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1cb098ea-319d-4585-9fa7-59eeb96761ce" containerName="cilium-operator" May 16 16:10:57.274627 kubelet[2651]: E0516 16:10:57.274637 2651 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="553e625f-6202-4540-bd11-79ca63c5dc58" containerName="clean-cilium-state" May 16 16:10:57.275013 kubelet[2651]: I0516 16:10:57.274671 2651 memory_manager.go:354] "RemoveStaleState removing state" podUID="553e625f-6202-4540-bd11-79ca63c5dc58" containerName="cilium-agent" May 16 16:10:57.275013 kubelet[2651]: I0516 16:10:57.274677 2651 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb098ea-319d-4585-9fa7-59eeb96761ce" containerName="cilium-operator" May 16 16:10:57.276777 systemd[1]: Started sshd@23-10.0.0.48:22-10.0.0.1:40574.service - OpenSSH per-connection server daemon (10.0.0.1:40574). May 16 16:10:57.279029 systemd-logind[1510]: Removed session 23. 
May 16 16:10:57.299572 systemd[1]: Created slice kubepods-burstable-podc292a767_6cf1_4b9a_8696_e66db00b101c.slice - libcontainer container kubepods-burstable-podc292a767_6cf1_4b9a_8696_e66db00b101c.slice.
May 16 16:10:57.346732 sshd[4399]: Accepted publickey for core from 10.0.0.1 port 40574 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI
May 16 16:10:57.347986 sshd-session[4399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:10:57.353263 systemd-logind[1510]: New session 24 of user core.
May 16 16:10:57.360606 systemd[1]: Started session-24.scope - Session 24 of User core.
May 16 16:10:57.410599 sshd[4401]: Connection closed by 10.0.0.1 port 40574
May 16 16:10:57.411164 sshd-session[4399]: pam_unix(sshd:session): session closed for user core
May 16 16:10:57.417677 kubelet[2651]: I0516 16:10:57.417588 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c292a767-6cf1-4b9a-8696-e66db00b101c-cilium-cgroup\") pod \"cilium-tm24f\" (UID: \"c292a767-6cf1-4b9a-8696-e66db00b101c\") " pod="kube-system/cilium-tm24f"
May 16 16:10:57.417677 kubelet[2651]: I0516 16:10:57.417630 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c292a767-6cf1-4b9a-8696-e66db00b101c-cni-path\") pod \"cilium-tm24f\" (UID: \"c292a767-6cf1-4b9a-8696-e66db00b101c\") " pod="kube-system/cilium-tm24f"
May 16 16:10:57.417677 kubelet[2651]: I0516 16:10:57.417655 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c292a767-6cf1-4b9a-8696-e66db00b101c-host-proc-sys-kernel\") pod \"cilium-tm24f\" (UID: \"c292a767-6cf1-4b9a-8696-e66db00b101c\") " pod="kube-system/cilium-tm24f"
May 16 16:10:57.417786 kubelet[2651]: I0516 16:10:57.417685 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c292a767-6cf1-4b9a-8696-e66db00b101c-xtables-lock\") pod \"cilium-tm24f\" (UID: \"c292a767-6cf1-4b9a-8696-e66db00b101c\") " pod="kube-system/cilium-tm24f"
May 16 16:10:57.417786 kubelet[2651]: I0516 16:10:57.417702 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c292a767-6cf1-4b9a-8696-e66db00b101c-clustermesh-secrets\") pod \"cilium-tm24f\" (UID: \"c292a767-6cf1-4b9a-8696-e66db00b101c\") " pod="kube-system/cilium-tm24f"
May 16 16:10:57.417786 kubelet[2651]: I0516 16:10:57.417719 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c292a767-6cf1-4b9a-8696-e66db00b101c-etc-cni-netd\") pod \"cilium-tm24f\" (UID: \"c292a767-6cf1-4b9a-8696-e66db00b101c\") " pod="kube-system/cilium-tm24f"
May 16 16:10:57.417868 kubelet[2651]: I0516 16:10:57.417795 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c292a767-6cf1-4b9a-8696-e66db00b101c-bpf-maps\") pod \"cilium-tm24f\" (UID: \"c292a767-6cf1-4b9a-8696-e66db00b101c\") " pod="kube-system/cilium-tm24f"
May 16 16:10:57.417868 kubelet[2651]: I0516 16:10:57.417843 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c292a767-6cf1-4b9a-8696-e66db00b101c-hubble-tls\") pod \"cilium-tm24f\" (UID: \"c292a767-6cf1-4b9a-8696-e66db00b101c\") " pod="kube-system/cilium-tm24f"
May 16 16:10:57.417908 kubelet[2651]: I0516 16:10:57.417868 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c292a767-6cf1-4b9a-8696-e66db00b101c-cilium-config-path\") pod \"cilium-tm24f\" (UID: \"c292a767-6cf1-4b9a-8696-e66db00b101c\") " pod="kube-system/cilium-tm24f"
May 16 16:10:57.417929 kubelet[2651]: I0516 16:10:57.417919 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c292a767-6cf1-4b9a-8696-e66db00b101c-cilium-run\") pod \"cilium-tm24f\" (UID: \"c292a767-6cf1-4b9a-8696-e66db00b101c\") " pod="kube-system/cilium-tm24f"
May 16 16:10:57.417974 kubelet[2651]: I0516 16:10:57.417955 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c292a767-6cf1-4b9a-8696-e66db00b101c-lib-modules\") pod \"cilium-tm24f\" (UID: \"c292a767-6cf1-4b9a-8696-e66db00b101c\") " pod="kube-system/cilium-tm24f"
May 16 16:10:57.417999 kubelet[2651]: I0516 16:10:57.417978 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c292a767-6cf1-4b9a-8696-e66db00b101c-cilium-ipsec-secrets\") pod \"cilium-tm24f\" (UID: \"c292a767-6cf1-4b9a-8696-e66db00b101c\") " pod="kube-system/cilium-tm24f"
May 16 16:10:57.417999 kubelet[2651]: I0516 16:10:57.417994 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c292a767-6cf1-4b9a-8696-e66db00b101c-hostproc\") pod \"cilium-tm24f\" (UID: \"c292a767-6cf1-4b9a-8696-e66db00b101c\") " pod="kube-system/cilium-tm24f"
May 16 16:10:57.418041 kubelet[2651]: I0516 16:10:57.418009 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c292a767-6cf1-4b9a-8696-e66db00b101c-host-proc-sys-net\") pod \"cilium-tm24f\" (UID: \"c292a767-6cf1-4b9a-8696-e66db00b101c\") " pod="kube-system/cilium-tm24f"
May 16 16:10:57.418064 kubelet[2651]: I0516 16:10:57.418043 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsbs8\" (UniqueName: \"kubernetes.io/projected/c292a767-6cf1-4b9a-8696-e66db00b101c-kube-api-access-fsbs8\") pod \"cilium-tm24f\" (UID: \"c292a767-6cf1-4b9a-8696-e66db00b101c\") " pod="kube-system/cilium-tm24f"
May 16 16:10:57.422624 systemd[1]: sshd@23-10.0.0.48:22-10.0.0.1:40574.service: Deactivated successfully.
May 16 16:10:57.424265 systemd[1]: session-24.scope: Deactivated successfully.
May 16 16:10:57.425124 systemd-logind[1510]: Session 24 logged out. Waiting for processes to exit.
May 16 16:10:57.429745 systemd[1]: Started sshd@24-10.0.0.48:22-10.0.0.1:40576.service - OpenSSH per-connection server daemon (10.0.0.1:40576).
May 16 16:10:57.430257 systemd-logind[1510]: Removed session 24.
May 16 16:10:57.476010 sshd[4408]: Accepted publickey for core from 10.0.0.1 port 40576 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI
May 16 16:10:57.477139 sshd-session[4408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:10:57.481686 systemd-logind[1510]: New session 25 of user core.
May 16 16:10:57.489116 systemd[1]: Started session-25.scope - Session 25 of User core.
May 16 16:10:57.606960 kubelet[2651]: E0516 16:10:57.606678 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:10:57.607574 containerd[1531]: time="2025-05-16T16:10:57.607272438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tm24f,Uid:c292a767-6cf1-4b9a-8696-e66db00b101c,Namespace:kube-system,Attempt:0,}"
May 16 16:10:57.623084 containerd[1531]: time="2025-05-16T16:10:57.622959572Z" level=info msg="connecting to shim 162e6c718a76d036583a042b3feafa22ffacc9ebef55d201bf52720fd8dc0994" address="unix:///run/containerd/s/e6bc3c0174f5388ae501a8061738f8778e230143c6e2787e17e537bd00ae18f0" namespace=k8s.io protocol=ttrpc version=3
May 16 16:10:57.646651 systemd[1]: Started cri-containerd-162e6c718a76d036583a042b3feafa22ffacc9ebef55d201bf52720fd8dc0994.scope - libcontainer container 162e6c718a76d036583a042b3feafa22ffacc9ebef55d201bf52720fd8dc0994.
May 16 16:10:57.675384 containerd[1531]: time="2025-05-16T16:10:57.675344177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tm24f,Uid:c292a767-6cf1-4b9a-8696-e66db00b101c,Namespace:kube-system,Attempt:0,} returns sandbox id \"162e6c718a76d036583a042b3feafa22ffacc9ebef55d201bf52720fd8dc0994\""
May 16 16:10:57.676160 kubelet[2651]: E0516 16:10:57.676134 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:10:57.684395 containerd[1531]: time="2025-05-16T16:10:57.684325099Z" level=info msg="CreateContainer within sandbox \"162e6c718a76d036583a042b3feafa22ffacc9ebef55d201bf52720fd8dc0994\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 16 16:10:57.690025 containerd[1531]: time="2025-05-16T16:10:57.689972655Z" level=info msg="Container 035dca399f21d74b9ec46c67f6b3a0b72acb6d5c77ce54f967f77b4fba2baa9d: CDI devices from CRI Config.CDIDevices: []"
May 16 16:10:57.695063 containerd[1531]: time="2025-05-16T16:10:57.695021343Z" level=info msg="CreateContainer within sandbox \"162e6c718a76d036583a042b3feafa22ffacc9ebef55d201bf52720fd8dc0994\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"035dca399f21d74b9ec46c67f6b3a0b72acb6d5c77ce54f967f77b4fba2baa9d\""
May 16 16:10:57.695850 containerd[1531]: time="2025-05-16T16:10:57.695820726Z" level=info msg="StartContainer for \"035dca399f21d74b9ec46c67f6b3a0b72acb6d5c77ce54f967f77b4fba2baa9d\""
May 16 16:10:57.697756 containerd[1531]: time="2025-05-16T16:10:57.697722244Z" level=info msg="connecting to shim 035dca399f21d74b9ec46c67f6b3a0b72acb6d5c77ce54f967f77b4fba2baa9d" address="unix:///run/containerd/s/e6bc3c0174f5388ae501a8061738f8778e230143c6e2787e17e537bd00ae18f0" protocol=ttrpc version=3
May 16 16:10:57.717636 systemd[1]: Started cri-containerd-035dca399f21d74b9ec46c67f6b3a0b72acb6d5c77ce54f967f77b4fba2baa9d.scope - libcontainer container 035dca399f21d74b9ec46c67f6b3a0b72acb6d5c77ce54f967f77b4fba2baa9d.
May 16 16:10:57.743910 containerd[1531]: time="2025-05-16T16:10:57.743876266Z" level=info msg="StartContainer for \"035dca399f21d74b9ec46c67f6b3a0b72acb6d5c77ce54f967f77b4fba2baa9d\" returns successfully"
May 16 16:10:57.763054 systemd[1]: cri-containerd-035dca399f21d74b9ec46c67f6b3a0b72acb6d5c77ce54f967f77b4fba2baa9d.scope: Deactivated successfully.
May 16 16:10:57.764551 containerd[1531]: time="2025-05-16T16:10:57.764515331Z" level=info msg="received exit event container_id:\"035dca399f21d74b9ec46c67f6b3a0b72acb6d5c77ce54f967f77b4fba2baa9d\" id:\"035dca399f21d74b9ec46c67f6b3a0b72acb6d5c77ce54f967f77b4fba2baa9d\" pid:4479 exited_at:{seconds:1747411857 nanos:764237937}"
May 16 16:10:57.766858 containerd[1531]: time="2025-05-16T16:10:57.766822720Z" level=info msg="TaskExit event in podsandbox handler container_id:\"035dca399f21d74b9ec46c67f6b3a0b72acb6d5c77ce54f967f77b4fba2baa9d\" id:\"035dca399f21d74b9ec46c67f6b3a0b72acb6d5c77ce54f967f77b4fba2baa9d\" pid:4479 exited_at:{seconds:1747411857 nanos:764237937}"
May 16 16:10:58.295992 kubelet[2651]: E0516 16:10:58.295816 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:10:58.298705 containerd[1531]: time="2025-05-16T16:10:58.298656515Z" level=info msg="CreateContainer within sandbox \"162e6c718a76d036583a042b3feafa22ffacc9ebef55d201bf52720fd8dc0994\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 16 16:10:58.308287 containerd[1531]: time="2025-05-16T16:10:58.307601331Z" level=info msg="Container 28430f8db6bcde62c83b31edc8e3eb97738949a8078ca6c84f7b1e752736c038: CDI devices from CRI Config.CDIDevices: []"
May 16 16:10:58.312601 containerd[1531]: time="2025-05-16T16:10:58.312566469Z" level=info msg="CreateContainer within sandbox \"162e6c718a76d036583a042b3feafa22ffacc9ebef55d201bf52720fd8dc0994\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"28430f8db6bcde62c83b31edc8e3eb97738949a8078ca6c84f7b1e752736c038\""
May 16 16:10:58.313205 containerd[1531]: time="2025-05-16T16:10:58.313176977Z" level=info msg="StartContainer for \"28430f8db6bcde62c83b31edc8e3eb97738949a8078ca6c84f7b1e752736c038\""
May 16 16:10:58.314000 containerd[1531]: time="2025-05-16T16:10:58.313976440Z" level=info msg="connecting to shim 28430f8db6bcde62c83b31edc8e3eb97738949a8078ca6c84f7b1e752736c038" address="unix:///run/containerd/s/e6bc3c0174f5388ae501a8061738f8778e230143c6e2787e17e537bd00ae18f0" protocol=ttrpc version=3
May 16 16:10:58.338651 systemd[1]: Started cri-containerd-28430f8db6bcde62c83b31edc8e3eb97738949a8078ca6c84f7b1e752736c038.scope - libcontainer container 28430f8db6bcde62c83b31edc8e3eb97738949a8078ca6c84f7b1e752736c038.
May 16 16:10:58.362859 containerd[1531]: time="2025-05-16T16:10:58.362794276Z" level=info msg="StartContainer for \"28430f8db6bcde62c83b31edc8e3eb97738949a8078ca6c84f7b1e752736c038\" returns successfully"
May 16 16:10:58.371896 systemd[1]: cri-containerd-28430f8db6bcde62c83b31edc8e3eb97738949a8078ca6c84f7b1e752736c038.scope: Deactivated successfully.
May 16 16:10:58.373750 containerd[1531]: time="2025-05-16T16:10:58.373715492Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28430f8db6bcde62c83b31edc8e3eb97738949a8078ca6c84f7b1e752736c038\" id:\"28430f8db6bcde62c83b31edc8e3eb97738949a8078ca6c84f7b1e752736c038\" pid:4525 exited_at:{seconds:1747411858 nanos:373281261}"
May 16 16:10:58.373959 containerd[1531]: time="2025-05-16T16:10:58.373855409Z" level=info msg="received exit event container_id:\"28430f8db6bcde62c83b31edc8e3eb97738949a8078ca6c84f7b1e752736c038\" id:\"28430f8db6bcde62c83b31edc8e3eb97738949a8078ca6c84f7b1e752736c038\" pid:4525 exited_at:{seconds:1747411858 nanos:373281261}"
May 16 16:10:59.137821 kubelet[2651]: E0516 16:10:59.137769 2651 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 16:10:59.299830 kubelet[2651]: E0516 16:10:59.299802 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:10:59.302671 containerd[1531]: time="2025-05-16T16:10:59.302607380Z" level=info msg="CreateContainer within sandbox \"162e6c718a76d036583a042b3feafa22ffacc9ebef55d201bf52720fd8dc0994\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 16 16:10:59.313299 containerd[1531]: time="2025-05-16T16:10:59.313222577Z" level=info msg="Container 28e7ee0e6904e9fc9b50045ab085b040f12ec385d486b51acb28e402cd3abf4c: CDI devices from CRI Config.CDIDevices: []"
May 16 16:10:59.319966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1177913276.mount: Deactivated successfully.
May 16 16:10:59.323291 containerd[1531]: time="2025-05-16T16:10:59.323218386Z" level=info msg="CreateContainer within sandbox \"162e6c718a76d036583a042b3feafa22ffacc9ebef55d201bf52720fd8dc0994\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"28e7ee0e6904e9fc9b50045ab085b040f12ec385d486b51acb28e402cd3abf4c\""
May 16 16:10:59.324921 containerd[1531]: time="2025-05-16T16:10:59.324871794Z" level=info msg="StartContainer for \"28e7ee0e6904e9fc9b50045ab085b040f12ec385d486b51acb28e402cd3abf4c\""
May 16 16:10:59.327698 containerd[1531]: time="2025-05-16T16:10:59.327669821Z" level=info msg="connecting to shim 28e7ee0e6904e9fc9b50045ab085b040f12ec385d486b51acb28e402cd3abf4c" address="unix:///run/containerd/s/e6bc3c0174f5388ae501a8061738f8778e230143c6e2787e17e537bd00ae18f0" protocol=ttrpc version=3
May 16 16:10:59.352673 systemd[1]: Started cri-containerd-28e7ee0e6904e9fc9b50045ab085b040f12ec385d486b51acb28e402cd3abf4c.scope - libcontainer container 28e7ee0e6904e9fc9b50045ab085b040f12ec385d486b51acb28e402cd3abf4c.
May 16 16:10:59.385354 containerd[1531]: time="2025-05-16T16:10:59.385307638Z" level=info msg="StartContainer for \"28e7ee0e6904e9fc9b50045ab085b040f12ec385d486b51acb28e402cd3abf4c\" returns successfully"
May 16 16:10:59.385618 systemd[1]: cri-containerd-28e7ee0e6904e9fc9b50045ab085b040f12ec385d486b51acb28e402cd3abf4c.scope: Deactivated successfully.
May 16 16:10:59.386537 containerd[1531]: time="2025-05-16T16:10:59.385667031Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28e7ee0e6904e9fc9b50045ab085b040f12ec385d486b51acb28e402cd3abf4c\" id:\"28e7ee0e6904e9fc9b50045ab085b040f12ec385d486b51acb28e402cd3abf4c\" pid:4568 exited_at:{seconds:1747411859 nanos:385381196}"
May 16 16:10:59.386537 containerd[1531]: time="2025-05-16T16:10:59.385987825Z" level=info msg="received exit event container_id:\"28e7ee0e6904e9fc9b50045ab085b040f12ec385d486b51acb28e402cd3abf4c\" id:\"28e7ee0e6904e9fc9b50045ab085b040f12ec385d486b51acb28e402cd3abf4c\" pid:4568 exited_at:{seconds:1747411859 nanos:385381196}"
May 16 16:10:59.404414 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28e7ee0e6904e9fc9b50045ab085b040f12ec385d486b51acb28e402cd3abf4c-rootfs.mount: Deactivated successfully.
May 16 16:11:00.305059 kubelet[2651]: E0516 16:11:00.305006 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:11:00.310406 containerd[1531]: time="2025-05-16T16:11:00.310366084Z" level=info msg="CreateContainer within sandbox \"162e6c718a76d036583a042b3feafa22ffacc9ebef55d201bf52720fd8dc0994\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 16 16:11:00.339888 containerd[1531]: time="2025-05-16T16:11:00.339839161Z" level=info msg="Container 4977d768414b85c6398177a4e9e8ab9b44bc0d77320c6104b6a57c136e5c3a0e: CDI devices from CRI Config.CDIDevices: []"
May 16 16:11:00.346535 containerd[1531]: time="2025-05-16T16:11:00.346501443Z" level=info msg="CreateContainer within sandbox \"162e6c718a76d036583a042b3feafa22ffacc9ebef55d201bf52720fd8dc0994\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4977d768414b85c6398177a4e9e8ab9b44bc0d77320c6104b6a57c136e5c3a0e\""
May 16 16:11:00.347185 containerd[1531]: time="2025-05-16T16:11:00.347150511Z" level=info msg="StartContainer for \"4977d768414b85c6398177a4e9e8ab9b44bc0d77320c6104b6a57c136e5c3a0e\""
May 16 16:11:00.348076 containerd[1531]: time="2025-05-16T16:11:00.348048855Z" level=info msg="connecting to shim 4977d768414b85c6398177a4e9e8ab9b44bc0d77320c6104b6a57c136e5c3a0e" address="unix:///run/containerd/s/e6bc3c0174f5388ae501a8061738f8778e230143c6e2787e17e537bd00ae18f0" protocol=ttrpc version=3
May 16 16:11:00.369674 systemd[1]: Started cri-containerd-4977d768414b85c6398177a4e9e8ab9b44bc0d77320c6104b6a57c136e5c3a0e.scope - libcontainer container 4977d768414b85c6398177a4e9e8ab9b44bc0d77320c6104b6a57c136e5c3a0e.
May 16 16:11:00.390908 systemd[1]: cri-containerd-4977d768414b85c6398177a4e9e8ab9b44bc0d77320c6104b6a57c136e5c3a0e.scope: Deactivated successfully.
May 16 16:11:00.398294 containerd[1531]: time="2025-05-16T16:11:00.398263084Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4977d768414b85c6398177a4e9e8ab9b44bc0d77320c6104b6a57c136e5c3a0e\" id:\"4977d768414b85c6398177a4e9e8ab9b44bc0d77320c6104b6a57c136e5c3a0e\" pid:4607 exited_at:{seconds:1747411860 nanos:398043088}"
May 16 16:11:00.398572 containerd[1531]: time="2025-05-16T16:11:00.398287484Z" level=info msg="received exit event container_id:\"4977d768414b85c6398177a4e9e8ab9b44bc0d77320c6104b6a57c136e5c3a0e\" id:\"4977d768414b85c6398177a4e9e8ab9b44bc0d77320c6104b6a57c136e5c3a0e\" pid:4607 exited_at:{seconds:1747411860 nanos:398043088}"
May 16 16:11:00.398761 containerd[1531]: time="2025-05-16T16:11:00.398735996Z" level=info msg="StartContainer for \"4977d768414b85c6398177a4e9e8ab9b44bc0d77320c6104b6a57c136e5c3a0e\" returns successfully"
May 16 16:11:00.416223 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4977d768414b85c6398177a4e9e8ab9b44bc0d77320c6104b6a57c136e5c3a0e-rootfs.mount: Deactivated successfully.
May 16 16:11:01.309581 kubelet[2651]: E0516 16:11:01.309340 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:11:01.312063 containerd[1531]: time="2025-05-16T16:11:01.312011889Z" level=info msg="CreateContainer within sandbox \"162e6c718a76d036583a042b3feafa22ffacc9ebef55d201bf52720fd8dc0994\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 16 16:11:01.323104 containerd[1531]: time="2025-05-16T16:11:01.323054788Z" level=info msg="Container 64946ac151432a905f0691bde90df5a0e45427c6d65177ebdf7c5a4fc180730f: CDI devices from CRI Config.CDIDevices: []"
May 16 16:11:01.334881 containerd[1531]: time="2025-05-16T16:11:01.334840394Z" level=info msg="CreateContainer within sandbox \"162e6c718a76d036583a042b3feafa22ffacc9ebef55d201bf52720fd8dc0994\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"64946ac151432a905f0691bde90df5a0e45427c6d65177ebdf7c5a4fc180730f\""
May 16 16:11:01.335583 containerd[1531]: time="2025-05-16T16:11:01.335345746Z" level=info msg="StartContainer for \"64946ac151432a905f0691bde90df5a0e45427c6d65177ebdf7c5a4fc180730f\""
May 16 16:11:01.336747 containerd[1531]: time="2025-05-16T16:11:01.336442288Z" level=info msg="connecting to shim 64946ac151432a905f0691bde90df5a0e45427c6d65177ebdf7c5a4fc180730f" address="unix:///run/containerd/s/e6bc3c0174f5388ae501a8061738f8778e230143c6e2787e17e537bd00ae18f0" protocol=ttrpc version=3
May 16 16:11:01.353641 systemd[1]: Started cri-containerd-64946ac151432a905f0691bde90df5a0e45427c6d65177ebdf7c5a4fc180730f.scope - libcontainer container 64946ac151432a905f0691bde90df5a0e45427c6d65177ebdf7c5a4fc180730f.
May 16 16:11:01.392483 containerd[1531]: time="2025-05-16T16:11:01.392070816Z" level=info msg="StartContainer for \"64946ac151432a905f0691bde90df5a0e45427c6d65177ebdf7c5a4fc180730f\" returns successfully"
May 16 16:11:01.446930 containerd[1531]: time="2025-05-16T16:11:01.446887357Z" level=info msg="TaskExit event in podsandbox handler container_id:\"64946ac151432a905f0691bde90df5a0e45427c6d65177ebdf7c5a4fc180730f\" id:\"22390f07a6d337cc302ffe96146999056cdfad764c9144b72588cd691f6f5289\" pid:4678 exited_at:{seconds:1747411861 nanos:445559659}"
May 16 16:11:01.690516 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 16 16:11:02.315429 kubelet[2651]: E0516 16:11:02.315399 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:11:03.608360 kubelet[2651]: E0516 16:11:03.608299 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:11:03.866570 containerd[1531]: time="2025-05-16T16:11:03.866378893Z" level=info msg="TaskExit event in podsandbox handler container_id:\"64946ac151432a905f0691bde90df5a0e45427c6d65177ebdf7c5a4fc180730f\" id:\"fd0b9316b03704b5647593cac2594031d158dc356f0412ae30faa5bced5901c4\" pid:4964 exit_status:1 exited_at:{seconds:1747411863 nanos:865734022}"
May 16 16:11:04.601123 systemd-networkd[1440]: lxc_health: Link UP
May 16 16:11:04.601410 systemd-networkd[1440]: lxc_health: Gained carrier
May 16 16:11:05.609003 kubelet[2651]: E0516 16:11:05.609 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:11:05.638129 kubelet[2651]: I0516 16:11:05.638063 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tm24f" podStartSLOduration=8.638047436 podStartE2EDuration="8.638047436s" podCreationTimestamp="2025-05-16 16:10:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:11:02.334073287 +0000 UTC m=+78.334506870" watchObservedRunningTime="2025-05-16 16:11:05.638047436 +0000 UTC m=+81.638480939"
May 16 16:11:05.992795 containerd[1531]: time="2025-05-16T16:11:05.992565269Z" level=info msg="TaskExit event in podsandbox handler container_id:\"64946ac151432a905f0691bde90df5a0e45427c6d65177ebdf7c5a4fc180730f\" id:\"9f09030c2e15c899b74ff2afa4c095d33f7a0bfa03572ade47578faeef984d8e\" pid:5210 exited_at:{seconds:1747411865 nanos:992230313}"
May 16 16:11:06.124700 systemd-networkd[1440]: lxc_health: Gained IPv6LL
May 16 16:11:06.322884 kubelet[2651]: E0516 16:11:06.322807 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:11:07.325021 kubelet[2651]: E0516 16:11:07.324967 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:11:08.074734 kubelet[2651]: E0516 16:11:08.074695 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:11:08.132118 containerd[1531]: time="2025-05-16T16:11:08.132073413Z" level=info msg="TaskExit event in podsandbox handler container_id:\"64946ac151432a905f0691bde90df5a0e45427c6d65177ebdf7c5a4fc180730f\" id:\"f50d227fca00b61317e85edeb130952757f08b97790c5df7931338ff4c21a384\" pid:5243 exited_at:{seconds:1747411868 nanos:131358018}"
May 16 16:11:10.230754 containerd[1531]: time="2025-05-16T16:11:10.230672218Z" level=info msg="TaskExit event in podsandbox handler container_id:\"64946ac151432a905f0691bde90df5a0e45427c6d65177ebdf7c5a4fc180730f\" id:\"7ad16b8720f236cc4c8f7502e3e9e6cd59538e96c2b1d14d700bdd3b753beb42\" pid:5266 exited_at:{seconds:1747411870 nanos:230350380}"
May 16 16:11:10.247622 sshd[4410]: Connection closed by 10.0.0.1 port 40576
May 16 16:11:10.248106 sshd-session[4408]: pam_unix(sshd:session): session closed for user core
May 16 16:11:10.251170 systemd-logind[1510]: Session 25 logged out. Waiting for processes to exit.
May 16 16:11:10.251263 systemd[1]: sshd@24-10.0.0.48:22-10.0.0.1:40576.service: Deactivated successfully.
May 16 16:11:10.252960 systemd[1]: session-25.scope: Deactivated successfully.
May 16 16:11:10.255044 systemd-logind[1510]: Removed session 25.