May 14 18:00:23.808438 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 14 18:00:23.808460 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed May 14 16:42:23 -00 2025 May 14 18:00:23.808470 kernel: KASLR enabled May 14 18:00:23.808475 kernel: efi: EFI v2.7 by EDK II May 14 18:00:23.808480 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 May 14 18:00:23.808486 kernel: random: crng init done May 14 18:00:23.808492 kernel: secureboot: Secure boot disabled May 14 18:00:23.808498 kernel: ACPI: Early table checksum verification disabled May 14 18:00:23.808504 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) May 14 18:00:23.808511 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) May 14 18:00:23.808517 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 14 18:00:23.808522 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 18:00:23.808528 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 14 18:00:23.808533 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 18:00:23.808540 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 18:00:23.808548 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 18:00:23.808554 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 18:00:23.808560 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 14 18:00:23.808566 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 14 18:00:23.808572 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 14 18:00:23.808578 kernel: ACPI: Use ACPI SPCR as default console: Yes May 14 18:00:23.808584 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 14 18:00:23.808590 kernel: NODE_DATA(0) allocated [mem 0xdc965dc0-0xdc96cfff] May 14 18:00:23.808596 kernel: Zone ranges: May 14 18:00:23.808602 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 14 18:00:23.808609 kernel: DMA32 empty May 14 18:00:23.808615 kernel: Normal empty May 14 18:00:23.808620 kernel: Device empty May 14 18:00:23.808626 kernel: Movable zone start for each node May 14 18:00:23.808632 kernel: Early memory node ranges May 14 18:00:23.808638 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] May 14 18:00:23.808648 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] May 14 18:00:23.808654 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] May 14 18:00:23.808660 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] May 14 18:00:23.808672 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] May 14 18:00:23.808678 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] May 14 18:00:23.808684 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] May 14 18:00:23.808692 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] May 14 18:00:23.808698 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] May 14 18:00:23.808704 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] May 14 18:00:23.808712 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] May 
14 18:00:23.808719 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] May 14 18:00:23.808725 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 14 18:00:23.808733 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 14 18:00:23.808739 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 14 18:00:23.808746 kernel: psci: probing for conduit method from ACPI. May 14 18:00:23.808752 kernel: psci: PSCIv1.1 detected in firmware. May 14 18:00:23.808758 kernel: psci: Using standard PSCI v0.2 function IDs May 14 18:00:23.808765 kernel: psci: Trusted OS migration not required May 14 18:00:23.808771 kernel: psci: SMC Calling Convention v1.1 May 14 18:00:23.808778 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 14 18:00:23.808784 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168 May 14 18:00:23.808790 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096 May 14 18:00:23.808798 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 14 18:00:23.808805 kernel: Detected PIPT I-cache on CPU0 May 14 18:00:23.808811 kernel: CPU features: detected: GIC system register CPU interface May 14 18:00:23.808817 kernel: CPU features: detected: Spectre-v4 May 14 18:00:23.808823 kernel: CPU features: detected: Spectre-BHB May 14 18:00:23.808830 kernel: CPU features: kernel page table isolation forced ON by KASLR May 14 18:00:23.808836 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 14 18:00:23.808842 kernel: CPU features: detected: ARM erratum 1418040 May 14 18:00:23.808849 kernel: CPU features: detected: SSBS not fully self-synchronizing May 14 18:00:23.808855 kernel: alternatives: applying boot alternatives May 14 18:00:23.808862 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fb5d39925446c9958629410eadbe2d2aa0566996d55f4385bdd8a5ce4ad5f562 May 14 18:00:23.808870 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 14 18:00:23.808877 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 14 18:00:23.808883 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 14 18:00:23.808890 kernel: Fallback order for Node 0: 0 May 14 18:00:23.808897 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 May 14 18:00:23.808903 kernel: Policy zone: DMA May 14 18:00:23.808910 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 14 18:00:23.808916 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB May 14 18:00:23.808922 kernel: software IO TLB: area num 4. May 14 18:00:23.808929 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB May 14 18:00:23.808936 kernel: software IO TLB: mapped [mem 0x00000000d8c00000-0x00000000d9000000] (4MB) May 14 18:00:23.808942 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 14 18:00:23.808951 kernel: rcu: Preemptible hierarchical RCU implementation. May 14 18:00:23.808958 kernel: rcu: RCU event tracing is enabled. May 14 18:00:23.808964 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 14 18:00:23.808973 kernel: Trampoline variant of Tasks RCU enabled. 
May 14 18:00:23.808980 kernel: Tracing variant of Tasks RCU enabled. May 14 18:00:23.808988 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 14 18:00:23.808995 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 14 18:00:23.809001 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 14 18:00:23.809008 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 14 18:00:23.809014 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 14 18:00:23.809020 kernel: GICv3: 256 SPIs implemented May 14 18:00:23.809028 kernel: GICv3: 0 Extended SPIs implemented May 14 18:00:23.809034 kernel: Root IRQ handler: gic_handle_irq May 14 18:00:23.809041 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 14 18:00:23.809047 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 May 14 18:00:23.809053 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 14 18:00:23.809060 kernel: ITS [mem 0x08080000-0x0809ffff] May 14 18:00:23.809066 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400e0000 (indirect, esz 8, psz 64K, shr 1) May 14 18:00:23.809073 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400f0000 (flat, esz 8, psz 64K, shr 1) May 14 18:00:23.809079 kernel: GICv3: using LPI property table @0x0000000040100000 May 14 18:00:23.809086 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000 May 14 18:00:23.809093 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 14 18:00:23.809099 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 18:00:23.809107 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 14 18:00:23.809114 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 14 18:00:23.809120 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 14 18:00:23.809127 kernel: arm-pv: using stolen time PV May 14 18:00:23.809133 kernel: Console: colour dummy device 80x25 May 14 18:00:23.809140 kernel: ACPI: Core revision 20240827 May 14 18:00:23.809147 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 14 18:00:23.809153 kernel: pid_max: default: 32768 minimum: 301 May 14 18:00:23.809160 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 14 18:00:23.809167 kernel: landlock: Up and running. May 14 18:00:23.809173 kernel: SELinux: Initializing. May 14 18:00:23.809180 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 18:00:23.809187 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 18:00:23.809193 kernel: rcu: Hierarchical SRCU implementation. May 14 18:00:23.809200 kernel: rcu: Max phase no-delay instances is 400. May 14 18:00:23.809206 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level May 14 18:00:23.809213 kernel: Remapping and enabling EFI services. May 14 18:00:23.809219 kernel: smp: Bringing up secondary CPUs ... 
May 14 18:00:23.809226 kernel: Detected PIPT I-cache on CPU1 May 14 18:00:23.809243 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 14 18:00:23.809250 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000 May 14 18:00:23.809258 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 18:00:23.809278 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 14 18:00:23.809285 kernel: Detected PIPT I-cache on CPU2 May 14 18:00:23.809326 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 14 18:00:23.809337 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000 May 14 18:00:23.809347 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 18:00:23.809353 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 14 18:00:23.809360 kernel: Detected PIPT I-cache on CPU3 May 14 18:00:23.809367 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 14 18:00:23.809374 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000 May 14 18:00:23.809382 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 18:00:23.809388 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 14 18:00:23.809395 kernel: smp: Brought up 1 node, 4 CPUs May 14 18:00:23.809403 kernel: SMP: Total of 4 processors activated. May 14 18:00:23.809410 kernel: CPU: All CPU(s) started at EL1 May 14 18:00:23.809418 kernel: CPU features: detected: 32-bit EL0 Support May 14 18:00:23.809426 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 14 18:00:23.809433 kernel: CPU features: detected: Common not Private translations May 14 18:00:23.809440 kernel: CPU features: detected: CRC32 instructions May 14 18:00:23.809447 kernel: CPU features: detected: Enhanced Virtualization Traps May 14 18:00:23.809454 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 14 18:00:23.809461 kernel: CPU features: detected: LSE atomic instructions May 14 18:00:23.809468 kernel: CPU features: detected: Privileged Access Never May 14 18:00:23.809475 kernel: CPU features: detected: RAS Extension Support May 14 18:00:23.809483 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 14 18:00:23.809490 kernel: alternatives: applying system-wide alternatives May 14 18:00:23.809497 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 May 14 18:00:23.809504 kernel: Memory: 2440984K/2572288K available (11072K kernel code, 2276K rwdata, 8928K rodata, 39424K init, 1034K bss, 125536K reserved, 0K cma-reserved) May 14 18:00:23.809511 kernel: devtmpfs: initialized May 14 18:00:23.809518 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 14 18:00:23.809525 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 14 18:00:23.809535 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 14 18:00:23.809542 kernel: 0 pages in range for non-PLT usage May 14 18:00:23.809551 kernel: 508544 pages in range for PLT usage May 14 18:00:23.809557 kernel: pinctrl core: initialized pinctrl subsystem May 14 18:00:23.809564 kernel: SMBIOS 3.0.0 present. 
May 14 18:00:23.809571 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 14 18:00:23.809578 kernel: DMI: Memory slots populated: 1/1
May 14 18:00:23.809584 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 18:00:23.809591 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 14 18:00:23.809598 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 14 18:00:23.809605 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 14 18:00:23.809613 kernel: audit: initializing netlink subsys (disabled)
May 14 18:00:23.809620 kernel: audit: type=2000 audit(0.040:1): state=initialized audit_enabled=0 res=1
May 14 18:00:23.809627 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 18:00:23.809634 kernel: cpuidle: using governor menu
May 14 18:00:23.809640 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 14 18:00:23.809650 kernel: ASID allocator initialised with 32768 entries
May 14 18:00:23.809657 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 18:00:23.809663 kernel: Serial: AMBA PL011 UART driver
May 14 18:00:23.809670 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 18:00:23.809679 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 14 18:00:23.809685 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 14 18:00:23.809692 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 14 18:00:23.809699 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 18:00:23.809707 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 14 18:00:23.809713 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 14 18:00:23.809720 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 14 18:00:23.809727 kernel: ACPI: Added _OSI(Module Device)
May 14 18:00:23.809733 kernel: ACPI: Added _OSI(Processor Device)
May 14 18:00:23.809742 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 18:00:23.809749 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 18:00:23.809755 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 18:00:23.809762 kernel: ACPI: Interpreter enabled
May 14 18:00:23.809769 kernel: ACPI: Using GIC for interrupt routing
May 14 18:00:23.809775 kernel: ACPI: MCFG table detected, 1 entries
May 14 18:00:23.809782 kernel: ACPI: CPU0 has been hot-added
May 14 18:00:23.809789 kernel: ACPI: CPU1 has been hot-added
May 14 18:00:23.809796 kernel: ACPI: CPU2 has been hot-added
May 14 18:00:23.809803 kernel: ACPI: CPU3 has been hot-added
May 14 18:00:23.809811 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 14 18:00:23.809818 kernel: printk: legacy console [ttyAMA0] enabled
May 14 18:00:23.809825 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 18:00:23.809964 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 18:00:23.810038 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 14 18:00:23.810099 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 14 18:00:23.810156 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 14 18:00:23.810216 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 14 18:00:23.810225 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 14 18:00:23.810232 kernel: PCI host bridge to bus 0000:00
May 14 18:00:23.810359 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 14 18:00:23.810423 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 14 18:00:23.810479 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 14 18:00:23.810532 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 18:00:23.810616 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
May 14 18:00:23.810694 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 14 18:00:23.810772 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
May 14 18:00:23.810832 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
May 14 18:00:23.810892 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
May 14 18:00:23.810951 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
May 14 18:00:23.811014 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
May 14 18:00:23.811078 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
May 14 18:00:23.811134 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 14 18:00:23.811187 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 14 18:00:23.811246 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 14 18:00:23.811256 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 14 18:00:23.811263 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 14 18:00:23.811270 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 14 18:00:23.811278 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 14 18:00:23.811285 kernel: iommu: Default domain type: Translated
May 14 18:00:23.811339 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 14 18:00:23.811347 kernel: efivars: Registered efivars operations
May 14 18:00:23.811354 kernel: vgaarb: loaded
May 14 18:00:23.811361 kernel: clocksource: Switched to clocksource arch_sys_counter
May 14 18:00:23.811368 kernel: VFS: Disk quotas dquot_6.6.0
May 14 18:00:23.811375 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 18:00:23.811381 kernel: pnp: PnP ACPI init
May 14 18:00:23.811472 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 14 18:00:23.811483 kernel: pnp: PnP ACPI: found 1 devices
May 14 18:00:23.811490 kernel: NET: Registered PF_INET protocol family
May 14 18:00:23.811497 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 18:00:23.811504 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 18:00:23.811511 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 18:00:23.811519 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 18:00:23.811526 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 18:00:23.811535 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 18:00:23.811542 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 18:00:23.811549 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 18:00:23.811556 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 18:00:23.811567 kernel: PCI: CLS 0 bytes, default 64
May 14 18:00:23.811575 kernel: kvm [1]: HYP mode not available
May 14 18:00:23.811582 kernel: Initialise system trusted keyrings
May 14 18:00:23.811589 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 18:00:23.811596 kernel: Key type asymmetric registered
May 14 18:00:23.811604 kernel: Asymmetric key parser 'x509' registered
May 14 18:00:23.811612 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 14 18:00:23.811619 kernel: io scheduler mq-deadline registered
May 14 18:00:23.811626 kernel: io scheduler kyber registered
May 14 18:00:23.811633 kernel: io scheduler bfq registered
May 14 18:00:23.811643 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 14 18:00:23.811650 kernel: ACPI: button: Power Button [PWRB]
May 14 18:00:23.811657 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 14 18:00:23.811724 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 14 18:00:23.811735 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 18:00:23.811742 kernel: thunder_xcv, ver 1.0
May 14 18:00:23.811749 kernel: thunder_bgx, ver 1.0
May 14 18:00:23.811756 kernel: nicpf, ver 1.0
May 14 18:00:23.811763 kernel: nicvf, ver 1.0
May 14 18:00:23.811832 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 14 18:00:23.811890 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T18:00:23 UTC (1747245623)
May 14 18:00:23.811899 kernel: hid: raw HID events driver (C) Jiri Kosina
May 14 18:00:23.811908 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
May 14 18:00:23.811915 kernel: watchdog: NMI not fully supported
May 14 18:00:23.811922 kernel: watchdog: Hard watchdog permanently disabled
May 14 18:00:23.811929 kernel: NET: Registered PF_INET6 protocol family
May 14 18:00:23.811937 kernel: Segment Routing with IPv6
May 14 18:00:23.811944 kernel: In-situ OAM (IOAM) with IPv6
May 14 18:00:23.811951 kernel: NET: Registered PF_PACKET protocol family
May 14 18:00:23.811958 kernel: Key type dns_resolver registered
May 14 18:00:23.811964 kernel: registered taskstats version 1
May 14 18:00:23.811971 kernel: Loading compiled-in X.509 certificates
May 14 18:00:23.811980 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: c0c250ba312a1bb9bceb2432c486db6e5999df1a'
May 14 18:00:23.811987 kernel: Demotion targets for Node 0: null
May 14 18:00:23.811994 kernel: Key type .fscrypt registered
May 14 18:00:23.812001 kernel: Key type fscrypt-provisioning registered
May 14 18:00:23.812008 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 18:00:23.812020 kernel: ima: Allocated hash algorithm: sha1
May 14 18:00:23.812027 kernel: ima: No architecture policies found
May 14 18:00:23.812034 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 14 18:00:23.812043 kernel: clk: Disabling unused clocks
May 14 18:00:23.812050 kernel: PM: genpd: Disabling unused power domains
May 14 18:00:23.812057 kernel: Warning: unable to open an initial console.
May 14 18:00:23.812064 kernel: Freeing unused kernel memory: 39424K May 14 18:00:23.812071 kernel: Run /init as init process May 14 18:00:23.812078 kernel: with arguments: May 14 18:00:23.812085 kernel: /init May 14 18:00:23.812092 kernel: with environment: May 14 18:00:23.812099 kernel: HOME=/ May 14 18:00:23.812107 kernel: TERM=linux May 14 18:00:23.812114 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 14 18:00:23.812122 systemd[1]: Successfully made /usr/ read-only. May 14 18:00:23.812132 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 18:00:23.812141 systemd[1]: Detected virtualization kvm. May 14 18:00:23.812148 systemd[1]: Detected architecture arm64. May 14 18:00:23.812155 systemd[1]: Running in initrd. May 14 18:00:23.812163 systemd[1]: No hostname configured, using default hostname. May 14 18:00:23.812172 systemd[1]: Hostname set to . May 14 18:00:23.812180 systemd[1]: Initializing machine ID from VM UUID. May 14 18:00:23.812187 systemd[1]: Queued start job for default target initrd.target. May 14 18:00:23.812194 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 18:00:23.812202 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 18:00:23.812210 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 14 18:00:23.812218 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 18:00:23.812226 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 14 18:00:23.812240 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 14 18:00:23.812251 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 14 18:00:23.812259 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 14 18:00:23.812268 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 18:00:23.812275 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 18:00:23.812283 systemd[1]: Reached target paths.target - Path Units. May 14 18:00:23.812304 systemd[1]: Reached target slices.target - Slice Units. May 14 18:00:23.812312 systemd[1]: Reached target swap.target - Swaps. May 14 18:00:23.812319 systemd[1]: Reached target timers.target - Timer Units. May 14 18:00:23.812327 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 14 18:00:23.812337 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 18:00:23.812345 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 14 18:00:23.812353 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 14 18:00:23.812360 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 18:00:23.812368 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 18:00:23.812377 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 14 18:00:23.812385 systemd[1]: Reached target sockets.target - Socket Units. May 14 18:00:23.812393 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 14 18:00:23.812401 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 18:00:23.812408 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 14 18:00:23.812416 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 14 18:00:23.812424 systemd[1]: Starting systemd-fsck-usr.service... May 14 18:00:23.812432 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 18:00:23.812441 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 18:00:23.812449 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 18:00:23.812456 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 14 18:00:23.812464 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 18:00:23.812472 systemd[1]: Finished systemd-fsck-usr.service. May 14 18:00:23.812481 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 14 18:00:23.812508 systemd-journald[244]: Collecting audit messages is disabled. May 14 18:00:23.812528 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:00:23.812537 systemd-journald[244]: Journal started May 14 18:00:23.812558 systemd-journald[244]: Runtime Journal (/run/log/journal/4414168253964a0091523e915f6f65ab) is 6M, max 48.5M, 42.4M free. May 14 18:00:23.820383 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 14 18:00:23.805475 systemd-modules-load[245]: Inserted module 'overlay' May 14 18:00:23.825342 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 18:00:23.825389 kernel: Bridge firewalling registered May 14 18:00:23.825409 systemd[1]: Started systemd-journald.service - Journal Service. May 14 18:00:23.823745 systemd-modules-load[245]: Inserted module 'br_netfilter' May 14 18:00:23.828204 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 18:00:23.829603 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 18:00:23.834189 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 18:00:23.837234 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 18:00:23.849737 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 18:00:23.852960 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 18:00:23.857179 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 14 18:00:23.859487 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 18:00:23.860219 systemd-tmpfiles[275]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 14 18:00:23.861487 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 18:00:23.868435 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
May 14 18:00:23.872075 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 18:00:23.880884 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fb5d39925446c9958629410eadbe2d2aa0566996d55f4385bdd8a5ce4ad5f562
May 14 18:00:23.915848 systemd-resolved[294]: Positive Trust Anchors:
May 14 18:00:23.915866 systemd-resolved[294]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 18:00:23.915899 systemd-resolved[294]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 18:00:23.920847 systemd-resolved[294]: Defaulting to hostname 'linux'.
May 14 18:00:23.921841 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 18:00:23.925915 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 18:00:23.961322 kernel: SCSI subsystem initialized
May 14 18:00:23.968315 kernel: Loading iSCSI transport class v2.0-870.
May 14 18:00:23.977316 kernel: iscsi: registered transport (tcp)
May 14 18:00:23.990313 kernel: iscsi: registered transport (qla4xxx)
May 14 18:00:23.990334 kernel: QLogic iSCSI HBA Driver
May 14 18:00:24.007309 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 18:00:24.026027 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 18:00:24.027701 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 18:00:24.074524 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 18:00:24.076949 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 18:00:24.136317 kernel: raid6: neonx8 gen() 15675 MB/s
May 14 18:00:24.153305 kernel: raid6: neonx4 gen() 15717 MB/s
May 14 18:00:24.170306 kernel: raid6: neonx2 gen() 13136 MB/s
May 14 18:00:24.187304 kernel: raid6: neonx1 gen() 10376 MB/s
May 14 18:00:24.204306 kernel: raid6: int64x8 gen() 6854 MB/s
May 14 18:00:24.221312 kernel: raid6: int64x4 gen() 7324 MB/s
May 14 18:00:24.238306 kernel: raid6: int64x2 gen() 6077 MB/s
May 14 18:00:24.255316 kernel: raid6: int64x1 gen() 5049 MB/s
May 14 18:00:24.255345 kernel: raid6: using algorithm neonx4 gen() 15717 MB/s
May 14 18:00:24.272315 kernel: raid6: .... xor() 12312 MB/s, rmw enabled
May 14 18:00:24.272327 kernel: raid6: using neon recovery algorithm
May 14 18:00:24.277312 kernel: xor: measuring software checksum speed
May 14 18:00:24.277332 kernel: 8regs : 21596 MB/sec
May 14 18:00:24.278690 kernel: 32regs : 19119 MB/sec
May 14 18:00:24.278701 kernel: arm64_neon : 27946 MB/sec
May 14 18:00:24.278710 kernel: xor: using function: arm64_neon (27946 MB/sec)
May 14 18:00:24.334337 kernel: Btrfs loaded, zoned=no, fsverity=no
May 14 18:00:24.342336 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 14 18:00:24.344965 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 18:00:24.373870 systemd-udevd[500]: Using default interface naming scheme 'v255'.
May 14 18:00:24.377986 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 18:00:24.380524 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 14 18:00:24.404942 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation
May 14 18:00:24.429496 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 18:00:24.432087 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 18:00:24.486346 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 18:00:24.489223 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 14 18:00:24.531969 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 14 18:00:24.538695 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 14 18:00:24.538815 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 14 18:00:24.538829 kernel: GPT:9289727 != 19775487
May 14 18:00:24.538842 kernel: GPT:Alternate GPT header not at the end of the disk.
May 14 18:00:24.538853 kernel: GPT:9289727 != 19775487
May 14 18:00:24.538861 kernel: GPT: Use GNU Parted to correct GPT errors.
May 14 18:00:24.538871 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 18:00:24.540541 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 18:00:24.540667 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:00:24.553961 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:00:24.555994 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:00:24.580931 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 14 18:00:24.588695 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 14 18:00:24.596326 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 14 18:00:24.597699 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:00:24.611951 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 18:00:24.618429 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 14 18:00:24.619421 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 14 18:00:24.622286 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 18:00:24.624310 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 18:00:24.626194 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 18:00:24.628979 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 14 18:00:24.630925 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 14 18:00:24.661116 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 14 18:00:24.738769 disk-uuid[593]: Primary Header is updated. May 14 18:00:24.738769 disk-uuid[593]: Secondary Entries is updated. May 14 18:00:24.738769 disk-uuid[593]: Secondary Header is updated. May 14 18:00:24.742317 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 18:00:25.763305 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 18:00:25.763362 disk-uuid[601]: The operation has completed successfully. May 14 18:00:25.795117 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 18:00:25.796117 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 14 18:00:25.817055 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 14 18:00:25.840431 sh[613]: Success May 14 18:00:25.859071 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 14 18:00:25.859115 kernel: device-mapper: uevent: version 1.0.3 May 14 18:00:25.860421 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 14 18:00:25.874770 kernel: device-mapper: verity: sha256 using shash "sha256-ce" May 14 18:00:25.901604 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 14 18:00:25.904316 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 14 18:00:25.917813 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 14 18:00:25.926479 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 14 18:00:25.926625 kernel: BTRFS: device fsid e21bbf34-4c71-4257-bd6f-908a2b81e5ab devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (625) May 14 18:00:25.928736 kernel: BTRFS info (device dm-0): first mount of filesystem e21bbf34-4c71-4257-bd6f-908a2b81e5ab May 14 18:00:25.928765 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 14 18:00:25.928775 kernel: BTRFS info (device dm-0): using free-space-tree May 14 18:00:25.936059 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 14 18:00:25.937541 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 14 18:00:25.939065 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 14 18:00:25.939943 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 14 18:00:25.941743 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 14 18:00:25.971324 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (655) May 14 18:00:25.971383 kernel: BTRFS info (device vda6): first mount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2 May 14 18:00:25.972885 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 18:00:25.972925 kernel: BTRFS info (device vda6): using free-space-tree May 14 18:00:25.982314 kernel: BTRFS info (device vda6): last unmount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2 May 14 18:00:25.984642 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 14 18:00:25.987720 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 14 18:00:26.066481 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 18:00:26.070654 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 18:00:26.121500 systemd-networkd[800]: lo: Link UP May 14 18:00:26.121512 systemd-networkd[800]: lo: Gained carrier May 14 18:00:26.122245 systemd-networkd[800]: Enumeration completed May 14 18:00:26.122362 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 18:00:26.123136 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 18:00:26.123139 systemd-networkd[800]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 18:00:26.124080 systemd[1]: Reached target network.target - Network. May 14 18:00:26.124539 systemd-networkd[800]: eth0: Link UP May 14 18:00:26.124543 systemd-networkd[800]: eth0: Gained carrier May 14 18:00:26.124553 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 18:00:26.145636 ignition[697]: Ignition 2.21.0 May 14 18:00:26.145653 ignition[697]: Stage: fetch-offline May 14 18:00:26.145685 ignition[697]: no configs at "/usr/lib/ignition/base.d" May 14 18:00:26.145693 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 18:00:26.147163 ignition[697]: parsed url from cmdline: "" May 14 18:00:26.148366 systemd-networkd[800]: eth0: DHCPv4 address 10.0.0.60/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 18:00:26.147170 ignition[697]: no config URL provided May 14 18:00:26.147178 ignition[697]: reading system config file "/usr/lib/ignition/user.ign" May 14 18:00:26.147194 ignition[697]: no config at "/usr/lib/ignition/user.ign" May 14 18:00:26.147219 ignition[697]: op(1): [started] loading QEMU firmware config module May 14 18:00:26.147224 ignition[697]: op(1): executing: "modprobe" "qemu_fw_cfg" May 14 18:00:26.158678 ignition[697]: op(1): [finished] loading QEMU firmware config module May 14 18:00:26.196192 ignition[697]: parsing config with SHA512: aeb7848156319ce0badc2b267ea6a85af15567d873fc12e0b69707e939c4d5b3889ee0b8ec07fc1f628457b1bf04b35c0ac930ba29fd1830d63ed8ab10ad8554 May 14 18:00:26.200728 unknown[697]: fetched base config from "system" May 14 18:00:26.200741 unknown[697]: fetched user config from "qemu" May 14 18:00:26.201164 ignition[697]: fetch-offline: fetch-offline passed May 14 18:00:26.203061 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
May 14 18:00:26.201236 ignition[697]: Ignition finished successfully May 14 18:00:26.204747 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 14 18:00:26.205673 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 14 18:00:26.234323 ignition[813]: Ignition 2.21.0 May 14 18:00:26.234339 ignition[813]: Stage: kargs May 14 18:00:26.234493 ignition[813]: no configs at "/usr/lib/ignition/base.d" May 14 18:00:26.234503 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 18:00:26.235276 ignition[813]: kargs: kargs passed May 14 18:00:26.238269 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 14 18:00:26.235345 ignition[813]: Ignition finished successfully May 14 18:00:26.240256 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 14 18:00:26.263761 ignition[821]: Ignition 2.21.0 May 14 18:00:26.263781 ignition[821]: Stage: disks May 14 18:00:26.263928 ignition[821]: no configs at "/usr/lib/ignition/base.d" May 14 18:00:26.263937 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 18:00:26.265442 ignition[821]: disks: disks passed May 14 18:00:26.267930 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 14 18:00:26.265502 ignition[821]: Ignition finished successfully May 14 18:00:26.269661 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 14 18:00:26.271392 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 14 18:00:26.273147 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 18:00:26.275133 systemd[1]: Reached target sysinit.target - System Initialization. May 14 18:00:26.277122 systemd[1]: Reached target basic.target - Basic System. May 14 18:00:26.279836 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 14 18:00:26.310890 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 14 18:00:26.315518 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 14 18:00:26.317852 systemd[1]: Mounting sysroot.mount - /sysroot... May 14 18:00:26.385327 kernel: EXT4-fs (vda9): mounted filesystem a9c1ea72-ce96-48c1-8c16-d7102e51beed r/w with ordered data mode. Quota mode: none. May 14 18:00:26.385974 systemd[1]: Mounted sysroot.mount - /sysroot. May 14 18:00:26.387284 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 14 18:00:26.391798 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 18:00:26.394185 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 14 18:00:26.395231 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 14 18:00:26.395318 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 18:00:26.395348 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 14 18:00:26.405950 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 14 18:00:26.408059 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
May 14 18:00:26.414410 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (840) May 14 18:00:26.416467 kernel: BTRFS info (device vda6): first mount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2 May 14 18:00:26.416509 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 18:00:26.416519 kernel: BTRFS info (device vda6): using free-space-tree May 14 18:00:26.420534 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 14 18:00:26.452964 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory May 14 18:00:26.457833 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory May 14 18:00:26.462377 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory May 14 18:00:26.466408 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory May 14 18:00:26.544353 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 14 18:00:26.546522 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 14 18:00:26.548145 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 14 18:00:26.570472 kernel: BTRFS info (device vda6): last unmount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2 May 14 18:00:26.584510 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 14 18:00:26.596998 ignition[954]: INFO : Ignition 2.21.0 May 14 18:00:26.596998 ignition[954]: INFO : Stage: mount May 14 18:00:26.598677 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 18:00:26.598677 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 18:00:26.601626 ignition[954]: INFO : mount: mount passed May 14 18:00:26.601626 ignition[954]: INFO : Ignition finished successfully May 14 18:00:26.603521 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 14 18:00:26.606463 systemd[1]: Starting ignition-files.service - Ignition (files)... May 14 18:00:26.925786 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 14 18:00:26.927381 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 18:00:26.954707 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (966) May 14 18:00:26.954758 kernel: BTRFS info (device vda6): first mount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2 May 14 18:00:26.954769 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 18:00:26.956312 kernel: BTRFS info (device vda6): using free-space-tree May 14 18:00:26.958627 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 14 18:00:26.992688 ignition[983]: INFO : Ignition 2.21.0 May 14 18:00:26.992688 ignition[983]: INFO : Stage: files May 14 18:00:26.994439 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 18:00:26.994439 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 18:00:26.996695 ignition[983]: DEBUG : files: compiled without relabeling support, skipping May 14 18:00:26.997883 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 14 18:00:26.997883 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 14 18:00:27.000933 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 14 18:00:27.000933 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 14 18:00:27.000933 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 14 18:00:27.000933 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 14 18:00:27.000933 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 14 18:00:26.998921 unknown[983]: wrote ssh authorized keys file for user: core May 14 18:00:27.054162 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 14 18:00:27.186531 systemd-networkd[800]: eth0: Gained IPv6LL May 14 18:00:27.318902 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 14 18:00:27.318902 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 14 18:00:27.323040 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 14 18:00:27.765907 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 14 18:00:27.845954 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 14 18:00:27.848330 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 14 18:00:27.848330 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 14 18:00:27.848330 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 14 18:00:27.848330 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 14 18:00:27.848330 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 18:00:27.848330 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 18:00:27.848330 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 18:00:27.848330 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" May 14 18:00:27.861771 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 18:00:27.861771 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 18:00:27.861771 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 18:00:27.861771 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 18:00:27.861771 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 18:00:27.861771 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 May 14 18:00:28.117355 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 14 18:00:28.419471 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 18:00:28.419471 ignition[983]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 14 18:00:28.423088 ignition[983]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 18:00:28.425168 ignition[983]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 18:00:28.425168 ignition[983]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 14 18:00:28.425168 ignition[983]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 14 18:00:28.425168 ignition[983]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 18:00:28.425168 ignition[983]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 18:00:28.425168 ignition[983]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 14 18:00:28.425168 ignition[983]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 14 18:00:28.442124 ignition[983]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 14 18:00:28.445747 ignition[983]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 14 18:00:28.448449 ignition[983]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 14 18:00:28.448449 ignition[983]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 14 18:00:28.448449 ignition[983]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 14 18:00:28.448449 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 18:00:28.448449 
ignition[983]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 18:00:28.448449 ignition[983]: INFO : files: files passed May 14 18:00:28.448449 ignition[983]: INFO : Ignition finished successfully May 14 18:00:28.449132 systemd[1]: Finished ignition-files.service - Ignition (files). May 14 18:00:28.452141 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 14 18:00:28.454436 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 14 18:00:28.480535 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 18:00:28.481749 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 14 18:00:28.484366 initrd-setup-root-after-ignition[1013]: grep: /sysroot/oem/oem-release: No such file or directory May 14 18:00:28.485712 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 18:00:28.485712 initrd-setup-root-after-ignition[1015]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 14 18:00:28.488837 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 18:00:28.488453 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 18:00:28.490195 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 14 18:00:28.493179 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 14 18:00:28.526384 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 18:00:28.526526 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 14 18:00:28.528806 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 14 18:00:28.530758 systemd[1]: Reached target initrd.target - Initrd Default Target. May 14 18:00:28.532651 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 14 18:00:28.533594 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 14 18:00:28.548633 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 18:00:28.551263 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 14 18:00:28.570612 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 14 18:00:28.571915 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 18:00:28.573749 systemd[1]: Stopped target timers.target - Timer Units. May 14 18:00:28.575287 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 18:00:28.575450 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 18:00:28.577628 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 14 18:00:28.579380 systemd[1]: Stopped target basic.target - Basic System. May 14 18:00:28.580952 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 14 18:00:28.582513 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 14 18:00:28.584620 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 14 18:00:28.586690 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
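[The file, link, and unit operations recorded by the Ignition files stage above come from a config supplied by the qemu platform; the config itself is never printed in the journal. The following is a minimal, hypothetical Butane sketch of a config that would produce a subset of those operations. The URLs, target paths, link target, and unit names are taken from the log; the Butane variant/version, the SSH key placeholder, the update.conf contents, and the omitted unit bodies are assumptions.]

  # Hypothetical Butane sketch (transpiles to Ignition JSON with the butane tool)
  variant: flatcar
  version: 1.0.0
  passwd:
    users:
      - name: core
        ssh_authorized_keys:
          - ssh-ed25519 AAAA...placeholder        # key not shown in the journal
  storage:
    files:
      - path: /opt/helm-v3.13.2-linux-arm64.tar.gz
        contents:
          source: https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz
      - path: /opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw
        contents:
          source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw
      - path: /etc/flatcar/update.conf
        contents:
          inline: |
            REBOOT_STRATEGY=off                    # assumed contents, not from the log
    links:
      - path: /etc/extensions/kubernetes.raw
        target: /opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw
  systemd:
    units:
      - name: prepare-helm.service
        enabled: true                              # unit body written by the real config, omitted here
      - name: coreos-metadata.service
        enabled: false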
May 14 18:00:28.588493 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 14 18:00:28.590169 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 14 18:00:28.592183 systemd[1]: Stopped target sysinit.target - System Initialization. May 14 18:00:28.594453 systemd[1]: Stopped target local-fs.target - Local File Systems. May 14 18:00:28.596368 systemd[1]: Stopped target swap.target - Swaps. May 14 18:00:28.597954 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 18:00:28.598088 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 14 18:00:28.600446 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 14 18:00:28.602407 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 18:00:28.604390 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 14 18:00:28.605364 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 18:00:28.606611 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 18:00:28.606743 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 14 18:00:28.609585 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 18:00:28.609719 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 14 18:00:28.611638 systemd[1]: Stopped target paths.target - Path Units. May 14 18:00:28.613185 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 18:00:28.614028 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 18:00:28.615397 systemd[1]: Stopped target slices.target - Slice Units. May 14 18:00:28.617234 systemd[1]: Stopped target sockets.target - Socket Units. May 14 18:00:28.618923 systemd[1]: iscsid.socket: Deactivated successfully. May 14 18:00:28.619017 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 14 18:00:28.620722 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 18:00:28.620800 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 18:00:28.622999 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 18:00:28.623126 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 18:00:28.624851 systemd[1]: ignition-files.service: Deactivated successfully. May 14 18:00:28.624983 systemd[1]: Stopped ignition-files.service - Ignition (files). May 14 18:00:28.627359 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 14 18:00:28.629726 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 14 18:00:28.631419 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 18:00:28.631557 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 14 18:00:28.633514 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 18:00:28.633617 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 14 18:00:28.640115 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 18:00:28.640208 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 14 18:00:28.649974 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
May 14 18:00:28.654847 ignition[1040]: INFO : Ignition 2.21.0 May 14 18:00:28.654847 ignition[1040]: INFO : Stage: umount May 14 18:00:28.654847 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 18:00:28.654847 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 18:00:28.658695 ignition[1040]: INFO : umount: umount passed May 14 18:00:28.658695 ignition[1040]: INFO : Ignition finished successfully May 14 18:00:28.657715 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 18:00:28.657831 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 14 18:00:28.660929 systemd[1]: Stopped target network.target - Network. May 14 18:00:28.661950 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 18:00:28.662029 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 14 18:00:28.663664 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 18:00:28.663714 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 14 18:00:28.665563 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 18:00:28.665622 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 14 18:00:28.667368 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 14 18:00:28.667414 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 14 18:00:28.669529 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 14 18:00:28.671001 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 14 18:00:28.673000 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 18:00:28.673103 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 14 18:00:28.674795 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 18:00:28.674846 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 14 18:00:28.678188 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 18:00:28.679590 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 14 18:00:28.683558 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 14 18:00:28.683788 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 18:00:28.683885 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 14 18:00:28.686659 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 14 18:00:28.687265 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 14 18:00:28.689402 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 18:00:28.689442 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 14 18:00:28.692091 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 14 18:00:28.693103 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 18:00:28.693167 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 18:00:28.699663 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 18:00:28.699725 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 18:00:28.702855 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 18:00:28.702911 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
May 14 18:00:28.704992 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 14 18:00:28.705045 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 18:00:28.708405 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 18:00:28.714753 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 18:00:28.714819 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 14 18:00:28.726994 systemd[1]: network-cleanup.service: Deactivated successfully. May 14 18:00:28.727103 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 14 18:00:28.729953 systemd[1]: systemd-udevd.service: Deactivated successfully. May 14 18:00:28.730110 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 18:00:28.732615 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 18:00:28.732656 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 14 18:00:28.734131 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 14 18:00:28.734166 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 14 18:00:28.735949 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 18:00:28.736006 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 14 18:00:28.739005 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 18:00:28.739057 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 14 18:00:28.741853 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 18:00:28.741906 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 18:00:28.745533 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 14 18:00:28.746742 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 14 18:00:28.746802 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 14 18:00:28.749596 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 14 18:00:28.749646 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 18:00:28.754268 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 14 18:00:28.754340 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 18:00:28.757835 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 18:00:28.757883 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 14 18:00:28.760313 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 18:00:28.760415 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:00:28.764576 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. May 14 18:00:28.764623 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. May 14 18:00:28.764653 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 14 18:00:28.764685 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
May 14 18:00:28.765853 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 14 18:00:28.767373 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 14 18:00:28.768753 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 14 18:00:28.771145 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 14 18:00:28.790802 systemd[1]: Switching root. May 14 18:00:28.822446 systemd-journald[244]: Journal stopped May 14 18:00:29.612474 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). May 14 18:00:29.612523 kernel: SELinux: policy capability network_peer_controls=1 May 14 18:00:29.612534 kernel: SELinux: policy capability open_perms=1 May 14 18:00:29.612548 kernel: SELinux: policy capability extended_socket_class=1 May 14 18:00:29.612557 kernel: SELinux: policy capability always_check_network=0 May 14 18:00:29.612569 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 18:00:29.612580 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 18:00:29.612590 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 18:00:29.612599 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 18:00:29.612609 kernel: SELinux: policy capability userspace_initial_context=0 May 14 18:00:29.612618 kernel: audit: type=1403 audit(1747245628.996:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 18:00:29.612628 systemd[1]: Successfully loaded SELinux policy in 39.620ms. May 14 18:00:29.612649 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.597ms. May 14 18:00:29.612660 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 18:00:29.612670 systemd[1]: Detected virtualization kvm. May 14 18:00:29.612680 systemd[1]: Detected architecture arm64. May 14 18:00:29.612691 systemd[1]: Detected first boot. May 14 18:00:29.612701 systemd[1]: Initializing machine ID from VM UUID. May 14 18:00:29.612711 zram_generator::config[1086]: No configuration found. May 14 18:00:29.612721 kernel: NET: Registered PF_VSOCK protocol family May 14 18:00:29.612730 systemd[1]: Populated /etc with preset unit settings. May 14 18:00:29.612741 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 14 18:00:29.612751 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 14 18:00:29.612761 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 14 18:00:29.612772 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 14 18:00:29.612783 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 14 18:00:29.612794 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 14 18:00:29.612804 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 14 18:00:29.612832 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 14 18:00:29.612842 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 14 18:00:29.612852 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
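[The SELinux lines above are the policy load that happens during the pivot from the initramfs to the real root. The resulting state can be checked on the running system with something like the following; getenforce requires the SELinux userspace tools, while the /sys interface needs nothing extra.]

  getenforce                        # Enforcing / Permissive / Disabled
  cat /sys/fs/selinux/enforce       # 1 = enforcing, 0 = permissive
  journalctl -k | grep -i selinux   # re-reads the kernel messages quoted above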
May 14 18:00:29.612862 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 14 18:00:29.612871 systemd[1]: Created slice user.slice - User and Session Slice. May 14 18:00:29.612882 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 18:00:29.612893 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 18:00:29.612902 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 14 18:00:29.612912 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 14 18:00:29.612922 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 14 18:00:29.612932 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 18:00:29.612946 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 14 18:00:29.612962 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 18:00:29.612976 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 18:00:29.612991 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 14 18:00:29.613006 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 14 18:00:29.613019 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 14 18:00:29.613032 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 14 18:00:29.613045 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 18:00:29.613061 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 18:00:29.613087 systemd[1]: Reached target slices.target - Slice Units. May 14 18:00:29.613097 systemd[1]: Reached target swap.target - Swaps. May 14 18:00:29.613108 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 14 18:00:29.613118 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 14 18:00:29.613129 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 14 18:00:29.613148 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 18:00:29.613160 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 18:00:29.613172 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 18:00:29.613183 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 14 18:00:29.613192 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 14 18:00:29.613202 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 14 18:00:29.613220 systemd[1]: Mounting media.mount - External Media Directory... May 14 18:00:29.613230 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 14 18:00:29.613239 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 14 18:00:29.613250 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 14 18:00:29.613260 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 18:00:29.613270 systemd[1]: Reached target machines.target - Containers. 
May 14 18:00:29.613280 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 14 18:00:29.613299 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 18:00:29.613313 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 18:00:29.613323 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 14 18:00:29.613333 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 18:00:29.613343 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 18:00:29.613353 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 18:00:29.613363 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 14 18:00:29.613373 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 18:00:29.613383 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 18:00:29.613393 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 14 18:00:29.613404 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 14 18:00:29.613414 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 14 18:00:29.613424 systemd[1]: Stopped systemd-fsck-usr.service. May 14 18:00:29.613434 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 18:00:29.613444 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 18:00:29.613454 kernel: loop: module loaded May 14 18:00:29.613464 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 18:00:29.613474 kernel: fuse: init (API version 7.41) May 14 18:00:29.613484 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 18:00:29.613495 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 14 18:00:29.613506 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 14 18:00:29.613517 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 18:00:29.613527 systemd[1]: verity-setup.service: Deactivated successfully. May 14 18:00:29.613538 systemd[1]: Stopped verity-setup.service. May 14 18:00:29.613548 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 14 18:00:29.613558 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 14 18:00:29.613567 kernel: ACPI: bus type drm_connector registered May 14 18:00:29.613599 systemd-journald[1154]: Collecting audit messages is disabled. May 14 18:00:29.613622 systemd-journald[1154]: Journal started May 14 18:00:29.613642 systemd-journald[1154]: Runtime Journal (/run/log/journal/4414168253964a0091523e915f6f65ab) is 6M, max 48.5M, 42.4M free. May 14 18:00:29.401683 systemd[1]: Queued start job for default target multi-user.target. May 14 18:00:29.427402 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 14 18:00:29.427802 systemd[1]: systemd-journald.service: Deactivated successfully. 
May 14 18:00:29.617718 systemd[1]: Started systemd-journald.service - Journal Service. May 14 18:00:29.618464 systemd[1]: Mounted media.mount - External Media Directory. May 14 18:00:29.619506 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 14 18:00:29.620420 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 14 18:00:29.621388 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 14 18:00:29.623381 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 14 18:00:29.624924 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 18:00:29.626543 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 14 18:00:29.626720 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 14 18:00:29.628184 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 18:00:29.628383 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 18:00:29.629819 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 18:00:29.629987 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 18:00:29.631400 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 18:00:29.631556 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 18:00:29.633040 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 14 18:00:29.633193 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 14 18:00:29.634802 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 18:00:29.634961 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 18:00:29.636445 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 18:00:29.637996 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 14 18:00:29.639593 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 14 18:00:29.641152 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 14 18:00:29.655132 systemd[1]: Reached target network-pre.target - Preparation for Network. May 14 18:00:29.657873 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 14 18:00:29.660150 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 14 18:00:29.661407 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 14 18:00:29.661441 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 18:00:29.663455 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 14 18:00:29.669196 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 14 18:00:29.670508 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 18:00:29.671898 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 14 18:00:29.673999 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 14 18:00:29.675258 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
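[The journald messages above describe the small runtime journal in /run/log/journal; the flush to the persistent journal under /var/log/journal is requested here and completes just below. After boot the same entries can be read back, for example:]

  journalctl --list-boots                          # find the boot ID for this boot
  journalctl -b -u systemd-journal-flush.service   # the flush recorded in this log
  journalctl -b -o short-precise | head            # microsecond timestamps, like the lines above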
May 14 18:00:29.678461 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 14 18:00:29.679733 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 18:00:29.683345 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 18:00:29.684286 systemd-journald[1154]: Time spent on flushing to /var/log/journal/4414168253964a0091523e915f6f65ab is 17.118ms for 890 entries. May 14 18:00:29.684286 systemd-journald[1154]: System Journal (/var/log/journal/4414168253964a0091523e915f6f65ab) is 8M, max 195.6M, 187.6M free. May 14 18:00:29.708577 systemd-journald[1154]: Received client request to flush runtime journal. May 14 18:00:29.698541 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 14 18:00:29.705180 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 14 18:00:29.711821 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 18:00:29.713566 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 14 18:00:29.715123 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 14 18:00:29.716751 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 14 18:00:29.719501 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 14 18:00:29.722692 kernel: loop0: detected capacity change from 0 to 194096 May 14 18:00:29.725127 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 18:00:29.727847 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 14 18:00:29.732502 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. May 14 18:00:29.732752 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. May 14 18:00:29.733487 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 14 18:00:29.737900 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 14 18:00:29.737649 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 18:00:29.746498 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 14 18:00:29.761324 kernel: loop1: detected capacity change from 0 to 107312 May 14 18:00:29.762559 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 14 18:00:29.782376 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 14 18:00:29.785206 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 18:00:29.796663 kernel: loop2: detected capacity change from 0 to 138376 May 14 18:00:29.807327 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. May 14 18:00:29.808596 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. May 14 18:00:29.813195 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 18:00:29.822321 kernel: loop3: detected capacity change from 0 to 194096 May 14 18:00:29.829500 kernel: loop4: detected capacity change from 0 to 107312 May 14 18:00:29.838414 kernel: loop5: detected capacity change from 0 to 138376 May 14 18:00:29.847775 (sd-merge)[1229]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. 
May 14 18:00:29.848175 (sd-merge)[1229]: Merged extensions into '/usr'. May 14 18:00:29.851815 systemd[1]: Reload requested from client PID 1202 ('systemd-sysext') (unit systemd-sysext.service)... May 14 18:00:29.851835 systemd[1]: Reloading... May 14 18:00:29.921335 zram_generator::config[1255]: No configuration found. May 14 18:00:29.976663 ldconfig[1197]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 14 18:00:30.015231 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 18:00:30.078676 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 14 18:00:30.078858 systemd[1]: Reloading finished in 226 ms. May 14 18:00:30.109944 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 14 18:00:30.111518 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 14 18:00:30.122764 systemd[1]: Starting ensure-sysext.service... May 14 18:00:30.124824 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 18:00:30.141214 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 14 18:00:30.141249 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 14 18:00:30.141499 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 14 18:00:30.141691 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 14 18:00:30.142279 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 14 18:00:30.142497 systemd-tmpfiles[1290]: ACLs are not supported, ignoring. May 14 18:00:30.142544 systemd-tmpfiles[1290]: ACLs are not supported, ignoring. May 14 18:00:30.143554 systemd[1]: Reload requested from client PID 1289 ('systemctl') (unit ensure-sysext.service)... May 14 18:00:30.143580 systemd[1]: Reloading... May 14 18:00:30.145328 systemd-tmpfiles[1290]: Detected autofs mount point /boot during canonicalization of boot. May 14 18:00:30.145340 systemd-tmpfiles[1290]: Skipping /boot May 14 18:00:30.154031 systemd-tmpfiles[1290]: Detected autofs mount point /boot during canonicalization of boot. May 14 18:00:30.154047 systemd-tmpfiles[1290]: Skipping /boot May 14 18:00:30.192331 zram_generator::config[1317]: No configuration found. May 14 18:00:30.255991 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 18:00:30.318332 systemd[1]: Reloading finished in 174 ms. May 14 18:00:30.341958 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 14 18:00:30.347787 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 18:00:30.356320 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 18:00:30.358977 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 14 18:00:30.376357 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
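[The (sd-merge) lines above are systemd-sysext discovering the extension images (the kubernetes.raw link written by Ignition earlier, plus the containerd-flatcar and docker-flatcar extensions) and overlaying them onto /usr, followed by the reload. The merge state can be inspected or redone by hand:]

  ls -l /etc/extensions/       # kubernetes.raw -> /opt/extensions/... (written by Ignition above)
  systemd-sysext status        # which images are currently merged into /usr and /opt
  systemd-sysext refresh       # unmerge and re-merge after adding or removing an image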
May 14 18:00:30.379601 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 18:00:30.383036 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 18:00:30.386421 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 14 18:00:30.392628 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 18:00:30.394675 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 18:00:30.397956 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 18:00:30.404358 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 18:00:30.405740 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 18:00:30.405873 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 18:00:30.409597 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 14 18:00:30.414331 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 14 18:00:30.416429 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 18:00:30.416580 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 18:00:30.418158 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 18:00:30.418333 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 18:00:30.420139 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 18:00:30.420894 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 18:00:30.430826 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 18:00:30.432597 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 18:00:30.432765 systemd-udevd[1358]: Using default interface naming scheme 'v255'. May 14 18:00:30.436069 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 18:00:30.442703 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 18:00:30.443909 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 18:00:30.444106 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 18:00:30.445541 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 14 18:00:30.447884 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 18:00:30.449322 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 18:00:30.451140 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 18:00:30.451348 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 18:00:30.456607 augenrules[1390]: No rules May 14 18:00:30.457568 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
May 14 18:00:30.459788 systemd[1]: audit-rules.service: Deactivated successfully. May 14 18:00:30.461324 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 18:00:30.462845 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 18:00:30.462996 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 18:00:30.467515 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 18:00:30.469494 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 14 18:00:30.472381 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 14 18:00:30.474365 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 14 18:00:30.497331 systemd[1]: Finished ensure-sysext.service. May 14 18:00:30.508477 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 18:00:30.511638 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 18:00:30.512949 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 18:00:30.520675 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 18:00:30.523385 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 18:00:30.533011 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 18:00:30.535165 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 18:00:30.535230 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 18:00:30.538517 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 18:00:30.541217 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 14 18:00:30.542477 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 18:00:30.551779 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 18:00:30.553318 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 18:00:30.553482 augenrules[1434]: /sbin/augenrules: No change May 14 18:00:30.555017 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 18:00:30.555502 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 18:00:30.557552 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 18:00:30.561631 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 18:00:30.561927 augenrules[1465]: No rules May 14 18:00:30.565732 systemd[1]: audit-rules.service: Deactivated successfully. May 14 18:00:30.565921 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 18:00:30.567199 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 18:00:30.567404 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 18:00:30.582219 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 14 18:00:30.586716 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
May 14 18:00:30.589364 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 14 18:00:30.590496 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 18:00:30.590567 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 18:00:30.614019 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 14 18:00:30.685469 systemd-resolved[1357]: Positive Trust Anchors: May 14 18:00:30.685488 systemd-resolved[1357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 18:00:30.685520 systemd-resolved[1357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 18:00:30.700978 systemd-resolved[1357]: Defaulting to hostname 'linux'. May 14 18:00:30.701479 systemd-networkd[1447]: lo: Link UP May 14 18:00:30.701483 systemd-networkd[1447]: lo: Gained carrier May 14 18:00:30.702825 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 18:00:30.704554 systemd-networkd[1447]: Enumeration completed May 14 18:00:30.704756 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 18:00:30.705038 systemd-networkd[1447]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 18:00:30.705042 systemd-networkd[1447]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 18:00:30.705616 systemd-networkd[1447]: eth0: Link UP May 14 18:00:30.705728 systemd-networkd[1447]: eth0: Gained carrier May 14 18:00:30.705743 systemd-networkd[1447]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 18:00:30.706576 systemd[1]: Reached target network.target - Network. May 14 18:00:30.708148 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 18:00:30.714587 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 14 18:00:30.718111 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 14 18:00:30.719492 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 14 18:00:30.724617 systemd[1]: Reached target time-set.target - System Time Set. May 14 18:00:30.729373 systemd-networkd[1447]: eth0: DHCPv4 address 10.0.0.60/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 18:00:30.729929 systemd-timesyncd[1454]: Network configuration changed, trying to establish connection. May 14 18:00:30.730943 systemd-timesyncd[1454]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 14 18:00:30.730994 systemd-timesyncd[1454]: Initial clock synchronization to Wed 2025-05-14 18:00:30.948153 UTC. 
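[Above, eth0 is matched by the stock /usr/lib/systemd/network/zz-default.network and configured over DHCPv4 (10.0.0.60/16, gateway 10.0.0.1). A host-specific override would be a higher-priority .network file; a minimal sketch follows. The file name and interface match are illustrative, and the shipped zz-default.network contents are not reproduced in this journal.]

  # /etc/systemd/network/50-dhcp.network  (hypothetical)
  [Match]
  Name=eth0

  [Network]
  DHCP=yes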
May 14 18:00:30.733728 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 18:00:30.743029 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 14 18:00:30.780783 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:00:30.782268 systemd[1]: Reached target sysinit.target - System Initialization. May 14 18:00:30.783435 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 14 18:00:30.784682 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 14 18:00:30.786001 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 18:00:30.787144 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 14 18:00:30.788371 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 14 18:00:30.789716 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 18:00:30.789754 systemd[1]: Reached target paths.target - Path Units. May 14 18:00:30.790649 systemd[1]: Reached target timers.target - Timer Units. May 14 18:00:30.792589 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 18:00:30.794985 systemd[1]: Starting docker.socket - Docker Socket for the API... May 14 18:00:30.798420 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 18:00:30.799774 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 14 18:00:30.801013 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 14 18:00:30.804225 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 14 18:00:30.805843 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 14 18:00:30.807605 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 14 18:00:30.808754 systemd[1]: Reached target sockets.target - Socket Units. May 14 18:00:30.809703 systemd[1]: Reached target basic.target - Basic System. May 14 18:00:30.810711 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 14 18:00:30.810745 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 18:00:30.811756 systemd[1]: Starting containerd.service - containerd container runtime... May 14 18:00:30.813813 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 14 18:00:30.815746 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 14 18:00:30.817831 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 14 18:00:30.819888 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 14 18:00:30.821036 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 18:00:30.822001 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 14 18:00:30.824185 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
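[prepare-helm.service, started above, was written and preset-enabled by Ignition during the files stage; per its description it unpacks the helm tarball fetched to /opt into /opt/bin. The unit body is not captured in this journal, so the following is only a plausible sketch; paths, the condition, and the tar options are assumptions.]

  [Unit]
  Description=Unpack helm to /opt/bin
  ConditionPathExists=!/opt/bin/helm

  [Service]
  Type=oneshot
  RemainAfterExit=true
  ExecStartPre=/usr/bin/mkdir -p /opt/bin
  ExecStart=/usr/bin/tar -C /opt/bin --strip-components=1 -xzf /opt/helm-v3.13.2-linux-arm64.tar.gz linux-arm64/helm

  [Install]
  WantedBy=multi-user.target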
May 14 18:00:30.827517 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 14 18:00:30.828875 jq[1509]: false May 14 18:00:30.829578 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 14 18:00:30.839439 systemd[1]: Starting systemd-logind.service - User Login Management... May 14 18:00:30.841378 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 18:00:30.841825 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 18:00:30.841999 extend-filesystems[1510]: Found loop3 May 14 18:00:30.842403 systemd[1]: Starting update-engine.service - Update Engine... May 14 18:00:30.843402 extend-filesystems[1510]: Found loop4 May 14 18:00:30.843402 extend-filesystems[1510]: Found loop5 May 14 18:00:30.843402 extend-filesystems[1510]: Found vda May 14 18:00:30.843402 extend-filesystems[1510]: Found vda1 May 14 18:00:30.843402 extend-filesystems[1510]: Found vda2 May 14 18:00:30.843402 extend-filesystems[1510]: Found vda3 May 14 18:00:30.843402 extend-filesystems[1510]: Found usr May 14 18:00:30.843402 extend-filesystems[1510]: Found vda4 May 14 18:00:30.843402 extend-filesystems[1510]: Found vda6 May 14 18:00:30.843402 extend-filesystems[1510]: Found vda7 May 14 18:00:30.843402 extend-filesystems[1510]: Found vda9 May 14 18:00:30.843402 extend-filesystems[1510]: Checking size of /dev/vda9 May 14 18:00:30.844983 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 18:00:30.851825 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 14 18:00:30.853466 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 18:00:30.863624 jq[1526]: true May 14 18:00:30.853641 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 18:00:30.853871 systemd[1]: motdgen.service: Deactivated successfully. May 14 18:00:30.854022 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 14 18:00:30.856082 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 18:00:30.856251 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 14 18:00:30.870514 extend-filesystems[1510]: Resized partition /dev/vda9 May 14 18:00:30.877270 jq[1531]: true May 14 18:00:30.880959 (ntainerd)[1542]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 18:00:30.885440 extend-filesystems[1541]: resize2fs 1.47.2 (1-Jan-2025) May 14 18:00:30.897324 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 14 18:00:30.898502 update_engine[1523]: I20250514 18:00:30.897827 1523 main.cc:92] Flatcar Update Engine starting May 14 18:00:30.902826 tar[1530]: linux-arm64/helm May 14 18:00:30.923594 dbus-daemon[1507]: [system] SELinux support is enabled May 14 18:00:30.924314 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 14 18:00:30.927252 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
May 14 18:00:30.927287 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 14 18:00:30.930464 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 18:00:30.930495 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 14 18:00:30.930873 update_engine[1523]: I20250514 18:00:30.930811 1523 update_check_scheduler.cc:74] Next update check in 2m33s May 14 18:00:30.933018 systemd[1]: Started update-engine.service - Update Engine. May 14 18:00:30.935969 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 14 18:00:30.946156 systemd-logind[1517]: Watching system buttons on /dev/input/event0 (Power Button) May 14 18:00:30.947569 systemd-logind[1517]: New seat seat0. May 14 18:00:30.948644 systemd[1]: Started systemd-logind.service - User Login Management. May 14 18:00:30.957190 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 14 18:00:30.972404 extend-filesystems[1541]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 14 18:00:30.972404 extend-filesystems[1541]: old_desc_blocks = 1, new_desc_blocks = 1 May 14 18:00:30.972404 extend-filesystems[1541]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 14 18:00:30.978267 extend-filesystems[1510]: Resized filesystem in /dev/vda9 May 14 18:00:30.975120 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 18:00:30.975383 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 18:00:30.991532 bash[1563]: Updated "/home/core/.ssh/authorized_keys" May 14 18:00:30.997028 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 18:00:31.002187 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
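[extend-filesystems.service above walks the block devices, finds the root filesystem on /dev/vda9 smaller than its partition, and grows the mounted ext4 filesystem online with resize2fs: 553472 to 1864699 4k blocks is roughly 2.1 GiB to 7.1 GiB. The equivalent done by hand:]

  lsblk -b /dev/vda9     # partition size as the kernel sees it
  resize2fs /dev/vda9    # online grow of the mounted ext4 root filesystem
  df -h /                # confirm the new size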
May 14 18:00:31.016054 locksmithd[1562]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 18:00:31.129575 containerd[1542]: time="2025-05-14T18:00:31Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 14 18:00:31.133862 containerd[1542]: time="2025-05-14T18:00:31.133810410Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 14 18:00:31.145337 containerd[1542]: time="2025-05-14T18:00:31.144733333Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.97µs" May 14 18:00:31.145337 containerd[1542]: time="2025-05-14T18:00:31.144776187Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 14 18:00:31.145337 containerd[1542]: time="2025-05-14T18:00:31.144795538Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 14 18:00:31.145337 containerd[1542]: time="2025-05-14T18:00:31.144976730Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 14 18:00:31.145337 containerd[1542]: time="2025-05-14T18:00:31.144991932Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 14 18:00:31.145337 containerd[1542]: time="2025-05-14T18:00:31.145018309Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 18:00:31.145337 containerd[1542]: time="2025-05-14T18:00:31.145084376Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 18:00:31.145337 containerd[1542]: time="2025-05-14T18:00:31.145096250Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 18:00:31.145640 containerd[1542]: time="2025-05-14T18:00:31.145613241Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 18:00:31.145701 containerd[1542]: time="2025-05-14T18:00:31.145686458Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 18:00:31.145750 containerd[1542]: time="2025-05-14T18:00:31.145737980Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 18:00:31.145794 containerd[1542]: time="2025-05-14T18:00:31.145783463Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 14 18:00:31.145943 containerd[1542]: time="2025-05-14T18:00:31.145924348Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 14 18:00:31.146271 containerd[1542]: time="2025-05-14T18:00:31.146243960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 18:00:31.146405 containerd[1542]: time="2025-05-14T18:00:31.146386900Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 18:00:31.146460 containerd[1542]: time="2025-05-14T18:00:31.146446557Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 14 18:00:31.146545 containerd[1542]: time="2025-05-14T18:00:31.146531935Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 14 18:00:31.146885 containerd[1542]: time="2025-05-14T18:00:31.146866626Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 14 18:00:31.147032 containerd[1542]: time="2025-05-14T18:00:31.147014332Z" level=info msg="metadata content store policy set" policy=shared May 14 18:00:31.154254 containerd[1542]: time="2025-05-14T18:00:31.154216011Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 14 18:00:31.154435 containerd[1542]: time="2025-05-14T18:00:31.154412035Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 14 18:00:31.154524 containerd[1542]: time="2025-05-14T18:00:31.154509615Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 14 18:00:31.154603 containerd[1542]: time="2025-05-14T18:00:31.154589323Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 14 18:00:31.154660 containerd[1542]: time="2025-05-14T18:00:31.154647748Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 14 18:00:31.154721 containerd[1542]: time="2025-05-14T18:00:31.154708022Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 14 18:00:31.154785 containerd[1542]: time="2025-05-14T18:00:31.154771500Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 14 18:00:31.154837 containerd[1542]: time="2025-05-14T18:00:31.154824995Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 14 18:00:31.154888 containerd[1542]: time="2025-05-14T18:00:31.154876887Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 14 18:00:31.154937 containerd[1542]: time="2025-05-14T18:00:31.154925328Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 14 18:00:31.154986 containerd[1542]: time="2025-05-14T18:00:31.154974221Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 14 18:00:31.155037 containerd[1542]: time="2025-05-14T18:00:31.155026113Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 14 18:00:31.155233 containerd[1542]: time="2025-05-14T18:00:31.155212646Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 14 18:00:31.155336 containerd[1542]: time="2025-05-14T18:00:31.155293709Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 14 18:00:31.155402 containerd[1542]: time="2025-05-14T18:00:31.155388044Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 14 
18:00:31.155451 containerd[1542]: time="2025-05-14T18:00:31.155440100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 14 18:00:31.155503 containerd[1542]: time="2025-05-14T18:00:31.155492116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 14 18:00:31.155552 containerd[1542]: time="2025-05-14T18:00:31.155540721Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 14 18:00:31.155634 containerd[1542]: time="2025-05-14T18:00:31.155619731Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 14 18:00:31.155695 containerd[1542]: time="2025-05-14T18:00:31.155682511Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 14 18:00:31.155746 containerd[1542]: time="2025-05-14T18:00:31.155735019Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 14 18:00:31.155795 containerd[1542]: time="2025-05-14T18:00:31.155783460Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 14 18:00:31.155854 containerd[1542]: time="2025-05-14T18:00:31.155841063Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 14 18:00:31.156092 containerd[1542]: time="2025-05-14T18:00:31.156076900Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 14 18:00:31.156158 containerd[1542]: time="2025-05-14T18:00:31.156146131Z" level=info msg="Start snapshots syncer" May 14 18:00:31.156234 containerd[1542]: time="2025-05-14T18:00:31.156220826Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 14 18:00:31.158380 containerd[1542]: time="2025-05-14T18:00:31.158320550Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 14 18:00:31.158882 containerd[1542]: time="2025-05-14T18:00:31.158856729Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 14 18:00:31.159084 containerd[1542]: time="2025-05-14T18:00:31.159052835Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 14 18:00:31.159521 containerd[1542]: time="2025-05-14T18:00:31.159312255Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 18:00:31.159521 containerd[1542]: time="2025-05-14T18:00:31.159345124Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 18:00:31.159521 containerd[1542]: time="2025-05-14T18:00:31.159356957Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 18:00:31.159521 containerd[1542]: time="2025-05-14T18:00:31.159368872Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 18:00:31.159521 containerd[1542]: time="2025-05-14T18:00:31.159381814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 18:00:31.159521 containerd[1542]: time="2025-05-14T18:00:31.159393154Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 18:00:31.159521 containerd[1542]: time="2025-05-14T18:00:31.159404247Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 18:00:31.159521 containerd[1542]: time="2025-05-14T18:00:31.159441554Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 18:00:31.159521 containerd[1542]: 
time="2025-05-14T18:00:31.159464644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 18:00:31.159521 containerd[1542]: time="2025-05-14T18:00:31.159478326Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 14 18:00:31.159774 containerd[1542]: time="2025-05-14T18:00:31.159756194Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 18:00:31.159904 containerd[1542]: time="2025-05-14T18:00:31.159885986Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 18:00:31.160034 containerd[1542]: time="2025-05-14T18:00:31.159997002Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 18:00:31.160096 containerd[1542]: time="2025-05-14T18:00:31.160082791Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 18:00:31.160378 containerd[1542]: time="2025-05-14T18:00:31.160130574Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 18:00:31.160378 containerd[1542]: time="2025-05-14T18:00:31.160148817Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 18:00:31.160378 containerd[1542]: time="2025-05-14T18:00:31.160170058Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 18:00:31.160378 containerd[1542]: time="2025-05-14T18:00:31.160251492Z" level=info msg="runtime interface created" May 14 18:00:31.160378 containerd[1542]: time="2025-05-14T18:00:31.160256751Z" level=info msg="created NRI interface" May 14 18:00:31.160378 containerd[1542]: time="2025-05-14T18:00:31.160271583Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 18:00:31.160378 containerd[1542]: time="2025-05-14T18:00:31.160286333Z" level=info msg="Connect containerd service" May 14 18:00:31.160378 containerd[1542]: time="2025-05-14T18:00:31.160340855Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 18:00:31.161430 containerd[1542]: time="2025-05-14T18:00:31.161400639Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 18:00:31.278122 containerd[1542]: time="2025-05-14T18:00:31.277953617Z" level=info msg="Start subscribing containerd event" May 14 18:00:31.278122 containerd[1542]: time="2025-05-14T18:00:31.278029052Z" level=info msg="Start recovering state" May 14 18:00:31.278122 containerd[1542]: time="2025-05-14T18:00:31.278118127Z" level=info msg="Start event monitor" May 14 18:00:31.278122 containerd[1542]: time="2025-05-14T18:00:31.278139698Z" level=info msg="Start cni network conf syncer for default" May 14 18:00:31.278122 containerd[1542]: time="2025-05-14T18:00:31.278151408Z" level=info msg="Start streaming server" May 14 18:00:31.278122 containerd[1542]: time="2025-05-14T18:00:31.278162090Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 18:00:31.278122 containerd[1542]: 
time="2025-05-14T18:00:31.278169157Z" level=info msg="runtime interface starting up..." May 14 18:00:31.278122 containerd[1542]: time="2025-05-14T18:00:31.278175073Z" level=info msg="starting plugins..." May 14 18:00:31.278122 containerd[1542]: time="2025-05-14T18:00:31.278246934Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 18:00:31.278777 containerd[1542]: time="2025-05-14T18:00:31.278729001Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 18:00:31.278925 containerd[1542]: time="2025-05-14T18:00:31.278885212Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 18:00:31.280486 containerd[1542]: time="2025-05-14T18:00:31.280443704Z" level=info msg="containerd successfully booted in 0.151356s" May 14 18:00:31.280585 systemd[1]: Started containerd.service - containerd container runtime. May 14 18:00:31.281917 tar[1530]: linux-arm64/LICENSE May 14 18:00:31.282316 tar[1530]: linux-arm64/README.md May 14 18:00:31.303990 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 18:00:31.695914 sshd_keygen[1528]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 18:00:31.716331 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 18:00:31.719421 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 18:00:31.738421 systemd[1]: issuegen.service: Deactivated successfully. May 14 18:00:31.738661 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 18:00:31.741691 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 18:00:31.763034 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 18:00:31.766248 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 18:00:31.768806 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 14 18:00:31.770373 systemd[1]: Reached target getty.target - Login Prompts. May 14 18:00:31.794469 systemd-networkd[1447]: eth0: Gained IPv6LL May 14 18:00:31.797006 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 18:00:31.798943 systemd[1]: Reached target network-online.target - Network is Online. May 14 18:00:31.801689 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 14 18:00:31.828758 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:00:31.831335 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 18:00:31.848746 systemd[1]: coreos-metadata.service: Deactivated successfully. May 14 18:00:31.850377 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 14 18:00:31.852515 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 18:00:31.855557 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 18:00:32.357172 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:00:32.359008 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 18:00:32.361215 (kubelet)[1635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 18:00:32.366545 systemd[1]: Startup finished in 2.128s (kernel) + 5.363s (initrd) + 3.415s (userspace) = 10.907s. 
May 14 18:00:32.847599 kubelet[1635]: E0514 18:00:32.847481 1635 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 18:00:32.850116 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 18:00:32.850276 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 18:00:32.850838 systemd[1]: kubelet.service: Consumed 818ms CPU time, 238.5M memory peak. May 14 18:00:37.181235 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 18:00:37.182656 systemd[1]: Started sshd@0-10.0.0.60:22-10.0.0.1:41514.service - OpenSSH per-connection server daemon (10.0.0.1:41514). May 14 18:00:37.242719 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 41514 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:00:37.244774 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:00:37.251073 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 18:00:37.252296 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 18:00:37.259213 systemd-logind[1517]: New session 1 of user core. May 14 18:00:37.276390 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 18:00:37.279262 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 18:00:37.299552 (systemd)[1653]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 18:00:37.301997 systemd-logind[1517]: New session c1 of user core. May 14 18:00:37.414469 systemd[1653]: Queued start job for default target default.target. May 14 18:00:37.435376 systemd[1653]: Created slice app.slice - User Application Slice. May 14 18:00:37.435408 systemd[1653]: Reached target paths.target - Paths. May 14 18:00:37.435447 systemd[1653]: Reached target timers.target - Timers. May 14 18:00:37.436781 systemd[1653]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 18:00:37.447033 systemd[1653]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 18:00:37.447096 systemd[1653]: Reached target sockets.target - Sockets. May 14 18:00:37.447139 systemd[1653]: Reached target basic.target - Basic System. May 14 18:00:37.447171 systemd[1653]: Reached target default.target - Main User Target. May 14 18:00:37.447199 systemd[1653]: Startup finished in 138ms. May 14 18:00:37.447535 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 18:00:37.449089 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 18:00:37.510870 systemd[1]: Started sshd@1-10.0.0.60:22-10.0.0.1:41522.service - OpenSSH per-connection server daemon (10.0.0.1:41522). May 14 18:00:37.558404 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 41522 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:00:37.559661 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:00:37.563729 systemd-logind[1517]: New session 2 of user core. May 14 18:00:37.570481 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 14 18:00:37.621045 sshd[1666]: Connection closed by 10.0.0.1 port 41522 May 14 18:00:37.621514 sshd-session[1664]: pam_unix(sshd:session): session closed for user core May 14 18:00:37.632367 systemd[1]: sshd@1-10.0.0.60:22-10.0.0.1:41522.service: Deactivated successfully. May 14 18:00:37.634688 systemd[1]: session-2.scope: Deactivated successfully. May 14 18:00:37.636116 systemd-logind[1517]: Session 2 logged out. Waiting for processes to exit. May 14 18:00:37.637379 systemd[1]: Started sshd@2-10.0.0.60:22-10.0.0.1:41536.service - OpenSSH per-connection server daemon (10.0.0.1:41536). May 14 18:00:37.638282 systemd-logind[1517]: Removed session 2. May 14 18:00:37.685420 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 41536 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:00:37.686757 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:00:37.692364 systemd-logind[1517]: New session 3 of user core. May 14 18:00:37.702451 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 18:00:37.751252 sshd[1674]: Connection closed by 10.0.0.1 port 41536 May 14 18:00:37.751716 sshd-session[1672]: pam_unix(sshd:session): session closed for user core May 14 18:00:37.760499 systemd[1]: sshd@2-10.0.0.60:22-10.0.0.1:41536.service: Deactivated successfully. May 14 18:00:37.762289 systemd[1]: session-3.scope: Deactivated successfully. May 14 18:00:37.762974 systemd-logind[1517]: Session 3 logged out. Waiting for processes to exit. May 14 18:00:37.765718 systemd[1]: Started sshd@3-10.0.0.60:22-10.0.0.1:41548.service - OpenSSH per-connection server daemon (10.0.0.1:41548). May 14 18:00:37.766356 systemd-logind[1517]: Removed session 3. May 14 18:00:37.826674 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 41548 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:00:37.828087 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:00:37.832375 systemd-logind[1517]: New session 4 of user core. May 14 18:00:37.839506 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 18:00:37.896101 sshd[1682]: Connection closed by 10.0.0.1 port 41548 May 14 18:00:37.896648 sshd-session[1680]: pam_unix(sshd:session): session closed for user core May 14 18:00:37.908181 systemd[1]: sshd@3-10.0.0.60:22-10.0.0.1:41548.service: Deactivated successfully. May 14 18:00:37.910987 systemd[1]: session-4.scope: Deactivated successfully. May 14 18:00:37.912976 systemd-logind[1517]: Session 4 logged out. Waiting for processes to exit. May 14 18:00:37.916432 systemd[1]: Started sshd@4-10.0.0.60:22-10.0.0.1:41552.service - OpenSSH per-connection server daemon (10.0.0.1:41552). May 14 18:00:37.917041 systemd-logind[1517]: Removed session 4. May 14 18:00:37.970403 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 41552 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:00:37.971670 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:00:37.976385 systemd-logind[1517]: New session 5 of user core. May 14 18:00:37.984480 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 14 18:00:38.057185 sudo[1692]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 18:00:38.057512 sudo[1692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:00:38.070272 sudo[1692]: pam_unix(sudo:session): session closed for user root May 14 18:00:38.072616 sshd[1691]: Connection closed by 10.0.0.1 port 41552 May 14 18:00:38.072484 sshd-session[1688]: pam_unix(sshd:session): session closed for user core May 14 18:00:38.088132 systemd[1]: sshd@4-10.0.0.60:22-10.0.0.1:41552.service: Deactivated successfully. May 14 18:00:38.089784 systemd[1]: session-5.scope: Deactivated successfully. May 14 18:00:38.090630 systemd-logind[1517]: Session 5 logged out. Waiting for processes to exit. May 14 18:00:38.094232 systemd[1]: Started sshd@5-10.0.0.60:22-10.0.0.1:41564.service - OpenSSH per-connection server daemon (10.0.0.1:41564). May 14 18:00:38.094844 systemd-logind[1517]: Removed session 5. May 14 18:00:38.156223 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 41564 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:00:38.157647 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:00:38.162020 systemd-logind[1517]: New session 6 of user core. May 14 18:00:38.179478 systemd[1]: Started session-6.scope - Session 6 of User core. May 14 18:00:38.231512 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 18:00:38.231809 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:00:38.296963 sudo[1702]: pam_unix(sudo:session): session closed for user root May 14 18:00:38.302348 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 18:00:38.302631 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:00:38.311874 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 18:00:38.353511 augenrules[1724]: No rules May 14 18:00:38.354649 systemd[1]: audit-rules.service: Deactivated successfully. May 14 18:00:38.354918 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 18:00:38.356239 sudo[1701]: pam_unix(sudo:session): session closed for user root May 14 18:00:38.358802 sshd[1700]: Connection closed by 10.0.0.1 port 41564 May 14 18:00:38.358987 sshd-session[1698]: pam_unix(sshd:session): session closed for user core May 14 18:00:38.366518 systemd[1]: sshd@5-10.0.0.60:22-10.0.0.1:41564.service: Deactivated successfully. May 14 18:00:38.368366 systemd[1]: session-6.scope: Deactivated successfully. May 14 18:00:38.369048 systemd-logind[1517]: Session 6 logged out. Waiting for processes to exit. May 14 18:00:38.372741 systemd[1]: Started sshd@6-10.0.0.60:22-10.0.0.1:41580.service - OpenSSH per-connection server daemon (10.0.0.1:41580). May 14 18:00:38.373903 systemd-logind[1517]: Removed session 6. May 14 18:00:38.422469 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 41580 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:00:38.423768 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:00:38.428564 systemd-logind[1517]: New session 7 of user core. May 14 18:00:38.435497 systemd[1]: Started session-7.scope - Session 7 of User core. 
May 14 18:00:38.487908 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 18:00:38.488666 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:00:38.877210 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 18:00:38.900685 (dockerd)[1757]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 18:00:39.170441 dockerd[1757]: time="2025-05-14T18:00:39.170296275Z" level=info msg="Starting up" May 14 18:00:39.172284 dockerd[1757]: time="2025-05-14T18:00:39.172249375Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 14 18:00:39.197095 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2153421011-merged.mount: Deactivated successfully. May 14 18:00:39.215790 dockerd[1757]: time="2025-05-14T18:00:39.215753005Z" level=info msg="Loading containers: start." May 14 18:00:39.227242 kernel: Initializing XFRM netlink socket May 14 18:00:39.456420 systemd-networkd[1447]: docker0: Link UP May 14 18:00:39.460122 dockerd[1757]: time="2025-05-14T18:00:39.460035236Z" level=info msg="Loading containers: done." May 14 18:00:39.474859 dockerd[1757]: time="2025-05-14T18:00:39.474798307Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 18:00:39.475014 dockerd[1757]: time="2025-05-14T18:00:39.474900896Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 14 18:00:39.475067 dockerd[1757]: time="2025-05-14T18:00:39.475033038Z" level=info msg="Initializing buildkit" May 14 18:00:39.499042 dockerd[1757]: time="2025-05-14T18:00:39.498994566Z" level=info msg="Completed buildkit initialization" May 14 18:00:39.505339 dockerd[1757]: time="2025-05-14T18:00:39.505260909Z" level=info msg="Daemon has completed initialization" May 14 18:00:39.505508 dockerd[1757]: time="2025-05-14T18:00:39.505363094Z" level=info msg="API listen on /run/docker.sock" May 14 18:00:39.505527 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 18:00:40.195013 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3724086601-merged.mount: Deactivated successfully. May 14 18:00:40.350368 containerd[1542]: time="2025-05-14T18:00:40.350327560Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 14 18:00:41.203222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount915387282.mount: Deactivated successfully. 
May 14 18:00:42.253349 containerd[1542]: time="2025-05-14T18:00:42.253277798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:42.253743 containerd[1542]: time="2025-05-14T18:00:42.253682955Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152" May 14 18:00:42.254532 containerd[1542]: time="2025-05-14T18:00:42.254501642Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:42.256818 containerd[1542]: time="2025-05-14T18:00:42.256763899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:42.257858 containerd[1542]: time="2025-05-14T18:00:42.257816318Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 1.907444702s" May 14 18:00:42.257858 containerd[1542]: time="2025-05-14T18:00:42.257847915Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 14 18:00:42.274334 containerd[1542]: time="2025-05-14T18:00:42.274243269Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 14 18:00:43.100617 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 18:00:43.106053 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:00:43.238407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:00:43.241915 (kubelet)[2047]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 18:00:43.282519 kubelet[2047]: E0514 18:00:43.282463 2047 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 18:00:43.286331 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 18:00:43.286585 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 18:00:43.287107 systemd[1]: kubelet.service: Consumed 139ms CPU time, 94.1M memory peak. 
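
The kubelet exit just above (the second so far) is the same failure as before: /var/lib/kubelet/config.yaml does not exist yet on this freshly provisioned node, so kubelet exits with status 1 and systemd schedules another restart. A minimal sketch, assuming nothing beyond the path quoted in the error, of checking for the file:

    # Check for the kubelet config whose absence keeps making kubelet exit above.
    from pathlib import Path

    cfg = Path("/var/lib/kubelet/config.yaml")   # path quoted in the kubelet error
    if cfg.is_file():
        print(f"{cfg} present ({cfg.stat().st_size} bytes)")
    else:
        # Mirrors the logged state: kubelet fails and systemd restarts it later.
        print(f"{cfg} missing; kubelet will keep failing until it is written")
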
May 14 18:00:43.951253 containerd[1542]: time="2025-05-14T18:00:43.951203615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:43.951689 containerd[1542]: time="2025-05-14T18:00:43.951657163Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552" May 14 18:00:43.952602 containerd[1542]: time="2025-05-14T18:00:43.952576726Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:43.955237 containerd[1542]: time="2025-05-14T18:00:43.955176431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:43.956899 containerd[1542]: time="2025-05-14T18:00:43.956866467Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.682525315s" May 14 18:00:43.956899 containerd[1542]: time="2025-05-14T18:00:43.956899608Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 14 18:00:43.972413 containerd[1542]: time="2025-05-14T18:00:43.972352650Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 14 18:00:45.043925 containerd[1542]: time="2025-05-14T18:00:45.043875846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:45.044868 containerd[1542]: time="2025-05-14T18:00:45.044598701Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947" May 14 18:00:45.045553 containerd[1542]: time="2025-05-14T18:00:45.045515084Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:45.048107 containerd[1542]: time="2025-05-14T18:00:45.048063634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:45.048998 containerd[1542]: time="2025-05-14T18:00:45.048944629Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.076534034s" May 14 18:00:45.048998 containerd[1542]: time="2025-05-14T18:00:45.048971100Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 14 18:00:45.063451 
containerd[1542]: time="2025-05-14T18:00:45.063423945Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 14 18:00:46.176672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4076504635.mount: Deactivated successfully. May 14 18:00:46.367311 containerd[1542]: time="2025-05-14T18:00:46.367059193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:46.367646 containerd[1542]: time="2025-05-14T18:00:46.367490368Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707" May 14 18:00:46.368447 containerd[1542]: time="2025-05-14T18:00:46.368413057Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:46.370235 containerd[1542]: time="2025-05-14T18:00:46.370208654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:46.370913 containerd[1542]: time="2025-05-14T18:00:46.370693264Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.307234901s" May 14 18:00:46.370913 containerd[1542]: time="2025-05-14T18:00:46.370725903Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 14 18:00:46.388188 containerd[1542]: time="2025-05-14T18:00:46.388159967Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 18:00:47.133204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2822769991.mount: Deactivated successfully. 
May 14 18:00:48.120309 containerd[1542]: time="2025-05-14T18:00:48.120251923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:48.120998 containerd[1542]: time="2025-05-14T18:00:48.120956173Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" May 14 18:00:48.122281 containerd[1542]: time="2025-05-14T18:00:48.122235874Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:48.124753 containerd[1542]: time="2025-05-14T18:00:48.124720386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:48.125349 containerd[1542]: time="2025-05-14T18:00:48.125318700Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.737099404s" May 14 18:00:48.125408 containerd[1542]: time="2025-05-14T18:00:48.125349106Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 14 18:00:48.140563 containerd[1542]: time="2025-05-14T18:00:48.140530426Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 14 18:00:48.739170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1406213014.mount: Deactivated successfully. 
May 14 18:00:48.744000 containerd[1542]: time="2025-05-14T18:00:48.743953858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:48.744472 containerd[1542]: time="2025-05-14T18:00:48.744439096Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" May 14 18:00:48.745384 containerd[1542]: time="2025-05-14T18:00:48.745350125Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:48.747186 containerd[1542]: time="2025-05-14T18:00:48.747152970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:48.748058 containerd[1542]: time="2025-05-14T18:00:48.747910088Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 607.347813ms" May 14 18:00:48.748058 containerd[1542]: time="2025-05-14T18:00:48.747938488Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 14 18:00:48.763168 containerd[1542]: time="2025-05-14T18:00:48.763136816Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 14 18:00:49.390790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount560013323.mount: Deactivated successfully. May 14 18:00:51.274641 containerd[1542]: time="2025-05-14T18:00:51.274576289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:51.275163 containerd[1542]: time="2025-05-14T18:00:51.275113858Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" May 14 18:00:51.276041 containerd[1542]: time="2025-05-14T18:00:51.276008296Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:51.278507 containerd[1542]: time="2025-05-14T18:00:51.278482579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:51.280144 containerd[1542]: time="2025-05-14T18:00:51.280103660Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.516931153s" May 14 18:00:51.280144 containerd[1542]: time="2025-05-14T18:00:51.280138205Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 14 18:00:53.536792 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
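
Each pull logged above reports the registry bytes read and the wall-clock pull time; a rough sketch of turning those pairs into effective throughput (the times include unpacking, so these are lower bounds on network speed, not measurements of it):

    # Effective pull rates from the "bytes read" / "Pulled image ... in <t>" pairs above.
    pulls = {
        "kube-apiserver:v1.30.12":          (29_794_152, 1.907444702),
        "kube-controller-manager:v1.30.12": (26_855_552, 1.682525315),
        "kube-scheduler:v1.30.12":          (16_263_947, 1.076534034),
        "kube-proxy:v1.30.12":              (25_775_707, 1.307234901),
        "coredns:v1.11.1":                  (16_485_383, 1.737099404),
        "pause:3.9":                        (268_823,    0.607347813),
        "etcd:3.5.12-0":                    (66_191_474, 2.516931153),
    }
    for image, (nbytes, secs) in pulls.items():
        print(f"{image:35s} {nbytes / secs / 1e6:6.1f} MB/s")
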
May 14 18:00:53.538580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:00:53.659718 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:00:53.663256 (kubelet)[2314]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 18:00:53.700986 kubelet[2314]: E0514 18:00:53.700940 2314 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 18:00:53.703480 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 18:00:53.703693 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 18:00:53.705362 systemd[1]: kubelet.service: Consumed 128ms CPU time, 95.2M memory peak. May 14 18:00:55.219692 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:00:55.220110 systemd[1]: kubelet.service: Consumed 128ms CPU time, 95.2M memory peak. May 14 18:00:55.222068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:00:55.237510 systemd[1]: Reload requested from client PID 2327 ('systemctl') (unit session-7.scope)... May 14 18:00:55.237605 systemd[1]: Reloading... May 14 18:00:55.300412 zram_generator::config[2372]: No configuration found. May 14 18:00:55.391757 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 18:00:55.473829 systemd[1]: Reloading finished in 235 ms. May 14 18:00:55.526240 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 14 18:00:55.526407 systemd[1]: kubelet.service: Failed with result 'signal'. May 14 18:00:55.528359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:00:55.530085 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:00:55.660692 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:00:55.664143 (kubelet)[2413]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 18:00:55.701777 kubelet[2413]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:00:55.701777 kubelet[2413]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 18:00:55.701777 kubelet[2413]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 18:00:55.702102 kubelet[2413]: I0514 18:00:55.701953 2413 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 18:00:56.707869 kubelet[2413]: I0514 18:00:56.707818 2413 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 18:00:56.708895 kubelet[2413]: I0514 18:00:56.708207 2413 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 18:00:56.708895 kubelet[2413]: I0514 18:00:56.708474 2413 server.go:927] "Client rotation is on, will bootstrap in background" May 14 18:00:56.742722 kubelet[2413]: E0514 18:00:56.742695 2413 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.60:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.60:6443: connect: connection refused May 14 18:00:56.742722 kubelet[2413]: I0514 18:00:56.742851 2413 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 18:00:56.753351 kubelet[2413]: I0514 18:00:56.753317 2413 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 18:00:56.753784 kubelet[2413]: I0514 18:00:56.753747 2413 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 18:00:56.754786 kubelet[2413]: I0514 18:00:56.753776 2413 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 18:00:56.754927 kubelet[2413]: I0514 18:00:56.754814 2413 topology_manager.go:138] "Creating topology manager with none policy" May 14 18:00:56.754927 kubelet[2413]: I0514 18:00:56.754826 2413 container_manager_linux.go:301] "Creating device plugin manager" May 14 18:00:56.755500 kubelet[2413]: I0514 18:00:56.754985 2413 state_mem.go:36] "Initialized new in-memory state store" May 14 
18:00:56.759232 kubelet[2413]: I0514 18:00:56.759206 2413 kubelet.go:400] "Attempting to sync node with API server" May 14 18:00:56.759232 kubelet[2413]: I0514 18:00:56.759231 2413 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 18:00:56.759825 kubelet[2413]: I0514 18:00:56.759663 2413 kubelet.go:312] "Adding apiserver pod source" May 14 18:00:56.759861 kubelet[2413]: I0514 18:00:56.759838 2413 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 18:00:56.760422 kubelet[2413]: W0514 18:00:56.760373 2413 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.60:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused May 14 18:00:56.760539 kubelet[2413]: E0514 18:00:56.760527 2413 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.60:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused May 14 18:00:56.760626 kubelet[2413]: W0514 18:00:56.760575 2413 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused May 14 18:00:56.760626 kubelet[2413]: E0514 18:00:56.760625 2413 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused May 14 18:00:56.763770 kubelet[2413]: I0514 18:00:56.763695 2413 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 18:00:56.764157 kubelet[2413]: I0514 18:00:56.764140 2413 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 18:00:56.764278 kubelet[2413]: W0514 18:00:56.764266 2413 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
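
All of the reflector failures above are the same symptom: nothing is listening on 10.0.0.60:6443 yet, since the kube-apiserver static pod has not come up. A small probe in the spirit of those errors, with host and port taken from the log:

    # Probe the API server endpoint the kubelet keeps failing to reach above.
    import socket

    HOST, PORT = "10.0.0.60", 6443   # endpoint from the "connection refused" errors
    try:
        with socket.create_connection((HOST, PORT), timeout=2):
            print(f"{HOST}:{PORT} is accepting connections")
    except OSError as exc:
        # Matches the logged state until the apiserver static pod starts serving.
        print(f"{HOST}:{PORT} unreachable: {exc}")
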
May 14 18:00:56.765151 kubelet[2413]: I0514 18:00:56.765133 2413 server.go:1264] "Started kubelet" May 14 18:00:56.767256 kubelet[2413]: I0514 18:00:56.766797 2413 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 18:00:56.767256 kubelet[2413]: E0514 18:00:56.766947 2413 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.60:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.60:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f76aae2d3d43f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 18:00:56.765092927 +0000 UTC m=+1.098067418,LastTimestamp:2025-05-14 18:00:56.765092927 +0000 UTC m=+1.098067418,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 18:00:56.767256 kubelet[2413]: I0514 18:00:56.767133 2413 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 18:00:56.767429 kubelet[2413]: I0514 18:00:56.767410 2413 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 18:00:56.767478 kubelet[2413]: I0514 18:00:56.767458 2413 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 18:00:56.769442 kubelet[2413]: I0514 18:00:56.769413 2413 server.go:455] "Adding debug handlers to kubelet server" May 14 18:00:56.769560 kubelet[2413]: E0514 18:00:56.769537 2413 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:00:56.769612 kubelet[2413]: E0514 18:00:56.769546 2413 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 18:00:56.769679 kubelet[2413]: I0514 18:00:56.769665 2413 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 18:00:56.769765 kubelet[2413]: I0514 18:00:56.769748 2413 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 18:00:56.770159 kubelet[2413]: W0514 18:00:56.770118 2413 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused May 14 18:00:56.770252 kubelet[2413]: E0514 18:00:56.770240 2413 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused May 14 18:00:56.770579 kubelet[2413]: I0514 18:00:56.770558 2413 reconciler.go:26] "Reconciler: start to sync state" May 14 18:00:56.771343 kubelet[2413]: E0514 18:00:56.771289 2413 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="200ms" May 14 18:00:56.771656 kubelet[2413]: I0514 18:00:56.771606 2413 factory.go:221] Registration of the systemd container factory successfully May 14 18:00:56.771704 kubelet[2413]: I0514 18:00:56.771694 2413 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 18:00:56.772721 kubelet[2413]: I0514 18:00:56.772659 2413 factory.go:221] Registration of the containerd container factory successfully May 14 18:00:56.780638 kubelet[2413]: I0514 18:00:56.780595 2413 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 18:00:56.781592 kubelet[2413]: I0514 18:00:56.781562 2413 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 18:00:56.781729 kubelet[2413]: I0514 18:00:56.781713 2413 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 18:00:56.781768 kubelet[2413]: I0514 18:00:56.781733 2413 kubelet.go:2337] "Starting kubelet main sync loop" May 14 18:00:56.781802 kubelet[2413]: E0514 18:00:56.781775 2413 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 18:00:56.785311 kubelet[2413]: W0514 18:00:56.785249 2413 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused May 14 18:00:56.785311 kubelet[2413]: E0514 18:00:56.785316 2413 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused May 14 18:00:56.785772 kubelet[2413]: I0514 18:00:56.785754 2413 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 18:00:56.785772 kubelet[2413]: I0514 18:00:56.785768 2413 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 18:00:56.785841 kubelet[2413]: I0514 18:00:56.785789 2413 state_mem.go:36] "Initialized new in-memory state store" May 14 18:00:56.789394 kubelet[2413]: I0514 18:00:56.789365 2413 policy_none.go:49] "None policy: Start" May 14 18:00:56.790018 kubelet[2413]: I0514 18:00:56.789982 2413 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 18:00:56.790018 kubelet[2413]: I0514 18:00:56.790009 2413 state_mem.go:35] "Initializing new in-memory state store" May 14 18:00:56.796824 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 18:00:56.811573 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 18:00:56.814435 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 14 18:00:56.833116 kubelet[2413]: I0514 18:00:56.833084 2413 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 18:00:56.833360 kubelet[2413]: I0514 18:00:56.833315 2413 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 18:00:56.833455 kubelet[2413]: I0514 18:00:56.833434 2413 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 18:00:56.835494 kubelet[2413]: E0514 18:00:56.835466 2413 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 14 18:00:56.871693 kubelet[2413]: I0514 18:00:56.871663 2413 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 18:00:56.872201 kubelet[2413]: E0514 18:00:56.872162 2413 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" May 14 18:00:56.882358 kubelet[2413]: I0514 18:00:56.882317 2413 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 14 18:00:56.883472 kubelet[2413]: I0514 18:00:56.883277 2413 topology_manager.go:215] "Topology Admit Handler" podUID="e6b2665be44d7fd000281cb9df089c29" podNamespace="kube-system" podName="kube-apiserver-localhost" May 14 18:00:56.884599 kubelet[2413]: I0514 18:00:56.884493 2413 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 14 18:00:56.891210 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. May 14 18:00:56.918996 systemd[1]: Created slice kubepods-burstable-pode6b2665be44d7fd000281cb9df089c29.slice - libcontainer container kubepods-burstable-pode6b2665be44d7fd000281cb9df089c29.slice. May 14 18:00:56.939761 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. 
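The kubelet lines in this section all share the klog header layout: a severity letter (I/W/E), MMDD, wall-clock time, the logging PID, and file:line, followed by a quoted structured message. A rough parsing sketch for that header, written against the entries above (it does not split the key=value payload, and the regex is only as general as the lines shown here):

    import re

    # klog-style header as it appears in this log, e.g.
    #   I0514 18:00:56.882358 2413 topology_manager.go:215] "Topology Admit Handler" podUID="..."
    KLOG = re.compile(
        r'(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+'
        r'(?P<pid>\d+) (?P<file>[\w.]+:\d+)\] (?P<msg>.*)'
    )

    line = ('I0514 18:00:56.882358 2413 topology_manager.go:215] '
            '"Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" '
            'podNamespace="kube-system" podName="kube-scheduler-localhost"')

    m = KLOG.search(line)
    if m:
        print(m.group("sev"), m.group("pid"), m.group("file"), m.group("msg"))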
May 14 18:00:56.971420 kubelet[2413]: I0514 18:00:56.970784 2413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 14 18:00:56.971420 kubelet[2413]: I0514 18:00:56.970824 2413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6b2665be44d7fd000281cb9df089c29-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e6b2665be44d7fd000281cb9df089c29\") " pod="kube-system/kube-apiserver-localhost" May 14 18:00:56.971420 kubelet[2413]: I0514 18:00:56.970844 2413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:00:56.971420 kubelet[2413]: I0514 18:00:56.970858 2413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:00:56.971420 kubelet[2413]: I0514 18:00:56.970874 2413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:00:56.971620 kubelet[2413]: I0514 18:00:56.970888 2413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e6b2665be44d7fd000281cb9df089c29-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6b2665be44d7fd000281cb9df089c29\") " pod="kube-system/kube-apiserver-localhost" May 14 18:00:56.971620 kubelet[2413]: I0514 18:00:56.970906 2413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e6b2665be44d7fd000281cb9df089c29-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6b2665be44d7fd000281cb9df089c29\") " pod="kube-system/kube-apiserver-localhost" May 14 18:00:56.972164 kubelet[2413]: I0514 18:00:56.972000 2413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:00:56.972233 kubelet[2413]: I0514 18:00:56.972175 2413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " 
pod="kube-system/kube-controller-manager-localhost" May 14 18:00:56.972624 kubelet[2413]: E0514 18:00:56.972587 2413 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="400ms" May 14 18:00:57.074095 kubelet[2413]: I0514 18:00:57.074065 2413 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 18:00:57.074431 kubelet[2413]: E0514 18:00:57.074407 2413 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" May 14 18:00:57.217411 containerd[1542]: time="2025-05-14T18:00:57.217355532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 14 18:00:57.239127 containerd[1542]: time="2025-05-14T18:00:57.239018874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e6b2665be44d7fd000281cb9df089c29,Namespace:kube-system,Attempt:0,}" May 14 18:00:57.242618 containerd[1542]: time="2025-05-14T18:00:57.242587362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 14 18:00:57.373893 kubelet[2413]: E0514 18:00:57.373824 2413 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="800ms" May 14 18:00:57.476224 kubelet[2413]: I0514 18:00:57.476182 2413 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 18:00:57.476738 kubelet[2413]: E0514 18:00:57.476712 2413 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" May 14 18:00:57.821137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2370225067.mount: Deactivated successfully. 
May 14 18:00:57.823854 containerd[1542]: time="2025-05-14T18:00:57.823798537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:00:57.828343 containerd[1542]: time="2025-05-14T18:00:57.828274630Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 14 18:00:57.829450 containerd[1542]: time="2025-05-14T18:00:57.829400099Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:00:57.830554 containerd[1542]: time="2025-05-14T18:00:57.830526489Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:00:57.831659 containerd[1542]: time="2025-05-14T18:00:57.831626736Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 14 18:00:57.832417 containerd[1542]: time="2025-05-14T18:00:57.832366600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:00:57.833254 containerd[1542]: time="2025-05-14T18:00:57.832813737Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 613.289056ms" May 14 18:00:57.833382 kubelet[2413]: W0514 18:00:57.833330 2413 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused May 14 18:00:57.833795 kubelet[2413]: E0514 18:00:57.833388 2413 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused May 14 18:00:57.833889 containerd[1542]: time="2025-05-14T18:00:57.833759614Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:00:57.834346 containerd[1542]: time="2025-05-14T18:00:57.834310438Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 14 18:00:57.837315 containerd[1542]: time="2025-05-14T18:00:57.837181579Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 596.477524ms" May 14 18:00:57.837887 containerd[1542]: time="2025-05-14T18:00:57.837859150Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 593.77917ms" May 14 18:00:57.848596 containerd[1542]: time="2025-05-14T18:00:57.848565735Z" level=info msg="connecting to shim 91f56b659e59fb7cb7f1618d189ab3cee1124f10f7c006441b0486c7d551ddde" address="unix:///run/containerd/s/2d6818c55b3346c0227e79cd28fa6b3ee08a6e4533c3011b1b029b126982cbbb" namespace=k8s.io protocol=ttrpc version=3 May 14 18:00:57.857111 containerd[1542]: time="2025-05-14T18:00:57.856967858Z" level=info msg="connecting to shim bd4d9c202bbea258407ca9661b018abf1ccfcdc9a2801017ab356b3493f3fe63" address="unix:///run/containerd/s/79c1aff912a097f091c8ea82662c0f5179f1a6f3c110b1986433905430c40db4" namespace=k8s.io protocol=ttrpc version=3 May 14 18:00:57.858156 containerd[1542]: time="2025-05-14T18:00:57.858125113Z" level=info msg="connecting to shim b22fc61b73bce179a7cc42733504afcada7746202c3d6df01e1d6af44576f20d" address="unix:///run/containerd/s/b1a1dc372abe166dc7aa68987e2e862a260ba1db7965994220c9f5ce171adbc6" namespace=k8s.io protocol=ttrpc version=3 May 14 18:00:57.884454 systemd[1]: Started cri-containerd-b22fc61b73bce179a7cc42733504afcada7746202c3d6df01e1d6af44576f20d.scope - libcontainer container b22fc61b73bce179a7cc42733504afcada7746202c3d6df01e1d6af44576f20d. May 14 18:00:57.885520 systemd[1]: Started cri-containerd-bd4d9c202bbea258407ca9661b018abf1ccfcdc9a2801017ab356b3493f3fe63.scope - libcontainer container bd4d9c202bbea258407ca9661b018abf1ccfcdc9a2801017ab356b3493f3fe63. May 14 18:00:57.889551 systemd[1]: Started cri-containerd-91f56b659e59fb7cb7f1618d189ab3cee1124f10f7c006441b0486c7d551ddde.scope - libcontainer container 91f56b659e59fb7cb7f1618d189ab3cee1124f10f7c006441b0486c7d551ddde. 
May 14 18:00:57.931724 containerd[1542]: time="2025-05-14T18:00:57.931607776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e6b2665be44d7fd000281cb9df089c29,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd4d9c202bbea258407ca9661b018abf1ccfcdc9a2801017ab356b3493f3fe63\"" May 14 18:00:57.933422 containerd[1542]: time="2025-05-14T18:00:57.933386755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"91f56b659e59fb7cb7f1618d189ab3cee1124f10f7c006441b0486c7d551ddde\"" May 14 18:00:57.935088 containerd[1542]: time="2025-05-14T18:00:57.935054681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"b22fc61b73bce179a7cc42733504afcada7746202c3d6df01e1d6af44576f20d\"" May 14 18:00:57.936628 containerd[1542]: time="2025-05-14T18:00:57.936581649Z" level=info msg="CreateContainer within sandbox \"91f56b659e59fb7cb7f1618d189ab3cee1124f10f7c006441b0486c7d551ddde\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 18:00:57.937054 containerd[1542]: time="2025-05-14T18:00:57.936612635Z" level=info msg="CreateContainer within sandbox \"bd4d9c202bbea258407ca9661b018abf1ccfcdc9a2801017ab356b3493f3fe63\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 18:00:57.937515 kubelet[2413]: W0514 18:00:57.937441 2413 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.60:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused May 14 18:00:57.937565 kubelet[2413]: E0514 18:00:57.937520 2413 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.60:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused May 14 18:00:57.937801 containerd[1542]: time="2025-05-14T18:00:57.937772052Z" level=info msg="CreateContainer within sandbox \"b22fc61b73bce179a7cc42733504afcada7746202c3d6df01e1d6af44576f20d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 18:00:57.946768 containerd[1542]: time="2025-05-14T18:00:57.946730884Z" level=info msg="Container db8878a68f04bc4c377090c0244552ea866d2a30339da2fb65d4b939eb284d88: CDI devices from CRI Config.CDIDevices: []" May 14 18:00:57.948504 containerd[1542]: time="2025-05-14T18:00:57.948469229Z" level=info msg="Container 39c905783adda2388cbaa48153ffde3dcc2616b9e8788e68d04d0961db815312: CDI devices from CRI Config.CDIDevices: []" May 14 18:00:57.949943 containerd[1542]: time="2025-05-14T18:00:57.949896432Z" level=info msg="Container 89455590d57db2ad26976b3ac8d2b089db6be71093ff5439c5b7d01d93e6efed: CDI devices from CRI Config.CDIDevices: []" May 14 18:00:57.955420 containerd[1542]: time="2025-05-14T18:00:57.955388782Z" level=info msg="CreateContainer within sandbox \"b22fc61b73bce179a7cc42733504afcada7746202c3d6df01e1d6af44576f20d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"39c905783adda2388cbaa48153ffde3dcc2616b9e8788e68d04d0961db815312\"" May 14 18:00:57.955877 containerd[1542]: time="2025-05-14T18:00:57.955842204Z" level=info msg="CreateContainer within sandbox \"bd4d9c202bbea258407ca9661b018abf1ccfcdc9a2801017ab356b3493f3fe63\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"db8878a68f04bc4c377090c0244552ea866d2a30339da2fb65d4b939eb284d88\"" May 14 18:00:57.955877 containerd[1542]: time="2025-05-14T18:00:57.956033245Z" level=info msg="StartContainer for \"39c905783adda2388cbaa48153ffde3dcc2616b9e8788e68d04d0961db815312\"" May 14 18:00:57.955877 containerd[1542]: time="2025-05-14T18:00:57.956202268Z" level=info msg="StartContainer for \"db8878a68f04bc4c377090c0244552ea866d2a30339da2fb65d4b939eb284d88\"" May 14 18:00:57.957313 containerd[1542]: time="2025-05-14T18:00:57.957269407Z" level=info msg="connecting to shim db8878a68f04bc4c377090c0244552ea866d2a30339da2fb65d4b939eb284d88" address="unix:///run/containerd/s/79c1aff912a097f091c8ea82662c0f5179f1a6f3c110b1986433905430c40db4" protocol=ttrpc version=3 May 14 18:00:57.957419 containerd[1542]: time="2025-05-14T18:00:57.957377619Z" level=info msg="connecting to shim 39c905783adda2388cbaa48153ffde3dcc2616b9e8788e68d04d0961db815312" address="unix:///run/containerd/s/b1a1dc372abe166dc7aa68987e2e862a260ba1db7965994220c9f5ce171adbc6" protocol=ttrpc version=3 May 14 18:00:57.957602 containerd[1542]: time="2025-05-14T18:00:57.957545440Z" level=info msg="CreateContainer within sandbox \"91f56b659e59fb7cb7f1618d189ab3cee1124f10f7c006441b0486c7d551ddde\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"89455590d57db2ad26976b3ac8d2b089db6be71093ff5439c5b7d01d93e6efed\"" May 14 18:00:57.958414 containerd[1542]: time="2025-05-14T18:00:57.958379743Z" level=info msg="StartContainer for \"89455590d57db2ad26976b3ac8d2b089db6be71093ff5439c5b7d01d93e6efed\"" May 14 18:00:57.959538 containerd[1542]: time="2025-05-14T18:00:57.959504291Z" level=info msg="connecting to shim 89455590d57db2ad26976b3ac8d2b089db6be71093ff5439c5b7d01d93e6efed" address="unix:///run/containerd/s/2d6818c55b3346c0227e79cd28fa6b3ee08a6e4533c3011b1b029b126982cbbb" protocol=ttrpc version=3 May 14 18:00:57.977455 systemd[1]: Started cri-containerd-db8878a68f04bc4c377090c0244552ea866d2a30339da2fb65d4b939eb284d88.scope - libcontainer container db8878a68f04bc4c377090c0244552ea866d2a30339da2fb65d4b939eb284d88. May 14 18:00:57.980874 systemd[1]: Started cri-containerd-39c905783adda2388cbaa48153ffde3dcc2616b9e8788e68d04d0961db815312.scope - libcontainer container 39c905783adda2388cbaa48153ffde3dcc2616b9e8788e68d04d0961db815312. May 14 18:00:57.981752 systemd[1]: Started cri-containerd-89455590d57db2ad26976b3ac8d2b089db6be71093ff5439c5b7d01d93e6efed.scope - libcontainer container 89455590d57db2ad26976b3ac8d2b089db6be71093ff5439c5b7d01d93e6efed. 
May 14 18:00:58.030355 containerd[1542]: time="2025-05-14T18:00:58.026071395Z" level=info msg="StartContainer for \"89455590d57db2ad26976b3ac8d2b089db6be71093ff5439c5b7d01d93e6efed\" returns successfully" May 14 18:00:58.030355 containerd[1542]: time="2025-05-14T18:00:58.026243562Z" level=info msg="StartContainer for \"db8878a68f04bc4c377090c0244552ea866d2a30339da2fb65d4b939eb284d88\" returns successfully" May 14 18:00:58.036440 containerd[1542]: time="2025-05-14T18:00:58.036398654Z" level=info msg="StartContainer for \"39c905783adda2388cbaa48153ffde3dcc2616b9e8788e68d04d0961db815312\" returns successfully" May 14 18:00:58.099508 kubelet[2413]: W0514 18:00:58.098545 2413 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused May 14 18:00:58.099508 kubelet[2413]: E0514 18:00:58.098612 2413 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused May 14 18:00:58.108269 kubelet[2413]: W0514 18:00:58.108207 2413 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused May 14 18:00:58.108269 kubelet[2413]: E0514 18:00:58.108264 2413 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused May 14 18:00:58.175601 kubelet[2413]: E0514 18:00:58.175274 2413 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="1.6s" May 14 18:00:58.278385 kubelet[2413]: I0514 18:00:58.278277 2413 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 18:00:59.880208 kubelet[2413]: E0514 18:00:59.880156 2413 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 14 18:00:59.955900 kubelet[2413]: I0514 18:00:59.955851 2413 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 14 18:00:59.969783 kubelet[2413]: E0514 18:00:59.969721 2413 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:01:00.070198 kubelet[2413]: E0514 18:01:00.069893 2413 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:01:00.614872 kubelet[2413]: E0514 18:01:00.614817 2413 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 14 18:01:00.762877 kubelet[2413]: I0514 18:01:00.762828 2413 apiserver.go:52] "Watching apiserver" May 14 18:01:00.770578 kubelet[2413]: I0514 18:01:00.770547 2413 desired_state_of_world_populator.go:157] "Finished 
populating initial desired state of world" May 14 18:01:02.115097 systemd[1]: Reload requested from client PID 2689 ('systemctl') (unit session-7.scope)... May 14 18:01:02.115111 systemd[1]: Reloading... May 14 18:01:02.178571 zram_generator::config[2735]: No configuration found. May 14 18:01:02.243040 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 18:01:02.338806 systemd[1]: Reloading finished in 223 ms. May 14 18:01:02.360964 kubelet[2413]: I0514 18:01:02.360931 2413 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 18:01:02.361234 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:01:02.375124 systemd[1]: kubelet.service: Deactivated successfully. May 14 18:01:02.376642 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:01:02.376782 systemd[1]: kubelet.service: Consumed 1.341s CPU time, 115.4M memory peak. May 14 18:01:02.378702 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:01:02.508858 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:01:02.521782 (kubelet)[2774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 18:01:02.570531 kubelet[2774]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:01:02.570531 kubelet[2774]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 18:01:02.570531 kubelet[2774]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:01:02.570886 kubelet[2774]: I0514 18:01:02.570571 2774 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 18:01:02.574542 kubelet[2774]: I0514 18:01:02.574506 2774 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 18:01:02.574542 kubelet[2774]: I0514 18:01:02.574533 2774 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 18:01:02.574733 kubelet[2774]: I0514 18:01:02.574715 2774 server.go:927] "Client rotation is on, will bootstrap in background" May 14 18:01:02.576081 kubelet[2774]: I0514 18:01:02.576060 2774 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 18:01:02.577389 kubelet[2774]: I0514 18:01:02.577318 2774 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 18:01:02.582596 kubelet[2774]: I0514 18:01:02.582577 2774 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 18:01:02.582855 kubelet[2774]: I0514 18:01:02.582829 2774 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 18:01:02.583014 kubelet[2774]: I0514 18:01:02.582858 2774 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 18:01:02.583093 kubelet[2774]: I0514 18:01:02.583020 2774 topology_manager.go:138] "Creating topology manager with none policy" May 14 18:01:02.583093 kubelet[2774]: I0514 18:01:02.583028 2774 container_manager_linux.go:301] "Creating device plugin manager" May 14 18:01:02.583093 kubelet[2774]: I0514 18:01:02.583059 2774 state_mem.go:36] "Initialized new in-memory state store" May 14 18:01:02.583168 kubelet[2774]: I0514 18:01:02.583158 2774 kubelet.go:400] "Attempting to sync node with API server" May 14 18:01:02.583189 kubelet[2774]: I0514 18:01:02.583173 2774 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 18:01:02.583220 kubelet[2774]: I0514 18:01:02.583198 2774 kubelet.go:312] "Adding apiserver pod source" May 14 18:01:02.583220 kubelet[2774]: I0514 18:01:02.583213 2774 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 18:01:02.583906 kubelet[2774]: I0514 18:01:02.583835 2774 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 18:01:02.583997 kubelet[2774]: I0514 18:01:02.583982 2774 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 18:01:02.586317 kubelet[2774]: I0514 18:01:02.584885 2774 server.go:1264] "Started kubelet" May 14 18:01:02.589262 kubelet[2774]: I0514 18:01:02.589227 2774 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 18:01:02.594212 kubelet[2774]: I0514 18:01:02.594183 2774 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 18:01:02.596002 kubelet[2774]: I0514 18:01:02.595963 2774 server.go:163] "Starting to listen" 
address="0.0.0.0" port=10250 May 14 18:01:02.596760 kubelet[2774]: I0514 18:01:02.596730 2774 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 18:01:02.596912 kubelet[2774]: I0514 18:01:02.596892 2774 reconciler.go:26] "Reconciler: start to sync state" May 14 18:01:02.597341 kubelet[2774]: I0514 18:01:02.597285 2774 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 18:01:02.598176 kubelet[2774]: I0514 18:01:02.598142 2774 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 18:01:02.600544 kubelet[2774]: I0514 18:01:02.598795 2774 server.go:455] "Adding debug handlers to kubelet server" May 14 18:01:02.602851 kubelet[2774]: I0514 18:01:02.602819 2774 factory.go:221] Registration of the systemd container factory successfully May 14 18:01:02.602953 kubelet[2774]: I0514 18:01:02.602930 2774 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 18:01:02.606301 kubelet[2774]: I0514 18:01:02.605355 2774 factory.go:221] Registration of the containerd container factory successfully May 14 18:01:02.613389 kubelet[2774]: E0514 18:01:02.613357 2774 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 18:01:02.617716 kubelet[2774]: I0514 18:01:02.617660 2774 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 18:01:02.618713 kubelet[2774]: I0514 18:01:02.618685 2774 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 18:01:02.618786 kubelet[2774]: I0514 18:01:02.618721 2774 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 18:01:02.618786 kubelet[2774]: I0514 18:01:02.618740 2774 kubelet.go:2337] "Starting kubelet main sync loop" May 14 18:01:02.618786 kubelet[2774]: E0514 18:01:02.618778 2774 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 18:01:02.641521 kubelet[2774]: I0514 18:01:02.641429 2774 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 18:01:02.641521 kubelet[2774]: I0514 18:01:02.641449 2774 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 18:01:02.641521 kubelet[2774]: I0514 18:01:02.641470 2774 state_mem.go:36] "Initialized new in-memory state store" May 14 18:01:02.641670 kubelet[2774]: I0514 18:01:02.641607 2774 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 18:01:02.641696 kubelet[2774]: I0514 18:01:02.641624 2774 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 18:01:02.641696 kubelet[2774]: I0514 18:01:02.641693 2774 policy_none.go:49] "None policy: Start" May 14 18:01:02.643426 kubelet[2774]: I0514 18:01:02.643409 2774 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 18:01:02.643462 kubelet[2774]: I0514 18:01:02.643435 2774 state_mem.go:35] "Initializing new in-memory state store" May 14 18:01:02.643672 kubelet[2774]: I0514 18:01:02.643648 2774 state_mem.go:75] "Updated machine memory state" May 14 18:01:02.647572 kubelet[2774]: I0514 18:01:02.647543 2774 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 
18:01:02.647749 kubelet[2774]: I0514 18:01:02.647712 2774 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 18:01:02.647841 kubelet[2774]: I0514 18:01:02.647830 2774 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 18:01:02.696176 kubelet[2774]: I0514 18:01:02.696125 2774 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 18:01:02.701685 kubelet[2774]: I0514 18:01:02.701645 2774 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 14 18:01:02.701801 kubelet[2774]: I0514 18:01:02.701734 2774 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 14 18:01:02.719221 kubelet[2774]: I0514 18:01:02.719177 2774 topology_manager.go:215] "Topology Admit Handler" podUID="e6b2665be44d7fd000281cb9df089c29" podNamespace="kube-system" podName="kube-apiserver-localhost" May 14 18:01:02.719348 kubelet[2774]: I0514 18:01:02.719279 2774 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 14 18:01:02.719708 kubelet[2774]: I0514 18:01:02.719670 2774 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 14 18:01:02.726098 kubelet[2774]: E0514 18:01:02.726015 2774 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 14 18:01:02.897531 kubelet[2774]: I0514 18:01:02.897412 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e6b2665be44d7fd000281cb9df089c29-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6b2665be44d7fd000281cb9df089c29\") " pod="kube-system/kube-apiserver-localhost" May 14 18:01:02.897531 kubelet[2774]: I0514 18:01:02.897455 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:01:02.897531 kubelet[2774]: I0514 18:01:02.897489 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:01:02.897531 kubelet[2774]: I0514 18:01:02.897507 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 14 18:01:02.897531 kubelet[2774]: I0514 18:01:02.897524 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " 
pod="kube-system/kube-controller-manager-localhost" May 14 18:01:02.897710 kubelet[2774]: I0514 18:01:02.897546 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e6b2665be44d7fd000281cb9df089c29-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6b2665be44d7fd000281cb9df089c29\") " pod="kube-system/kube-apiserver-localhost" May 14 18:01:02.897710 kubelet[2774]: I0514 18:01:02.897563 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6b2665be44d7fd000281cb9df089c29-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e6b2665be44d7fd000281cb9df089c29\") " pod="kube-system/kube-apiserver-localhost" May 14 18:01:02.897710 kubelet[2774]: I0514 18:01:02.897584 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:01:02.897710 kubelet[2774]: I0514 18:01:02.897598 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:01:03.178400 sudo[2808]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 14 18:01:03.178769 sudo[2808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 14 18:01:03.583668 kubelet[2774]: I0514 18:01:03.583507 2774 apiserver.go:52] "Watching apiserver" May 14 18:01:03.597667 kubelet[2774]: I0514 18:01:03.597641 2774 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 18:01:03.616836 sudo[2808]: pam_unix(sudo:session): session closed for user root May 14 18:01:03.640885 kubelet[2774]: E0514 18:01:03.640827 2774 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 14 18:01:03.641093 kubelet[2774]: E0514 18:01:03.641042 2774 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 14 18:01:03.662107 kubelet[2774]: I0514 18:01:03.662015 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.66199771 podStartE2EDuration="1.66199771s" podCreationTimestamp="2025-05-14 18:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:01:03.653918296 +0000 UTC m=+1.127916375" watchObservedRunningTime="2025-05-14 18:01:03.66199771 +0000 UTC m=+1.135995749" May 14 18:01:03.662486 kubelet[2774]: I0514 18:01:03.662179 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.6621739 podStartE2EDuration="1.6621739s" podCreationTimestamp="2025-05-14 18:01:02 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:01:03.661816078 +0000 UTC m=+1.135814157" watchObservedRunningTime="2025-05-14 18:01:03.6621739 +0000 UTC m=+1.136171939" May 14 18:01:03.686006 kubelet[2774]: I0514 18:01:03.685937 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.685921867 podStartE2EDuration="1.685921867s" podCreationTimestamp="2025-05-14 18:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:01:03.674355986 +0000 UTC m=+1.148354065" watchObservedRunningTime="2025-05-14 18:01:03.685921867 +0000 UTC m=+1.159919946" May 14 18:01:05.196270 sudo[1736]: pam_unix(sudo:session): session closed for user root May 14 18:01:05.198207 sshd[1735]: Connection closed by 10.0.0.1 port 41580 May 14 18:01:05.198805 sshd-session[1733]: pam_unix(sshd:session): session closed for user core May 14 18:01:05.203132 systemd[1]: sshd@6-10.0.0.60:22-10.0.0.1:41580.service: Deactivated successfully. May 14 18:01:05.205632 systemd[1]: session-7.scope: Deactivated successfully. May 14 18:01:05.206002 systemd[1]: session-7.scope: Consumed 6.255s CPU time, 287.1M memory peak. May 14 18:01:05.207139 systemd-logind[1517]: Session 7 logged out. Waiting for processes to exit. May 14 18:01:05.209183 systemd-logind[1517]: Removed session 7. May 14 18:01:16.470503 update_engine[1523]: I20250514 18:01:16.470402 1523 update_attempter.cc:509] Updating boot flags... May 14 18:01:17.739115 kubelet[2774]: I0514 18:01:17.739076 2774 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 18:01:17.739642 kubelet[2774]: I0514 18:01:17.739571 2774 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 18:01:17.739690 containerd[1542]: time="2025-05-14T18:01:17.739410185Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 14 18:01:18.594949 kubelet[2774]: I0514 18:01:18.594898 2774 topology_manager.go:215] "Topology Admit Handler" podUID="a79996e8-17b0-46a2-9335-5d62861c6cbf" podNamespace="kube-system" podName="kube-proxy-c7dgz" May 14 18:01:18.598963 kubelet[2774]: I0514 18:01:18.598680 2774 topology_manager.go:215] "Topology Admit Handler" podUID="6098839b-ee49-455b-aca9-d27ca604564c" podNamespace="kube-system" podName="cilium-ch2jh" May 14 18:01:18.602665 kubelet[2774]: I0514 18:01:18.602601 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-cni-path\") pod \"cilium-ch2jh\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " pod="kube-system/cilium-ch2jh" May 14 18:01:18.603893 kubelet[2774]: I0514 18:01:18.603285 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6098839b-ee49-455b-aca9-d27ca604564c-clustermesh-secrets\") pod \"cilium-ch2jh\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " pod="kube-system/cilium-ch2jh" May 14 18:01:18.604219 kubelet[2774]: I0514 18:01:18.604193 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a79996e8-17b0-46a2-9335-5d62861c6cbf-lib-modules\") pod \"kube-proxy-c7dgz\" (UID: \"a79996e8-17b0-46a2-9335-5d62861c6cbf\") " pod="kube-system/kube-proxy-c7dgz" May 14 18:01:18.604270 kubelet[2774]: I0514 18:01:18.604224 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-cilium-run\") pod \"cilium-ch2jh\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " pod="kube-system/cilium-ch2jh" May 14 18:01:18.604270 kubelet[2774]: I0514 18:01:18.604241 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-host-proc-sys-net\") pod \"cilium-ch2jh\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " pod="kube-system/cilium-ch2jh" May 14 18:01:18.604270 kubelet[2774]: I0514 18:01:18.604255 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a79996e8-17b0-46a2-9335-5d62861c6cbf-kube-proxy\") pod \"kube-proxy-c7dgz\" (UID: \"a79996e8-17b0-46a2-9335-5d62861c6cbf\") " pod="kube-system/kube-proxy-c7dgz" May 14 18:01:18.604361 kubelet[2774]: I0514 18:01:18.604270 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpxxk\" (UniqueName: \"kubernetes.io/projected/a79996e8-17b0-46a2-9335-5d62861c6cbf-kube-api-access-rpxxk\") pod \"kube-proxy-c7dgz\" (UID: \"a79996e8-17b0-46a2-9335-5d62861c6cbf\") " pod="kube-system/kube-proxy-c7dgz" May 14 18:01:18.604361 kubelet[2774]: I0514 18:01:18.604286 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-hostproc\") pod \"cilium-ch2jh\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " pod="kube-system/cilium-ch2jh" May 14 18:01:18.604361 kubelet[2774]: I0514 18:01:18.604321 2774 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-etc-cni-netd\") pod \"cilium-ch2jh\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " pod="kube-system/cilium-ch2jh" May 14 18:01:18.604361 kubelet[2774]: I0514 18:01:18.604337 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-lib-modules\") pod \"cilium-ch2jh\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " pod="kube-system/cilium-ch2jh" May 14 18:01:18.604361 kubelet[2774]: I0514 18:01:18.604355 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-xtables-lock\") pod \"cilium-ch2jh\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " pod="kube-system/cilium-ch2jh" May 14 18:01:18.604459 kubelet[2774]: I0514 18:01:18.604373 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-bpf-maps\") pod \"cilium-ch2jh\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " pod="kube-system/cilium-ch2jh" May 14 18:01:18.604459 kubelet[2774]: I0514 18:01:18.604389 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-cilium-cgroup\") pod \"cilium-ch2jh\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " pod="kube-system/cilium-ch2jh" May 14 18:01:18.604459 kubelet[2774]: I0514 18:01:18.604402 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6098839b-ee49-455b-aca9-d27ca604564c-hubble-tls\") pod \"cilium-ch2jh\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " pod="kube-system/cilium-ch2jh" May 14 18:01:18.604459 kubelet[2774]: I0514 18:01:18.604416 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a79996e8-17b0-46a2-9335-5d62861c6cbf-xtables-lock\") pod \"kube-proxy-c7dgz\" (UID: \"a79996e8-17b0-46a2-9335-5d62861c6cbf\") " pod="kube-system/kube-proxy-c7dgz" May 14 18:01:18.604459 kubelet[2774]: I0514 18:01:18.604430 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6098839b-ee49-455b-aca9-d27ca604564c-cilium-config-path\") pod \"cilium-ch2jh\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " pod="kube-system/cilium-ch2jh" May 14 18:01:18.604459 kubelet[2774]: I0514 18:01:18.604446 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjdzj\" (UniqueName: \"kubernetes.io/projected/6098839b-ee49-455b-aca9-d27ca604564c-kube-api-access-pjdzj\") pod \"cilium-ch2jh\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " pod="kube-system/cilium-ch2jh" May 14 18:01:18.605349 kubelet[2774]: I0514 18:01:18.604462 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-host-proc-sys-kernel\") pod \"cilium-ch2jh\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " pod="kube-system/cilium-ch2jh" May 14 18:01:18.606076 systemd[1]: Created slice kubepods-besteffort-poda79996e8_17b0_46a2_9335_5d62861c6cbf.slice - libcontainer container kubepods-besteffort-poda79996e8_17b0_46a2_9335_5d62861c6cbf.slice. May 14 18:01:18.621762 systemd[1]: Created slice kubepods-burstable-pod6098839b_ee49_455b_aca9_d27ca604564c.slice - libcontainer container kubepods-burstable-pod6098839b_ee49_455b_aca9_d27ca604564c.slice. May 14 18:01:18.872831 kubelet[2774]: I0514 18:01:18.872187 2774 topology_manager.go:215] "Topology Admit Handler" podUID="59d2438f-4061-4279-86fa-1c5ee6ae9da8" podNamespace="kube-system" podName="cilium-operator-599987898-sssjk" May 14 18:01:18.882523 systemd[1]: Created slice kubepods-besteffort-pod59d2438f_4061_4279_86fa_1c5ee6ae9da8.slice - libcontainer container kubepods-besteffort-pod59d2438f_4061_4279_86fa_1c5ee6ae9da8.slice. May 14 18:01:18.908316 kubelet[2774]: I0514 18:01:18.907150 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59d2438f-4061-4279-86fa-1c5ee6ae9da8-cilium-config-path\") pod \"cilium-operator-599987898-sssjk\" (UID: \"59d2438f-4061-4279-86fa-1c5ee6ae9da8\") " pod="kube-system/cilium-operator-599987898-sssjk" May 14 18:01:18.908543 kubelet[2774]: I0514 18:01:18.908514 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxm4t\" (UniqueName: \"kubernetes.io/projected/59d2438f-4061-4279-86fa-1c5ee6ae9da8-kube-api-access-pxm4t\") pod \"cilium-operator-599987898-sssjk\" (UID: \"59d2438f-4061-4279-86fa-1c5ee6ae9da8\") " pod="kube-system/cilium-operator-599987898-sssjk" May 14 18:01:18.919971 containerd[1542]: time="2025-05-14T18:01:18.919908223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c7dgz,Uid:a79996e8-17b0-46a2-9335-5d62861c6cbf,Namespace:kube-system,Attempt:0,}" May 14 18:01:18.928079 containerd[1542]: time="2025-05-14T18:01:18.928040434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ch2jh,Uid:6098839b-ee49-455b-aca9-d27ca604564c,Namespace:kube-system,Attempt:0,}" May 14 18:01:18.938348 containerd[1542]: time="2025-05-14T18:01:18.938270381Z" level=info msg="connecting to shim 297fa3edf61e7a2a32fcb08ec2d806579387aed7bbc593c9d58c316cba2ed1ec" address="unix:///run/containerd/s/a351c79ab64afff3be7ec05cc6f24e2207c2a689f48c8432bc3d929c0990c4bb" namespace=k8s.io protocol=ttrpc version=3 May 14 18:01:18.952069 containerd[1542]: time="2025-05-14T18:01:18.952027197Z" level=info msg="connecting to shim 6043978b0b180eed9f457878d84f0768e9777b2c5cbd215f0cb7b3b7db18742d" address="unix:///run/containerd/s/d0df527cc77936e30410e14b6c5f4f15c09aef1d0f697b5941b665fab9f4a1bf" namespace=k8s.io protocol=ttrpc version=3 May 14 18:01:18.965461 systemd[1]: Started cri-containerd-297fa3edf61e7a2a32fcb08ec2d806579387aed7bbc593c9d58c316cba2ed1ec.scope - libcontainer container 297fa3edf61e7a2a32fcb08ec2d806579387aed7bbc593c9d58c316cba2ed1ec. May 14 18:01:18.974091 systemd[1]: Started cri-containerd-6043978b0b180eed9f457878d84f0768e9777b2c5cbd215f0cb7b3b7db18742d.scope - libcontainer container 6043978b0b180eed9f457878d84f0768e9777b2c5cbd215f0cb7b3b7db18742d. 
May 14 18:01:18.994248 containerd[1542]: time="2025-05-14T18:01:18.994190445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c7dgz,Uid:a79996e8-17b0-46a2-9335-5d62861c6cbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"297fa3edf61e7a2a32fcb08ec2d806579387aed7bbc593c9d58c316cba2ed1ec\"" May 14 18:01:19.003373 containerd[1542]: time="2025-05-14T18:01:19.003329811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ch2jh,Uid:6098839b-ee49-455b-aca9-d27ca604564c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6043978b0b180eed9f457878d84f0768e9777b2c5cbd215f0cb7b3b7db18742d\"" May 14 18:01:19.009489 containerd[1542]: time="2025-05-14T18:01:19.009271422Z" level=info msg="CreateContainer within sandbox \"297fa3edf61e7a2a32fcb08ec2d806579387aed7bbc593c9d58c316cba2ed1ec\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 18:01:19.011976 containerd[1542]: time="2025-05-14T18:01:19.011931035Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 14 18:01:19.018447 containerd[1542]: time="2025-05-14T18:01:19.018405856Z" level=info msg="Container 64a664ffb78cda9cec5fb80b5eb4b2d354497b0a9ce2e67e5a9a00effcce2eeb: CDI devices from CRI Config.CDIDevices: []" May 14 18:01:19.025682 containerd[1542]: time="2025-05-14T18:01:19.025634086Z" level=info msg="CreateContainer within sandbox \"297fa3edf61e7a2a32fcb08ec2d806579387aed7bbc593c9d58c316cba2ed1ec\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"64a664ffb78cda9cec5fb80b5eb4b2d354497b0a9ce2e67e5a9a00effcce2eeb\"" May 14 18:01:19.027318 containerd[1542]: time="2025-05-14T18:01:19.027258082Z" level=info msg="StartContainer for \"64a664ffb78cda9cec5fb80b5eb4b2d354497b0a9ce2e67e5a9a00effcce2eeb\"" May 14 18:01:19.028660 containerd[1542]: time="2025-05-14T18:01:19.028612433Z" level=info msg="connecting to shim 64a664ffb78cda9cec5fb80b5eb4b2d354497b0a9ce2e67e5a9a00effcce2eeb" address="unix:///run/containerd/s/a351c79ab64afff3be7ec05cc6f24e2207c2a689f48c8432bc3d929c0990c4bb" protocol=ttrpc version=3 May 14 18:01:19.051565 systemd[1]: Started cri-containerd-64a664ffb78cda9cec5fb80b5eb4b2d354497b0a9ce2e67e5a9a00effcce2eeb.scope - libcontainer container 64a664ffb78cda9cec5fb80b5eb4b2d354497b0a9ce2e67e5a9a00effcce2eeb. May 14 18:01:19.092466 containerd[1542]: time="2025-05-14T18:01:19.090458556Z" level=info msg="StartContainer for \"64a664ffb78cda9cec5fb80b5eb4b2d354497b0a9ce2e67e5a9a00effcce2eeb\" returns successfully" May 14 18:01:19.189379 containerd[1542]: time="2025-05-14T18:01:19.189262007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-sssjk,Uid:59d2438f-4061-4279-86fa-1c5ee6ae9da8,Namespace:kube-system,Attempt:0,}" May 14 18:01:19.214317 containerd[1542]: time="2025-05-14T18:01:19.213647796Z" level=info msg="connecting to shim 2e21bc7f37ac9578573758f1091920848a1ccabe44efe3bfbae3c4fd8557e158" address="unix:///run/containerd/s/86a6818c6592412e479f92b066b83fa3e78bff1d93d6dbc5b640a10cc602eb89" namespace=k8s.io protocol=ttrpc version=3 May 14 18:01:19.247477 systemd[1]: Started cri-containerd-2e21bc7f37ac9578573758f1091920848a1ccabe44efe3bfbae3c4fd8557e158.scope - libcontainer container 2e21bc7f37ac9578573758f1091920848a1ccabe44efe3bfbae3c4fd8557e158. 
May 14 18:01:19.278236 containerd[1542]: time="2025-05-14T18:01:19.278183416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-sssjk,Uid:59d2438f-4061-4279-86fa-1c5ee6ae9da8,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e21bc7f37ac9578573758f1091920848a1ccabe44efe3bfbae3c4fd8557e158\"" May 14 18:01:19.671534 kubelet[2774]: I0514 18:01:19.671458 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c7dgz" podStartSLOduration=1.671439367 podStartE2EDuration="1.671439367s" podCreationTimestamp="2025-05-14 18:01:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:01:19.671265537 +0000 UTC m=+17.145263656" watchObservedRunningTime="2025-05-14 18:01:19.671439367 +0000 UTC m=+17.145437446" May 14 18:01:27.159136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1914735051.mount: Deactivated successfully. May 14 18:01:30.714391 containerd[1542]: time="2025-05-14T18:01:30.714295220Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:30.715630 containerd[1542]: time="2025-05-14T18:01:30.715585677Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 14 18:01:30.717005 containerd[1542]: time="2025-05-14T18:01:30.716837730Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:30.719545 containerd[1542]: time="2025-05-14T18:01:30.719500212Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 11.70752589s" May 14 18:01:30.719619 containerd[1542]: time="2025-05-14T18:01:30.719545457Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 14 18:01:30.733181 containerd[1542]: time="2025-05-14T18:01:30.731732551Z" level=info msg="CreateContainer within sandbox \"6043978b0b180eed9f457878d84f0768e9777b2c5cbd215f0cb7b3b7db18742d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 18:01:30.734578 containerd[1542]: time="2025-05-14T18:01:30.734525207Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 14 18:01:30.754003 containerd[1542]: time="2025-05-14T18:01:30.753096618Z" level=info msg="Container 39bccc10ddb446705b52449e356eb27597f073ec0a9941e70ee8442655893359: CDI devices from CRI Config.CDIDevices: []" May 14 18:01:30.757741 containerd[1542]: time="2025-05-14T18:01:30.757681265Z" level=info msg="CreateContainer within sandbox \"6043978b0b180eed9f457878d84f0768e9777b2c5cbd215f0cb7b3b7db18742d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"39bccc10ddb446705b52449e356eb27597f073ec0a9941e70ee8442655893359\"" May 14 18:01:30.759187 containerd[1542]: time="2025-05-14T18:01:30.758277008Z" level=info msg="StartContainer for \"39bccc10ddb446705b52449e356eb27597f073ec0a9941e70ee8442655893359\"" May 14 18:01:30.759187 containerd[1542]: time="2025-05-14T18:01:30.759103576Z" level=info msg="connecting to shim 39bccc10ddb446705b52449e356eb27597f073ec0a9941e70ee8442655893359" address="unix:///run/containerd/s/d0df527cc77936e30410e14b6c5f4f15c09aef1d0f697b5941b665fab9f4a1bf" protocol=ttrpc version=3 May 14 18:01:30.815509 systemd[1]: Started cri-containerd-39bccc10ddb446705b52449e356eb27597f073ec0a9941e70ee8442655893359.scope - libcontainer container 39bccc10ddb446705b52449e356eb27597f073ec0a9941e70ee8442655893359. May 14 18:01:30.885813 systemd[1]: cri-containerd-39bccc10ddb446705b52449e356eb27597f073ec0a9941e70ee8442655893359.scope: Deactivated successfully. May 14 18:01:30.925119 containerd[1542]: time="2025-05-14T18:01:30.925067150Z" level=info msg="StartContainer for \"39bccc10ddb446705b52449e356eb27597f073ec0a9941e70ee8442655893359\" returns successfully" May 14 18:01:30.936843 containerd[1542]: time="2025-05-14T18:01:30.936787674Z" level=info msg="received exit event container_id:\"39bccc10ddb446705b52449e356eb27597f073ec0a9941e70ee8442655893359\" id:\"39bccc10ddb446705b52449e356eb27597f073ec0a9941e70ee8442655893359\" pid:3204 exited_at:{seconds:1747245690 nanos:935435891}" May 14 18:01:30.937541 containerd[1542]: time="2025-05-14T18:01:30.937495390Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39bccc10ddb446705b52449e356eb27597f073ec0a9941e70ee8442655893359\" id:\"39bccc10ddb446705b52449e356eb27597f073ec0a9941e70ee8442655893359\" pid:3204 exited_at:{seconds:1747245690 nanos:935435891}" May 14 18:01:30.987954 systemd[1]: Started sshd@7-10.0.0.60:22-10.0.0.1:36266.service - OpenSSH per-connection server daemon (10.0.0.1:36266). May 14 18:01:30.993011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39bccc10ddb446705b52449e356eb27597f073ec0a9941e70ee8442655893359-rootfs.mount: Deactivated successfully. May 14 18:01:31.076959 sshd[3238]: Accepted publickey for core from 10.0.0.1 port 36266 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:01:31.078773 sshd-session[3238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:01:31.083311 systemd-logind[1517]: New session 8 of user core. May 14 18:01:31.090508 systemd[1]: Started session-8.scope - Session 8 of User core. May 14 18:01:31.229984 sshd[3240]: Connection closed by 10.0.0.1 port 36266 May 14 18:01:31.229840 sshd-session[3238]: pam_unix(sshd:session): session closed for user core May 14 18:01:31.233414 systemd[1]: sshd@7-10.0.0.60:22-10.0.0.1:36266.service: Deactivated successfully. May 14 18:01:31.235248 systemd[1]: session-8.scope: Deactivated successfully. May 14 18:01:31.236176 systemd-logind[1517]: Session 8 logged out. Waiting for processes to exit. May 14 18:01:31.237488 systemd-logind[1517]: Removed session 8. 
May 14 18:01:31.689909 containerd[1542]: time="2025-05-14T18:01:31.689853178Z" level=info msg="CreateContainer within sandbox \"6043978b0b180eed9f457878d84f0768e9777b2c5cbd215f0cb7b3b7db18742d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 18:01:31.703697 containerd[1542]: time="2025-05-14T18:01:31.702749937Z" level=info msg="Container 179f8f62a0df263b1249c5f9cdab4ace35108c7c85e3573ed374603109eb8593: CDI devices from CRI Config.CDIDevices: []" May 14 18:01:31.714093 containerd[1542]: time="2025-05-14T18:01:31.714040371Z" level=info msg="CreateContainer within sandbox \"6043978b0b180eed9f457878d84f0768e9777b2c5cbd215f0cb7b3b7db18742d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"179f8f62a0df263b1249c5f9cdab4ace35108c7c85e3573ed374603109eb8593\"" May 14 18:01:31.715533 containerd[1542]: time="2025-05-14T18:01:31.715511522Z" level=info msg="StartContainer for \"179f8f62a0df263b1249c5f9cdab4ace35108c7c85e3573ed374603109eb8593\"" May 14 18:01:31.718351 containerd[1542]: time="2025-05-14T18:01:31.717237138Z" level=info msg="connecting to shim 179f8f62a0df263b1249c5f9cdab4ace35108c7c85e3573ed374603109eb8593" address="unix:///run/containerd/s/d0df527cc77936e30410e14b6c5f4f15c09aef1d0f697b5941b665fab9f4a1bf" protocol=ttrpc version=3 May 14 18:01:31.734506 systemd[1]: Started cri-containerd-179f8f62a0df263b1249c5f9cdab4ace35108c7c85e3573ed374603109eb8593.scope - libcontainer container 179f8f62a0df263b1249c5f9cdab4ace35108c7c85e3573ed374603109eb8593. May 14 18:01:31.788727 containerd[1542]: time="2025-05-14T18:01:31.788655442Z" level=info msg="StartContainer for \"179f8f62a0df263b1249c5f9cdab4ace35108c7c85e3573ed374603109eb8593\" returns successfully" May 14 18:01:31.817906 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 18:01:31.818133 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 18:01:31.819501 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 14 18:01:31.821194 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 18:01:31.823588 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 18:01:31.824027 systemd[1]: cri-containerd-179f8f62a0df263b1249c5f9cdab4ace35108c7c85e3573ed374603109eb8593.scope: Deactivated successfully. May 14 18:01:31.839232 containerd[1542]: time="2025-05-14T18:01:31.839172768Z" level=info msg="received exit event container_id:\"179f8f62a0df263b1249c5f9cdab4ace35108c7c85e3573ed374603109eb8593\" id:\"179f8f62a0df263b1249c5f9cdab4ace35108c7c85e3573ed374603109eb8593\" pid:3265 exited_at:{seconds:1747245691 nanos:831243837}" May 14 18:01:31.841695 containerd[1542]: time="2025-05-14T18:01:31.841635340Z" level=info msg="TaskExit event in podsandbox handler container_id:\"179f8f62a0df263b1249c5f9cdab4ace35108c7c85e3573ed374603109eb8593\" id:\"179f8f62a0df263b1249c5f9cdab4ace35108c7c85e3573ed374603109eb8593\" pid:3265 exited_at:{seconds:1747245691 nanos:831243837}" May 14 18:01:31.857508 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 18:01:31.862422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-179f8f62a0df263b1249c5f9cdab4ace35108c7c85e3573ed374603109eb8593-rootfs.mount: Deactivated successfully. 
May 14 18:01:32.693596 containerd[1542]: time="2025-05-14T18:01:32.693503346Z" level=info msg="CreateContainer within sandbox \"6043978b0b180eed9f457878d84f0768e9777b2c5cbd215f0cb7b3b7db18742d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 18:01:32.707322 containerd[1542]: time="2025-05-14T18:01:32.706379096Z" level=info msg="Container 992daaee92b8b557f31350303ce06761482418eb4c847109a435300d368b51b8: CDI devices from CRI Config.CDIDevices: []" May 14 18:01:32.716734 containerd[1542]: time="2025-05-14T18:01:32.716680472Z" level=info msg="CreateContainer within sandbox \"6043978b0b180eed9f457878d84f0768e9777b2c5cbd215f0cb7b3b7db18742d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"992daaee92b8b557f31350303ce06761482418eb4c847109a435300d368b51b8\"" May 14 18:01:32.717487 containerd[1542]: time="2025-05-14T18:01:32.717438147Z" level=info msg="StartContainer for \"992daaee92b8b557f31350303ce06761482418eb4c847109a435300d368b51b8\"" May 14 18:01:32.718807 containerd[1542]: time="2025-05-14T18:01:32.718784240Z" level=info msg="connecting to shim 992daaee92b8b557f31350303ce06761482418eb4c847109a435300d368b51b8" address="unix:///run/containerd/s/d0df527cc77936e30410e14b6c5f4f15c09aef1d0f697b5941b665fab9f4a1bf" protocol=ttrpc version=3 May 14 18:01:32.743511 systemd[1]: Started cri-containerd-992daaee92b8b557f31350303ce06761482418eb4c847109a435300d368b51b8.scope - libcontainer container 992daaee92b8b557f31350303ce06761482418eb4c847109a435300d368b51b8. May 14 18:01:32.785578 containerd[1542]: time="2025-05-14T18:01:32.785529503Z" level=info msg="StartContainer for \"992daaee92b8b557f31350303ce06761482418eb4c847109a435300d368b51b8\" returns successfully" May 14 18:01:32.788274 systemd[1]: cri-containerd-992daaee92b8b557f31350303ce06761482418eb4c847109a435300d368b51b8.scope: Deactivated successfully. May 14 18:01:32.790919 containerd[1542]: time="2025-05-14T18:01:32.790790262Z" level=info msg="received exit event container_id:\"992daaee92b8b557f31350303ce06761482418eb4c847109a435300d368b51b8\" id:\"992daaee92b8b557f31350303ce06761482418eb4c847109a435300d368b51b8\" pid:3315 exited_at:{seconds:1747245692 nanos:790572201}" May 14 18:01:32.790919 containerd[1542]: time="2025-05-14T18:01:32.790884232Z" level=info msg="TaskExit event in podsandbox handler container_id:\"992daaee92b8b557f31350303ce06761482418eb4c847109a435300d368b51b8\" id:\"992daaee92b8b557f31350303ce06761482418eb4c847109a435300d368b51b8\" pid:3315 exited_at:{seconds:1747245692 nanos:790572201}" May 14 18:01:32.810720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-992daaee92b8b557f31350303ce06761482418eb4c847109a435300d368b51b8-rootfs.mount: Deactivated successfully. May 14 18:01:33.590442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2873361532.mount: Deactivated successfully. 
May 14 18:01:33.701953 containerd[1542]: time="2025-05-14T18:01:33.701896873Z" level=info msg="CreateContainer within sandbox \"6043978b0b180eed9f457878d84f0768e9777b2c5cbd215f0cb7b3b7db18742d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 18:01:33.723719 containerd[1542]: time="2025-05-14T18:01:33.723670746Z" level=info msg="Container fd3ac8986a7646d269f58528b75f106be9b7e391ab96654a61ec47fc02da299f: CDI devices from CRI Config.CDIDevices: []" May 14 18:01:33.732455 containerd[1542]: time="2025-05-14T18:01:33.731811602Z" level=info msg="CreateContainer within sandbox \"6043978b0b180eed9f457878d84f0768e9777b2c5cbd215f0cb7b3b7db18742d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fd3ac8986a7646d269f58528b75f106be9b7e391ab96654a61ec47fc02da299f\"" May 14 18:01:33.734745 containerd[1542]: time="2025-05-14T18:01:33.734712158Z" level=info msg="StartContainer for \"fd3ac8986a7646d269f58528b75f106be9b7e391ab96654a61ec47fc02da299f\"" May 14 18:01:33.735645 containerd[1542]: time="2025-05-14T18:01:33.735622085Z" level=info msg="connecting to shim fd3ac8986a7646d269f58528b75f106be9b7e391ab96654a61ec47fc02da299f" address="unix:///run/containerd/s/d0df527cc77936e30410e14b6c5f4f15c09aef1d0f697b5941b665fab9f4a1bf" protocol=ttrpc version=3 May 14 18:01:33.761538 systemd[1]: Started cri-containerd-fd3ac8986a7646d269f58528b75f106be9b7e391ab96654a61ec47fc02da299f.scope - libcontainer container fd3ac8986a7646d269f58528b75f106be9b7e391ab96654a61ec47fc02da299f. May 14 18:01:33.795408 systemd[1]: cri-containerd-fd3ac8986a7646d269f58528b75f106be9b7e391ab96654a61ec47fc02da299f.scope: Deactivated successfully. May 14 18:01:33.797371 containerd[1542]: time="2025-05-14T18:01:33.797329001Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd3ac8986a7646d269f58528b75f106be9b7e391ab96654a61ec47fc02da299f\" id:\"fd3ac8986a7646d269f58528b75f106be9b7e391ab96654a61ec47fc02da299f\" pid:3362 exited_at:{seconds:1747245693 nanos:796698621}" May 14 18:01:33.798890 containerd[1542]: time="2025-05-14T18:01:33.798858107Z" level=info msg="received exit event container_id:\"fd3ac8986a7646d269f58528b75f106be9b7e391ab96654a61ec47fc02da299f\" id:\"fd3ac8986a7646d269f58528b75f106be9b7e391ab96654a61ec47fc02da299f\" pid:3362 exited_at:{seconds:1747245693 nanos:796698621}" May 14 18:01:33.801713 containerd[1542]: time="2025-05-14T18:01:33.801669175Z" level=info msg="StartContainer for \"fd3ac8986a7646d269f58528b75f106be9b7e391ab96654a61ec47fc02da299f\" returns successfully" May 14 18:01:33.806543 containerd[1542]: time="2025-05-14T18:01:33.801650093Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6098839b_ee49_455b_aca9_d27ca604564c.slice/cri-containerd-fd3ac8986a7646d269f58528b75f106be9b7e391ab96654a61ec47fc02da299f.scope/memory.events\": no such file or directory" May 14 18:01:33.828909 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd3ac8986a7646d269f58528b75f106be9b7e391ab96654a61ec47fc02da299f-rootfs.mount: Deactivated successfully. 
May 14 18:01:34.209321 containerd[1542]: time="2025-05-14T18:01:34.209204001Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:34.209630 containerd[1542]: time="2025-05-14T18:01:34.209581916Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 14 18:01:34.210455 containerd[1542]: time="2025-05-14T18:01:34.210423433Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:34.212133 containerd[1542]: time="2025-05-14T18:01:34.211711872Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.477128739s" May 14 18:01:34.212133 containerd[1542]: time="2025-05-14T18:01:34.211755876Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 14 18:01:34.215861 containerd[1542]: time="2025-05-14T18:01:34.215431654Z" level=info msg="CreateContainer within sandbox \"2e21bc7f37ac9578573758f1091920848a1ccabe44efe3bfbae3c4fd8557e158\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 14 18:01:34.231329 containerd[1542]: time="2025-05-14T18:01:34.229090512Z" level=info msg="Container 083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b: CDI devices from CRI Config.CDIDevices: []" May 14 18:01:34.235129 containerd[1542]: time="2025-05-14T18:01:34.235086504Z" level=info msg="CreateContainer within sandbox \"2e21bc7f37ac9578573758f1091920848a1ccabe44efe3bfbae3c4fd8557e158\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b\"" May 14 18:01:34.235568 containerd[1542]: time="2025-05-14T18:01:34.235547946Z" level=info msg="StartContainer for \"083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b\"" May 14 18:01:34.236682 containerd[1542]: time="2025-05-14T18:01:34.236646207Z" level=info msg="connecting to shim 083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b" address="unix:///run/containerd/s/86a6818c6592412e479f92b066b83fa3e78bff1d93d6dbc5b640a10cc602eb89" protocol=ttrpc version=3 May 14 18:01:34.268498 systemd[1]: Started cri-containerd-083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b.scope - libcontainer container 083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b. 
May 14 18:01:34.297123 containerd[1542]: time="2025-05-14T18:01:34.296162126Z" level=info msg="StartContainer for \"083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b\" returns successfully" May 14 18:01:34.709433 containerd[1542]: time="2025-05-14T18:01:34.709388923Z" level=info msg="CreateContainer within sandbox \"6043978b0b180eed9f457878d84f0768e9777b2c5cbd215f0cb7b3b7db18742d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 18:01:34.751523 containerd[1542]: time="2025-05-14T18:01:34.751470836Z" level=info msg="Container dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09: CDI devices from CRI Config.CDIDevices: []" May 14 18:01:34.765240 containerd[1542]: time="2025-05-14T18:01:34.765195500Z" level=info msg="CreateContainer within sandbox \"6043978b0b180eed9f457878d84f0768e9777b2c5cbd215f0cb7b3b7db18742d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09\"" May 14 18:01:34.766861 containerd[1542]: time="2025-05-14T18:01:34.766781406Z" level=info msg="StartContainer for \"dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09\"" May 14 18:01:34.776351 containerd[1542]: time="2025-05-14T18:01:34.776308403Z" level=info msg="connecting to shim dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09" address="unix:///run/containerd/s/d0df527cc77936e30410e14b6c5f4f15c09aef1d0f697b5941b665fab9f4a1bf" protocol=ttrpc version=3 May 14 18:01:34.797510 kubelet[2774]: I0514 18:01:34.797443 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-sssjk" podStartSLOduration=1.8639664740000002 podStartE2EDuration="16.797425426s" podCreationTimestamp="2025-05-14 18:01:18 +0000 UTC" firstStartedPulling="2025-05-14 18:01:19.279435589 +0000 UTC m=+16.753433668" lastFinishedPulling="2025-05-14 18:01:34.212894541 +0000 UTC m=+31.686892620" observedRunningTime="2025-05-14 18:01:34.758763508 +0000 UTC m=+32.232761587" watchObservedRunningTime="2025-05-14 18:01:34.797425426 +0000 UTC m=+32.271423505" May 14 18:01:34.806614 systemd[1]: Started cri-containerd-dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09.scope - libcontainer container dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09. 
May 14 18:01:34.879509 containerd[1542]: time="2025-05-14T18:01:34.879465298Z" level=info msg="StartContainer for \"dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09\" returns successfully" May 14 18:01:35.061013 containerd[1542]: time="2025-05-14T18:01:35.060896297Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09\" id:\"38da026bb6b44ab72371aaf555339b640911cf13be81368388afc93c0998b8c1\" pid:3472 exited_at:{seconds:1747245695 nanos:59638425}" May 14 18:01:35.093988 kubelet[2774]: I0514 18:01:35.093918 2774 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 14 18:01:35.122384 kubelet[2774]: I0514 18:01:35.121857 2774 topology_manager.go:215] "Topology Admit Handler" podUID="c20a707b-68d7-404d-bf6f-be3b2554a5a7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lql2b" May 14 18:01:35.122384 kubelet[2774]: I0514 18:01:35.122031 2774 topology_manager.go:215] "Topology Admit Handler" podUID="6d8175f7-1bbc-468f-9128-9d6d9912203e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wt47c" May 14 18:01:35.134127 systemd[1]: Created slice kubepods-burstable-podc20a707b_68d7_404d_bf6f_be3b2554a5a7.slice - libcontainer container kubepods-burstable-podc20a707b_68d7_404d_bf6f_be3b2554a5a7.slice. May 14 18:01:35.145411 systemd[1]: Created slice kubepods-burstable-pod6d8175f7_1bbc_468f_9128_9d6d9912203e.slice - libcontainer container kubepods-burstable-pod6d8175f7_1bbc_468f_9128_9d6d9912203e.slice. May 14 18:01:35.317458 kubelet[2774]: I0514 18:01:35.317332 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk8x5\" (UniqueName: \"kubernetes.io/projected/6d8175f7-1bbc-468f-9128-9d6d9912203e-kube-api-access-lk8x5\") pod \"coredns-7db6d8ff4d-wt47c\" (UID: \"6d8175f7-1bbc-468f-9128-9d6d9912203e\") " pod="kube-system/coredns-7db6d8ff4d-wt47c" May 14 18:01:35.317458 kubelet[2774]: I0514 18:01:35.317378 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c20a707b-68d7-404d-bf6f-be3b2554a5a7-config-volume\") pod \"coredns-7db6d8ff4d-lql2b\" (UID: \"c20a707b-68d7-404d-bf6f-be3b2554a5a7\") " pod="kube-system/coredns-7db6d8ff4d-lql2b" May 14 18:01:35.317458 kubelet[2774]: I0514 18:01:35.317431 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgtg8\" (UniqueName: \"kubernetes.io/projected/c20a707b-68d7-404d-bf6f-be3b2554a5a7-kube-api-access-lgtg8\") pod \"coredns-7db6d8ff4d-lql2b\" (UID: \"c20a707b-68d7-404d-bf6f-be3b2554a5a7\") " pod="kube-system/coredns-7db6d8ff4d-lql2b" May 14 18:01:35.317458 kubelet[2774]: I0514 18:01:35.317451 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d8175f7-1bbc-468f-9128-9d6d9912203e-config-volume\") pod \"coredns-7db6d8ff4d-wt47c\" (UID: \"6d8175f7-1bbc-468f-9128-9d6d9912203e\") " pod="kube-system/coredns-7db6d8ff4d-wt47c" May 14 18:01:35.446591 containerd[1542]: time="2025-05-14T18:01:35.446522241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lql2b,Uid:c20a707b-68d7-404d-bf6f-be3b2554a5a7,Namespace:kube-system,Attempt:0,}" May 14 18:01:35.450174 containerd[1542]: time="2025-05-14T18:01:35.450120682Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-wt47c,Uid:6d8175f7-1bbc-468f-9128-9d6d9912203e,Namespace:kube-system,Attempt:0,}" May 14 18:01:36.247502 systemd[1]: Started sshd@8-10.0.0.60:22-10.0.0.1:50852.service - OpenSSH per-connection server daemon (10.0.0.1:50852). May 14 18:01:36.322379 sshd[3567]: Accepted publickey for core from 10.0.0.1 port 50852 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:01:36.323683 sshd-session[3567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:01:36.328190 systemd-logind[1517]: New session 9 of user core. May 14 18:01:36.336501 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 18:01:36.473940 sshd[3569]: Connection closed by 10.0.0.1 port 50852 May 14 18:01:36.474739 sshd-session[3567]: pam_unix(sshd:session): session closed for user core May 14 18:01:36.478594 systemd[1]: sshd@8-10.0.0.60:22-10.0.0.1:50852.service: Deactivated successfully. May 14 18:01:36.482665 systemd[1]: session-9.scope: Deactivated successfully. May 14 18:01:36.484645 systemd-logind[1517]: Session 9 logged out. Waiting for processes to exit. May 14 18:01:36.487580 systemd-logind[1517]: Removed session 9. May 14 18:01:38.005417 systemd-networkd[1447]: cilium_host: Link UP May 14 18:01:38.005641 systemd-networkd[1447]: cilium_net: Link UP May 14 18:01:38.005886 systemd-networkd[1447]: cilium_host: Gained carrier May 14 18:01:38.006105 systemd-networkd[1447]: cilium_net: Gained carrier May 14 18:01:38.098425 systemd-networkd[1447]: cilium_host: Gained IPv6LL May 14 18:01:38.103674 systemd-networkd[1447]: cilium_vxlan: Link UP May 14 18:01:38.103684 systemd-networkd[1447]: cilium_vxlan: Gained carrier May 14 18:01:38.461755 kernel: NET: Registered PF_ALG protocol family May 14 18:01:38.994421 systemd-networkd[1447]: cilium_net: Gained IPv6LL May 14 18:01:39.093800 systemd-networkd[1447]: lxc_health: Link UP May 14 18:01:39.111474 systemd-networkd[1447]: lxc_health: Gained carrier May 14 18:01:39.443427 systemd-networkd[1447]: cilium_vxlan: Gained IPv6LL May 14 18:01:39.600264 systemd-networkd[1447]: lxc6111b22519db: Link UP May 14 18:01:39.609371 kernel: eth0: renamed from tmpe9d65 May 14 18:01:39.619351 kernel: eth0: renamed from tmp859dc May 14 18:01:39.623704 systemd-networkd[1447]: lxc894e68b4494c: Link UP May 14 18:01:39.625004 systemd-networkd[1447]: lxc6111b22519db: Gained carrier May 14 18:01:39.625432 systemd-networkd[1447]: lxc894e68b4494c: Gained carrier May 14 18:01:40.914489 systemd-networkd[1447]: lxc6111b22519db: Gained IPv6LL May 14 18:01:40.915429 systemd-networkd[1447]: lxc_health: Gained IPv6LL May 14 18:01:40.979993 kubelet[2774]: I0514 18:01:40.979914 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ch2jh" podStartSLOduration=11.261752279 podStartE2EDuration="22.979897565s" podCreationTimestamp="2025-05-14 18:01:18 +0000 UTC" firstStartedPulling="2025-05-14 18:01:19.005277703 +0000 UTC m=+16.479275782" lastFinishedPulling="2025-05-14 18:01:30.723422989 +0000 UTC m=+28.197421068" observedRunningTime="2025-05-14 18:01:35.746704575 +0000 UTC m=+33.220702654" watchObservedRunningTime="2025-05-14 18:01:40.979897565 +0000 UTC m=+38.453895644" May 14 18:01:41.427853 systemd-networkd[1447]: lxc894e68b4494c: Gained IPv6LL May 14 18:01:41.496668 systemd[1]: Started sshd@9-10.0.0.60:22-10.0.0.1:50864.service - OpenSSH per-connection server daemon (10.0.0.1:50864). 
May 14 18:01:41.583794 sshd[3959]: Accepted publickey for core from 10.0.0.1 port 50864 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:01:41.585470 sshd-session[3959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:01:41.593220 systemd-logind[1517]: New session 10 of user core. May 14 18:01:41.600499 systemd[1]: Started session-10.scope - Session 10 of User core. May 14 18:01:41.743277 sshd[3961]: Connection closed by 10.0.0.1 port 50864 May 14 18:01:41.744097 sshd-session[3959]: pam_unix(sshd:session): session closed for user core May 14 18:01:41.751824 systemd-logind[1517]: Session 10 logged out. Waiting for processes to exit. May 14 18:01:41.752669 systemd[1]: sshd@9-10.0.0.60:22-10.0.0.1:50864.service: Deactivated successfully. May 14 18:01:41.754358 systemd[1]: session-10.scope: Deactivated successfully. May 14 18:01:41.757208 systemd-logind[1517]: Removed session 10. May 14 18:01:43.552428 containerd[1542]: time="2025-05-14T18:01:43.551989032Z" level=info msg="connecting to shim e9d65e272f4a3a357d4e86ecff057b06f1e849d7fd09df9a7dfcb0904e43f9b7" address="unix:///run/containerd/s/63834c09c87f0fb840f6733ec853e840427dea2daf46e78145547178326e28f3" namespace=k8s.io protocol=ttrpc version=3 May 14 18:01:43.559600 containerd[1542]: time="2025-05-14T18:01:43.559474764Z" level=info msg="connecting to shim 859dcfee1dcf3a2c2356da9e792006b2969c8f264fb2991cf5e235c559bf01f7" address="unix:///run/containerd/s/0f4d7420cc6bea410451df3ccff82ec6bdfbdc5b67649232625a24773b23cb3c" namespace=k8s.io protocol=ttrpc version=3 May 14 18:01:43.584522 systemd[1]: Started cri-containerd-859dcfee1dcf3a2c2356da9e792006b2969c8f264fb2991cf5e235c559bf01f7.scope - libcontainer container 859dcfee1dcf3a2c2356da9e792006b2969c8f264fb2991cf5e235c559bf01f7. May 14 18:01:43.597288 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:01:43.617528 systemd[1]: Started cri-containerd-e9d65e272f4a3a357d4e86ecff057b06f1e849d7fd09df9a7dfcb0904e43f9b7.scope - libcontainer container e9d65e272f4a3a357d4e86ecff057b06f1e849d7fd09df9a7dfcb0904e43f9b7. 
May 14 18:01:43.632569 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:01:43.655316 containerd[1542]: time="2025-05-14T18:01:43.654834932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lql2b,Uid:c20a707b-68d7-404d-bf6f-be3b2554a5a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"859dcfee1dcf3a2c2356da9e792006b2969c8f264fb2991cf5e235c559bf01f7\"" May 14 18:01:43.659078 containerd[1542]: time="2025-05-14T18:01:43.658935943Z" level=info msg="CreateContainer within sandbox \"859dcfee1dcf3a2c2356da9e792006b2969c8f264fb2991cf5e235c559bf01f7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 18:01:43.671647 containerd[1542]: time="2025-05-14T18:01:43.671597802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wt47c,Uid:6d8175f7-1bbc-468f-9128-9d6d9912203e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9d65e272f4a3a357d4e86ecff057b06f1e849d7fd09df9a7dfcb0904e43f9b7\"" May 14 18:01:43.676175 containerd[1542]: time="2025-05-14T18:01:43.676133684Z" level=info msg="CreateContainer within sandbox \"e9d65e272f4a3a357d4e86ecff057b06f1e849d7fd09df9a7dfcb0904e43f9b7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 18:01:43.726723 containerd[1542]: time="2025-05-14T18:01:43.726667790Z" level=info msg="Container b459beeb90129b319c4f6ca7e1d1828ba85aec8ac592b6c5c8358fa3aef2c885: CDI devices from CRI Config.CDIDevices: []" May 14 18:01:43.727475 containerd[1542]: time="2025-05-14T18:01:43.727385961Z" level=info msg="Container 38d8274d9e6e79d0cdd6ab14783a4962d35d1f884485c8cf6316f87c781505b4: CDI devices from CRI Config.CDIDevices: []" May 14 18:01:43.733818 containerd[1542]: time="2025-05-14T18:01:43.733761734Z" level=info msg="CreateContainer within sandbox \"859dcfee1dcf3a2c2356da9e792006b2969c8f264fb2991cf5e235c559bf01f7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b459beeb90129b319c4f6ca7e1d1828ba85aec8ac592b6c5c8358fa3aef2c885\"" May 14 18:01:43.735732 containerd[1542]: time="2025-05-14T18:01:43.734976620Z" level=info msg="StartContainer for \"b459beeb90129b319c4f6ca7e1d1828ba85aec8ac592b6c5c8358fa3aef2c885\"" May 14 18:01:43.737125 containerd[1542]: time="2025-05-14T18:01:43.737093690Z" level=info msg="connecting to shim b459beeb90129b319c4f6ca7e1d1828ba85aec8ac592b6c5c8358fa3aef2c885" address="unix:///run/containerd/s/0f4d7420cc6bea410451df3ccff82ec6bdfbdc5b67649232625a24773b23cb3c" protocol=ttrpc version=3 May 14 18:01:43.737481 containerd[1542]: time="2025-05-14T18:01:43.737368430Z" level=info msg="CreateContainer within sandbox \"e9d65e272f4a3a357d4e86ecff057b06f1e849d7fd09df9a7dfcb0904e43f9b7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"38d8274d9e6e79d0cdd6ab14783a4962d35d1f884485c8cf6316f87c781505b4\"" May 14 18:01:43.738452 containerd[1542]: time="2025-05-14T18:01:43.738420264Z" level=info msg="StartContainer for \"38d8274d9e6e79d0cdd6ab14783a4962d35d1f884485c8cf6316f87c781505b4\"" May 14 18:01:43.739258 containerd[1542]: time="2025-05-14T18:01:43.739225322Z" level=info msg="connecting to shim 38d8274d9e6e79d0cdd6ab14783a4962d35d1f884485c8cf6316f87c781505b4" address="unix:///run/containerd/s/63834c09c87f0fb840f6733ec853e840427dea2daf46e78145547178326e28f3" protocol=ttrpc version=3 May 14 18:01:43.768526 systemd[1]: Started cri-containerd-38d8274d9e6e79d0cdd6ab14783a4962d35d1f884485c8cf6316f87c781505b4.scope - libcontainer container 
38d8274d9e6e79d0cdd6ab14783a4962d35d1f884485c8cf6316f87c781505b4. May 14 18:01:43.772125 systemd[1]: Started cri-containerd-b459beeb90129b319c4f6ca7e1d1828ba85aec8ac592b6c5c8358fa3aef2c885.scope - libcontainer container b459beeb90129b319c4f6ca7e1d1828ba85aec8ac592b6c5c8358fa3aef2c885. May 14 18:01:43.801815 containerd[1542]: time="2025-05-14T18:01:43.801722717Z" level=info msg="StartContainer for \"38d8274d9e6e79d0cdd6ab14783a4962d35d1f884485c8cf6316f87c781505b4\" returns successfully" May 14 18:01:43.810254 containerd[1542]: time="2025-05-14T18:01:43.809638279Z" level=info msg="StartContainer for \"b459beeb90129b319c4f6ca7e1d1828ba85aec8ac592b6c5c8358fa3aef2c885\" returns successfully" May 14 18:01:44.537090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1779678261.mount: Deactivated successfully. May 14 18:01:44.771193 kubelet[2774]: I0514 18:01:44.770463 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wt47c" podStartSLOduration=26.770447946 podStartE2EDuration="26.770447946s" podCreationTimestamp="2025-05-14 18:01:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:01:44.770100922 +0000 UTC m=+42.244099081" watchObservedRunningTime="2025-05-14 18:01:44.770447946 +0000 UTC m=+42.244446025" May 14 18:01:44.798434 kubelet[2774]: I0514 18:01:44.797949 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-lql2b" podStartSLOduration=26.797931891 podStartE2EDuration="26.797931891s" podCreationTimestamp="2025-05-14 18:01:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:01:44.794642943 +0000 UTC m=+42.268640982" watchObservedRunningTime="2025-05-14 18:01:44.797931891 +0000 UTC m=+42.271929970" May 14 18:01:46.768541 systemd[1]: Started sshd@10-10.0.0.60:22-10.0.0.1:58172.service - OpenSSH per-connection server daemon (10.0.0.1:58172). May 14 18:01:46.830062 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 58172 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:01:46.831496 sshd-session[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:01:46.835760 systemd-logind[1517]: New session 11 of user core. May 14 18:01:46.847506 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 18:01:46.966685 sshd[4157]: Connection closed by 10.0.0.1 port 58172 May 14 18:01:46.966174 sshd-session[4155]: pam_unix(sshd:session): session closed for user core May 14 18:01:46.983676 systemd[1]: sshd@10-10.0.0.60:22-10.0.0.1:58172.service: Deactivated successfully. May 14 18:01:46.986823 systemd[1]: session-11.scope: Deactivated successfully. May 14 18:01:46.987710 systemd-logind[1517]: Session 11 logged out. Waiting for processes to exit. May 14 18:01:46.990472 systemd[1]: Started sshd@11-10.0.0.60:22-10.0.0.1:58176.service - OpenSSH per-connection server daemon (10.0.0.1:58176). May 14 18:01:46.991306 systemd-logind[1517]: Removed session 11. May 14 18:01:47.038017 sshd[4173]: Accepted publickey for core from 10.0.0.1 port 58176 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:01:47.039692 sshd-session[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:01:47.044092 systemd-logind[1517]: New session 12 of user core. 
May 14 18:01:47.056490 systemd[1]: Started session-12.scope - Session 12 of User core. May 14 18:01:47.204131 sshd[4175]: Connection closed by 10.0.0.1 port 58176 May 14 18:01:47.204647 sshd-session[4173]: pam_unix(sshd:session): session closed for user core May 14 18:01:47.219171 systemd[1]: sshd@11-10.0.0.60:22-10.0.0.1:58176.service: Deactivated successfully. May 14 18:01:47.223957 systemd[1]: session-12.scope: Deactivated successfully. May 14 18:01:47.226383 systemd-logind[1517]: Session 12 logged out. Waiting for processes to exit. May 14 18:01:47.229546 systemd[1]: Started sshd@12-10.0.0.60:22-10.0.0.1:58182.service - OpenSSH per-connection server daemon (10.0.0.1:58182). May 14 18:01:47.231395 systemd-logind[1517]: Removed session 12. May 14 18:01:47.291153 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 58182 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:01:47.292610 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:01:47.297385 systemd-logind[1517]: New session 13 of user core. May 14 18:01:47.321517 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 18:01:47.439209 sshd[4189]: Connection closed by 10.0.0.1 port 58182 May 14 18:01:47.439569 sshd-session[4187]: pam_unix(sshd:session): session closed for user core May 14 18:01:47.443022 systemd[1]: sshd@12-10.0.0.60:22-10.0.0.1:58182.service: Deactivated successfully. May 14 18:01:47.445896 systemd[1]: session-13.scope: Deactivated successfully. May 14 18:01:47.446657 systemd-logind[1517]: Session 13 logged out. Waiting for processes to exit. May 14 18:01:47.448115 systemd-logind[1517]: Removed session 13. May 14 18:01:52.451907 systemd[1]: Started sshd@13-10.0.0.60:22-10.0.0.1:58198.service - OpenSSH per-connection server daemon (10.0.0.1:58198). May 14 18:01:52.493504 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 58198 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:01:52.494695 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:01:52.498631 systemd-logind[1517]: New session 14 of user core. May 14 18:01:52.512467 systemd[1]: Started session-14.scope - Session 14 of User core. May 14 18:01:52.623980 sshd[4207]: Connection closed by 10.0.0.1 port 58198 May 14 18:01:52.624512 sshd-session[4205]: pam_unix(sshd:session): session closed for user core May 14 18:01:52.627895 systemd[1]: sshd@13-10.0.0.60:22-10.0.0.1:58198.service: Deactivated successfully. May 14 18:01:52.629633 systemd[1]: session-14.scope: Deactivated successfully. May 14 18:01:52.630327 systemd-logind[1517]: Session 14 logged out. Waiting for processes to exit. May 14 18:01:52.631427 systemd-logind[1517]: Removed session 14. May 14 18:01:57.635424 systemd[1]: Started sshd@14-10.0.0.60:22-10.0.0.1:49062.service - OpenSSH per-connection server daemon (10.0.0.1:49062). May 14 18:01:57.684224 sshd[4220]: Accepted publickey for core from 10.0.0.1 port 49062 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:01:57.685454 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:01:57.689271 systemd-logind[1517]: New session 15 of user core. May 14 18:01:57.696434 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 14 18:01:57.806585 sshd[4222]: Connection closed by 10.0.0.1 port 49062 May 14 18:01:57.808121 sshd-session[4220]: pam_unix(sshd:session): session closed for user core May 14 18:01:57.815416 systemd[1]: sshd@14-10.0.0.60:22-10.0.0.1:49062.service: Deactivated successfully. May 14 18:01:57.816974 systemd[1]: session-15.scope: Deactivated successfully. May 14 18:01:57.817723 systemd-logind[1517]: Session 15 logged out. Waiting for processes to exit. May 14 18:01:57.820084 systemd[1]: Started sshd@15-10.0.0.60:22-10.0.0.1:49064.service - OpenSSH per-connection server daemon (10.0.0.1:49064). May 14 18:01:57.820844 systemd-logind[1517]: Removed session 15. May 14 18:01:57.869662 sshd[4236]: Accepted publickey for core from 10.0.0.1 port 49064 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:01:57.871115 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:01:57.875551 systemd-logind[1517]: New session 16 of user core. May 14 18:01:57.886473 systemd[1]: Started session-16.scope - Session 16 of User core. May 14 18:01:58.085395 sshd[4238]: Connection closed by 10.0.0.1 port 49064 May 14 18:01:58.086409 sshd-session[4236]: pam_unix(sshd:session): session closed for user core May 14 18:01:58.098741 systemd[1]: sshd@15-10.0.0.60:22-10.0.0.1:49064.service: Deactivated successfully. May 14 18:01:58.101036 systemd[1]: session-16.scope: Deactivated successfully. May 14 18:01:58.102264 systemd-logind[1517]: Session 16 logged out. Waiting for processes to exit. May 14 18:01:58.104548 systemd[1]: Started sshd@16-10.0.0.60:22-10.0.0.1:49072.service - OpenSSH per-connection server daemon (10.0.0.1:49072). May 14 18:01:58.105361 systemd-logind[1517]: Removed session 16. May 14 18:01:58.154048 sshd[4249]: Accepted publickey for core from 10.0.0.1 port 49072 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:01:58.155679 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:01:58.159924 systemd-logind[1517]: New session 17 of user core. May 14 18:01:58.168431 systemd[1]: Started session-17.scope - Session 17 of User core. May 14 18:01:59.458970 sshd[4251]: Connection closed by 10.0.0.1 port 49072 May 14 18:01:59.460504 sshd-session[4249]: pam_unix(sshd:session): session closed for user core May 14 18:01:59.469755 systemd[1]: sshd@16-10.0.0.60:22-10.0.0.1:49072.service: Deactivated successfully. May 14 18:01:59.472395 systemd[1]: session-17.scope: Deactivated successfully. May 14 18:01:59.475495 systemd-logind[1517]: Session 17 logged out. Waiting for processes to exit. May 14 18:01:59.482540 systemd[1]: Started sshd@17-10.0.0.60:22-10.0.0.1:49082.service - OpenSSH per-connection server daemon (10.0.0.1:49082). May 14 18:01:59.484231 systemd-logind[1517]: Removed session 17. May 14 18:01:59.544057 sshd[4272]: Accepted publickey for core from 10.0.0.1 port 49082 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:01:59.545425 sshd-session[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:01:59.549900 systemd-logind[1517]: New session 18 of user core. May 14 18:01:59.562471 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 14 18:01:59.795878 sshd[4274]: Connection closed by 10.0.0.1 port 49082 May 14 18:01:59.797748 sshd-session[4272]: pam_unix(sshd:session): session closed for user core May 14 18:01:59.810329 systemd[1]: sshd@17-10.0.0.60:22-10.0.0.1:49082.service: Deactivated successfully. May 14 18:01:59.812076 systemd[1]: session-18.scope: Deactivated successfully. May 14 18:01:59.814048 systemd-logind[1517]: Session 18 logged out. Waiting for processes to exit. May 14 18:01:59.816012 systemd[1]: Started sshd@18-10.0.0.60:22-10.0.0.1:49098.service - OpenSSH per-connection server daemon (10.0.0.1:49098). May 14 18:01:59.817715 systemd-logind[1517]: Removed session 18. May 14 18:01:59.867124 sshd[4286]: Accepted publickey for core from 10.0.0.1 port 49098 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:01:59.869182 sshd-session[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:01:59.874329 systemd-logind[1517]: New session 19 of user core. May 14 18:01:59.883525 systemd[1]: Started session-19.scope - Session 19 of User core. May 14 18:02:00.006316 sshd[4288]: Connection closed by 10.0.0.1 port 49098 May 14 18:02:00.006776 sshd-session[4286]: pam_unix(sshd:session): session closed for user core May 14 18:02:00.009492 systemd[1]: session-19.scope: Deactivated successfully. May 14 18:02:00.010139 systemd[1]: sshd@18-10.0.0.60:22-10.0.0.1:49098.service: Deactivated successfully. May 14 18:02:00.014525 systemd-logind[1517]: Session 19 logged out. Waiting for processes to exit. May 14 18:02:00.015940 systemd-logind[1517]: Removed session 19. May 14 18:02:05.019324 systemd[1]: Started sshd@19-10.0.0.60:22-10.0.0.1:39820.service - OpenSSH per-connection server daemon (10.0.0.1:39820). May 14 18:02:05.074077 sshd[4308]: Accepted publickey for core from 10.0.0.1 port 39820 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:02:05.075314 sshd-session[4308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:02:05.079863 systemd-logind[1517]: New session 20 of user core. May 14 18:02:05.086455 systemd[1]: Started session-20.scope - Session 20 of User core. May 14 18:02:05.195321 sshd[4310]: Connection closed by 10.0.0.1 port 39820 May 14 18:02:05.195652 sshd-session[4308]: pam_unix(sshd:session): session closed for user core May 14 18:02:05.198229 systemd[1]: sshd@19-10.0.0.60:22-10.0.0.1:39820.service: Deactivated successfully. May 14 18:02:05.200711 systemd[1]: session-20.scope: Deactivated successfully. May 14 18:02:05.202061 systemd-logind[1517]: Session 20 logged out. Waiting for processes to exit. May 14 18:02:05.203718 systemd-logind[1517]: Removed session 20. May 14 18:02:10.212073 systemd[1]: Started sshd@20-10.0.0.60:22-10.0.0.1:39830.service - OpenSSH per-connection server daemon (10.0.0.1:39830). May 14 18:02:10.264966 sshd[4323]: Accepted publickey for core from 10.0.0.1 port 39830 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:02:10.266066 sshd-session[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:02:10.270102 systemd-logind[1517]: New session 21 of user core. May 14 18:02:10.291456 systemd[1]: Started session-21.scope - Session 21 of User core. 
May 14 18:02:10.396520 sshd[4325]: Connection closed by 10.0.0.1 port 39830 May 14 18:02:10.396832 sshd-session[4323]: pam_unix(sshd:session): session closed for user core May 14 18:02:10.399347 systemd[1]: sshd@20-10.0.0.60:22-10.0.0.1:39830.service: Deactivated successfully. May 14 18:02:10.401923 systemd[1]: session-21.scope: Deactivated successfully. May 14 18:02:10.403585 systemd-logind[1517]: Session 21 logged out. Waiting for processes to exit. May 14 18:02:10.405337 systemd-logind[1517]: Removed session 21. May 14 18:02:15.411683 systemd[1]: Started sshd@21-10.0.0.60:22-10.0.0.1:56252.service - OpenSSH per-connection server daemon (10.0.0.1:56252). May 14 18:02:15.472656 sshd[4339]: Accepted publickey for core from 10.0.0.1 port 56252 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:02:15.473796 sshd-session[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:02:15.478143 systemd-logind[1517]: New session 22 of user core. May 14 18:02:15.489488 systemd[1]: Started session-22.scope - Session 22 of User core. May 14 18:02:15.600367 sshd[4341]: Connection closed by 10.0.0.1 port 56252 May 14 18:02:15.601672 sshd-session[4339]: pam_unix(sshd:session): session closed for user core May 14 18:02:15.611672 systemd[1]: sshd@21-10.0.0.60:22-10.0.0.1:56252.service: Deactivated successfully. May 14 18:02:15.613420 systemd[1]: session-22.scope: Deactivated successfully. May 14 18:02:15.614721 systemd-logind[1517]: Session 22 logged out. Waiting for processes to exit. May 14 18:02:15.618122 systemd[1]: Started sshd@22-10.0.0.60:22-10.0.0.1:56264.service - OpenSSH per-connection server daemon (10.0.0.1:56264). May 14 18:02:15.619850 systemd-logind[1517]: Removed session 22. May 14 18:02:15.665024 sshd[4355]: Accepted publickey for core from 10.0.0.1 port 56264 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:02:15.667009 sshd-session[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:02:15.671227 systemd-logind[1517]: New session 23 of user core. May 14 18:02:15.678442 systemd[1]: Started session-23.scope - Session 23 of User core. May 14 18:02:17.293708 containerd[1542]: time="2025-05-14T18:02:17.293659758Z" level=info msg="StopContainer for \"083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b\" with timeout 30 (s)" May 14 18:02:17.295731 containerd[1542]: time="2025-05-14T18:02:17.295703830Z" level=info msg="Stop container \"083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b\" with signal terminated" May 14 18:02:17.306672 systemd[1]: cri-containerd-083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b.scope: Deactivated successfully. 
May 14 18:02:17.308358 containerd[1542]: time="2025-05-14T18:02:17.308266256Z" level=info msg="TaskExit event in podsandbox handler container_id:\"083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b\" id:\"083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b\" pid:3408 exited_at:{seconds:1747245737 nanos:307778948}" May 14 18:02:17.308464 containerd[1542]: time="2025-05-14T18:02:17.308398293Z" level=info msg="received exit event container_id:\"083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b\" id:\"083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b\" pid:3408 exited_at:{seconds:1747245737 nanos:307778948}" May 14 18:02:17.327463 containerd[1542]: time="2025-05-14T18:02:17.327414769Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 18:02:17.332013 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b-rootfs.mount: Deactivated successfully. May 14 18:02:17.334801 containerd[1542]: time="2025-05-14T18:02:17.334752318Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09\" id:\"f7c541f1648869c09a1fbe1a83ee80753ad56f3ef163a4354f3ea27ce5fc3f5a\" pid:4385 exited_at:{seconds:1747245737 nanos:334043734}" May 14 18:02:17.337129 containerd[1542]: time="2025-05-14T18:02:17.337100503Z" level=info msg="StopContainer for \"dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09\" with timeout 2 (s)" May 14 18:02:17.337382 containerd[1542]: time="2025-05-14T18:02:17.337357577Z" level=info msg="Stop container \"dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09\" with signal terminated" May 14 18:02:17.345727 systemd-networkd[1447]: lxc_health: Link DOWN May 14 18:02:17.345736 systemd-networkd[1447]: lxc_health: Lost carrier May 14 18:02:17.351098 containerd[1542]: time="2025-05-14T18:02:17.351044057Z" level=info msg="StopContainer for \"083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b\" returns successfully" May 14 18:02:17.354446 containerd[1542]: time="2025-05-14T18:02:17.354400859Z" level=info msg="StopPodSandbox for \"2e21bc7f37ac9578573758f1091920848a1ccabe44efe3bfbae3c4fd8557e158\"" May 14 18:02:17.365083 systemd[1]: cri-containerd-dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09.scope: Deactivated successfully. May 14 18:02:17.365723 systemd[1]: cri-containerd-dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09.scope: Consumed 6.992s CPU time, 123M memory peak, 196K read from disk, 12.9M written to disk. 
May 14 18:02:17.367892 containerd[1542]: time="2025-05-14T18:02:17.367837465Z" level=info msg="received exit event container_id:\"dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09\" id:\"dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09\" pid:3441 exited_at:{seconds:1747245737 nanos:367574871}" May 14 18:02:17.376412 containerd[1542]: time="2025-05-14T18:02:17.367881544Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09\" id:\"dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09\" pid:3441 exited_at:{seconds:1747245737 nanos:367574871}" May 14 18:02:17.376507 containerd[1542]: time="2025-05-14T18:02:17.371449420Z" level=info msg="Container to stop \"083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:02:17.382197 systemd[1]: cri-containerd-2e21bc7f37ac9578573758f1091920848a1ccabe44efe3bfbae3c4fd8557e158.scope: Deactivated successfully. May 14 18:02:17.383688 containerd[1542]: time="2025-05-14T18:02:17.383647175Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e21bc7f37ac9578573758f1091920848a1ccabe44efe3bfbae3c4fd8557e158\" id:\"2e21bc7f37ac9578573758f1091920848a1ccabe44efe3bfbae3c4fd8557e158\" pid:3057 exit_status:137 exited_at:{seconds:1747245737 nanos:383018430}" May 14 18:02:17.387990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09-rootfs.mount: Deactivated successfully. May 14 18:02:17.397035 containerd[1542]: time="2025-05-14T18:02:17.396898346Z" level=info msg="StopContainer for \"dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09\" returns successfully" May 14 18:02:17.397691 containerd[1542]: time="2025-05-14T18:02:17.397370055Z" level=info msg="StopPodSandbox for \"6043978b0b180eed9f457878d84f0768e9777b2c5cbd215f0cb7b3b7db18742d\"" May 14 18:02:17.397691 containerd[1542]: time="2025-05-14T18:02:17.397462173Z" level=info msg="Container to stop \"dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:02:17.397691 containerd[1542]: time="2025-05-14T18:02:17.397477012Z" level=info msg="Container to stop \"39bccc10ddb446705b52449e356eb27597f073ec0a9941e70ee8442655893359\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:02:17.397691 containerd[1542]: time="2025-05-14T18:02:17.397486052Z" level=info msg="Container to stop \"179f8f62a0df263b1249c5f9cdab4ace35108c7c85e3573ed374603109eb8593\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:02:17.397691 containerd[1542]: time="2025-05-14T18:02:17.397493892Z" level=info msg="Container to stop \"992daaee92b8b557f31350303ce06761482418eb4c847109a435300d368b51b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:02:17.397691 containerd[1542]: time="2025-05-14T18:02:17.397502412Z" level=info msg="Container to stop \"fd3ac8986a7646d269f58528b75f106be9b7e391ab96654a61ec47fc02da299f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:02:17.402991 systemd[1]: cri-containerd-6043978b0b180eed9f457878d84f0768e9777b2c5cbd215f0cb7b3b7db18742d.scope: Deactivated successfully. 
May 14 18:02:17.416799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e21bc7f37ac9578573758f1091920848a1ccabe44efe3bfbae3c4fd8557e158-rootfs.mount: Deactivated successfully. May 14 18:02:17.421867 containerd[1542]: time="2025-05-14T18:02:17.421816884Z" level=info msg="shim disconnected" id=2e21bc7f37ac9578573758f1091920848a1ccabe44efe3bfbae3c4fd8557e158 namespace=k8s.io May 14 18:02:17.421867 containerd[1542]: time="2025-05-14T18:02:17.421848883Z" level=warning msg="cleaning up after shim disconnected" id=2e21bc7f37ac9578573758f1091920848a1ccabe44efe3bfbae3c4fd8557e158 namespace=k8s.io May 14 18:02:17.422033 containerd[1542]: time="2025-05-14T18:02:17.421878282Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 18:02:17.428771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6043978b0b180eed9f457878d84f0768e9777b2c5cbd215f0cb7b3b7db18742d-rootfs.mount: Deactivated successfully. May 14 18:02:17.430643 containerd[1542]: time="2025-05-14T18:02:17.430601598Z" level=info msg="shim disconnected" id=6043978b0b180eed9f457878d84f0768e9777b2c5cbd215f0cb7b3b7db18742d namespace=k8s.io May 14 18:02:17.430913 containerd[1542]: time="2025-05-14T18:02:17.430870032Z" level=warning msg="cleaning up after shim disconnected" id=6043978b0b180eed9f457878d84f0768e9777b2c5cbd215f0cb7b3b7db18742d namespace=k8s.io May 14 18:02:17.431617 containerd[1542]: time="2025-05-14T18:02:17.431590975Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 18:02:17.438408 containerd[1542]: time="2025-05-14T18:02:17.437767271Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6043978b0b180eed9f457878d84f0768e9777b2c5cbd215f0cb7b3b7db18742d\" id:\"6043978b0b180eed9f457878d84f0768e9777b2c5cbd215f0cb7b3b7db18742d\" pid:2943 exit_status:137 exited_at:{seconds:1747245737 nanos:404993997}" May 14 18:02:17.439372 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e21bc7f37ac9578573758f1091920848a1ccabe44efe3bfbae3c4fd8557e158-shm.mount: Deactivated successfully. 
May 14 18:02:17.447081 containerd[1542]: time="2025-05-14T18:02:17.447040974Z" level=info msg="TearDown network for sandbox \"2e21bc7f37ac9578573758f1091920848a1ccabe44efe3bfbae3c4fd8557e158\" successfully" May 14 18:02:17.447665 containerd[1542]: time="2025-05-14T18:02:17.447634160Z" level=info msg="StopPodSandbox for \"2e21bc7f37ac9578573758f1091920848a1ccabe44efe3bfbae3c4fd8557e158\" returns successfully" May 14 18:02:17.448410 containerd[1542]: time="2025-05-14T18:02:17.447336807Z" level=info msg="received exit event sandbox_id:\"2e21bc7f37ac9578573758f1091920848a1ccabe44efe3bfbae3c4fd8557e158\" exit_status:137 exited_at:{seconds:1747245737 nanos:383018430}" May 14 18:02:17.448410 containerd[1542]: time="2025-05-14T18:02:17.447347007Z" level=info msg="received exit event sandbox_id:\"6043978b0b180eed9f457878d84f0768e9777b2c5cbd215f0cb7b3b7db18742d\" exit_status:137 exited_at:{seconds:1747245737 nanos:404993997}" May 14 18:02:17.463140 containerd[1542]: time="2025-05-14T18:02:17.463083879Z" level=info msg="TearDown network for sandbox \"6043978b0b180eed9f457878d84f0768e9777b2c5cbd215f0cb7b3b7db18742d\" successfully" May 14 18:02:17.463140 containerd[1542]: time="2025-05-14T18:02:17.463130918Z" level=info msg="StopPodSandbox for \"6043978b0b180eed9f457878d84f0768e9777b2c5cbd215f0cb7b3b7db18742d\" returns successfully" May 14 18:02:17.667546 kubelet[2774]: I0514 18:02:17.667424 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-host-proc-sys-kernel\") pod \"6098839b-ee49-455b-aca9-d27ca604564c\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " May 14 18:02:17.667546 kubelet[2774]: I0514 18:02:17.667478 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-bpf-maps\") pod \"6098839b-ee49-455b-aca9-d27ca604564c\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " May 14 18:02:17.667546 kubelet[2774]: I0514 18:02:17.667501 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6098839b-ee49-455b-aca9-d27ca604564c-hubble-tls\") pod \"6098839b-ee49-455b-aca9-d27ca604564c\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " May 14 18:02:17.667546 kubelet[2774]: I0514 18:02:17.667533 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-etc-cni-netd\") pod \"6098839b-ee49-455b-aca9-d27ca604564c\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " May 14 18:02:17.667546 kubelet[2774]: I0514 18:02:17.667558 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxm4t\" (UniqueName: \"kubernetes.io/projected/59d2438f-4061-4279-86fa-1c5ee6ae9da8-kube-api-access-pxm4t\") pod \"59d2438f-4061-4279-86fa-1c5ee6ae9da8\" (UID: \"59d2438f-4061-4279-86fa-1c5ee6ae9da8\") " May 14 18:02:17.668773 kubelet[2774]: I0514 18:02:17.667577 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjdzj\" (UniqueName: \"kubernetes.io/projected/6098839b-ee49-455b-aca9-d27ca604564c-kube-api-access-pjdzj\") pod \"6098839b-ee49-455b-aca9-d27ca604564c\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " May 14 18:02:17.668773 kubelet[2774]: I0514 18:02:17.667610 2774 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6098839b-ee49-455b-aca9-d27ca604564c-clustermesh-secrets\") pod \"6098839b-ee49-455b-aca9-d27ca604564c\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " May 14 18:02:17.668773 kubelet[2774]: I0514 18:02:17.667657 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-cilium-run\") pod \"6098839b-ee49-455b-aca9-d27ca604564c\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " May 14 18:02:17.668773 kubelet[2774]: I0514 18:02:17.667688 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-host-proc-sys-net\") pod \"6098839b-ee49-455b-aca9-d27ca604564c\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " May 14 18:02:17.668773 kubelet[2774]: I0514 18:02:17.667707 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-lib-modules\") pod \"6098839b-ee49-455b-aca9-d27ca604564c\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " May 14 18:02:17.668773 kubelet[2774]: I0514 18:02:17.667725 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6098839b-ee49-455b-aca9-d27ca604564c-cilium-config-path\") pod \"6098839b-ee49-455b-aca9-d27ca604564c\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " May 14 18:02:17.668900 kubelet[2774]: I0514 18:02:17.667745 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-xtables-lock\") pod \"6098839b-ee49-455b-aca9-d27ca604564c\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " May 14 18:02:17.668900 kubelet[2774]: I0514 18:02:17.667775 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-cni-path\") pod \"6098839b-ee49-455b-aca9-d27ca604564c\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " May 14 18:02:17.668900 kubelet[2774]: I0514 18:02:17.667791 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-hostproc\") pod \"6098839b-ee49-455b-aca9-d27ca604564c\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " May 14 18:02:17.668900 kubelet[2774]: I0514 18:02:17.667807 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-cilium-cgroup\") pod \"6098839b-ee49-455b-aca9-d27ca604564c\" (UID: \"6098839b-ee49-455b-aca9-d27ca604564c\") " May 14 18:02:17.668900 kubelet[2774]: I0514 18:02:17.667824 2774 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59d2438f-4061-4279-86fa-1c5ee6ae9da8-cilium-config-path\") pod \"59d2438f-4061-4279-86fa-1c5ee6ae9da8\" (UID: \"59d2438f-4061-4279-86fa-1c5ee6ae9da8\") " May 14 18:02:17.670149 kubelet[2774]: I0514 18:02:17.669876 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6098839b-ee49-455b-aca9-d27ca604564c" (UID: "6098839b-ee49-455b-aca9-d27ca604564c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:02:17.670149 kubelet[2774]: I0514 18:02:17.669949 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6098839b-ee49-455b-aca9-d27ca604564c" (UID: "6098839b-ee49-455b-aca9-d27ca604564c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:02:17.670149 kubelet[2774]: I0514 18:02:17.669969 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6098839b-ee49-455b-aca9-d27ca604564c" (UID: "6098839b-ee49-455b-aca9-d27ca604564c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:02:17.672897 kubelet[2774]: I0514 18:02:17.672862 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6098839b-ee49-455b-aca9-d27ca604564c" (UID: "6098839b-ee49-455b-aca9-d27ca604564c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:02:17.676803 kubelet[2774]: I0514 18:02:17.676775 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59d2438f-4061-4279-86fa-1c5ee6ae9da8-kube-api-access-pxm4t" (OuterVolumeSpecName: "kube-api-access-pxm4t") pod "59d2438f-4061-4279-86fa-1c5ee6ae9da8" (UID: "59d2438f-4061-4279-86fa-1c5ee6ae9da8"). InnerVolumeSpecName "kube-api-access-pxm4t". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 18:02:17.676915 kubelet[2774]: I0514 18:02:17.676856 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6098839b-ee49-455b-aca9-d27ca604564c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6098839b-ee49-455b-aca9-d27ca604564c" (UID: "6098839b-ee49-455b-aca9-d27ca604564c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 18:02:17.677002 kubelet[2774]: I0514 18:02:17.676988 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6098839b-ee49-455b-aca9-d27ca604564c" (UID: "6098839b-ee49-455b-aca9-d27ca604564c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:02:17.677054 kubelet[2774]: I0514 18:02:17.676993 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6098839b-ee49-455b-aca9-d27ca604564c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6098839b-ee49-455b-aca9-d27ca604564c" (UID: "6098839b-ee49-455b-aca9-d27ca604564c"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 14 18:02:17.677113 kubelet[2774]: I0514 18:02:17.677020 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6098839b-ee49-455b-aca9-d27ca604564c" (UID: "6098839b-ee49-455b-aca9-d27ca604564c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:02:17.677410 kubelet[2774]: I0514 18:02:17.677032 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6098839b-ee49-455b-aca9-d27ca604564c" (UID: "6098839b-ee49-455b-aca9-d27ca604564c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:02:17.677410 kubelet[2774]: E0514 18:02:17.677265 2774 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 14 18:02:17.677410 kubelet[2774]: I0514 18:02:17.677332 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-hostproc" (OuterVolumeSpecName: "hostproc") pod "6098839b-ee49-455b-aca9-d27ca604564c" (UID: "6098839b-ee49-455b-aca9-d27ca604564c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:02:17.677410 kubelet[2774]: I0514 18:02:17.677354 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-cni-path" (OuterVolumeSpecName: "cni-path") pod "6098839b-ee49-455b-aca9-d27ca604564c" (UID: "6098839b-ee49-455b-aca9-d27ca604564c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:02:17.677410 kubelet[2774]: I0514 18:02:17.677384 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6098839b-ee49-455b-aca9-d27ca604564c" (UID: "6098839b-ee49-455b-aca9-d27ca604564c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:02:17.677832 kubelet[2774]: I0514 18:02:17.677790 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59d2438f-4061-4279-86fa-1c5ee6ae9da8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "59d2438f-4061-4279-86fa-1c5ee6ae9da8" (UID: "59d2438f-4061-4279-86fa-1c5ee6ae9da8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 18:02:17.678929 kubelet[2774]: I0514 18:02:17.678896 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6098839b-ee49-455b-aca9-d27ca604564c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6098839b-ee49-455b-aca9-d27ca604564c" (UID: "6098839b-ee49-455b-aca9-d27ca604564c"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 18:02:17.680028 kubelet[2774]: I0514 18:02:17.679327 2774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6098839b-ee49-455b-aca9-d27ca604564c-kube-api-access-pjdzj" (OuterVolumeSpecName: "kube-api-access-pjdzj") pod "6098839b-ee49-455b-aca9-d27ca604564c" (UID: "6098839b-ee49-455b-aca9-d27ca604564c"). InnerVolumeSpecName "kube-api-access-pjdzj". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 18:02:17.768750 kubelet[2774]: I0514 18:02:17.768708 2774 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-pxm4t\" (UniqueName: \"kubernetes.io/projected/59d2438f-4061-4279-86fa-1c5ee6ae9da8-kube-api-access-pxm4t\") on node \"localhost\" DevicePath \"\"" May 14 18:02:17.768750 kubelet[2774]: I0514 18:02:17.768742 2774 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 14 18:02:17.768750 kubelet[2774]: I0514 18:02:17.768754 2774 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-cilium-run\") on node \"localhost\" DevicePath \"\"" May 14 18:02:17.768750 kubelet[2774]: I0514 18:02:17.768762 2774 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 14 18:02:17.768939 kubelet[2774]: I0514 18:02:17.768771 2774 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-lib-modules\") on node \"localhost\" DevicePath \"\"" May 14 18:02:17.768939 kubelet[2774]: I0514 18:02:17.768778 2774 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-pjdzj\" (UniqueName: \"kubernetes.io/projected/6098839b-ee49-455b-aca9-d27ca604564c-kube-api-access-pjdzj\") on node \"localhost\" DevicePath \"\"" May 14 18:02:17.768939 kubelet[2774]: I0514 18:02:17.768786 2774 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6098839b-ee49-455b-aca9-d27ca604564c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 14 18:02:17.768939 kubelet[2774]: I0514 18:02:17.768795 2774 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6098839b-ee49-455b-aca9-d27ca604564c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 18:02:17.768939 kubelet[2774]: I0514 18:02:17.768803 2774 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 14 18:02:17.768939 kubelet[2774]: I0514 18:02:17.768811 2774 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 14 18:02:17.768939 kubelet[2774]: I0514 18:02:17.768818 2774 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-cni-path\") on node \"localhost\" DevicePath \"\"" May 14 18:02:17.768939 kubelet[2774]: I0514 
18:02:17.768827 2774 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-hostproc\") on node \"localhost\" DevicePath \"\"" May 14 18:02:17.769109 kubelet[2774]: I0514 18:02:17.768835 2774 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59d2438f-4061-4279-86fa-1c5ee6ae9da8-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 18:02:17.769109 kubelet[2774]: I0514 18:02:17.768842 2774 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 14 18:02:17.769109 kubelet[2774]: I0514 18:02:17.768849 2774 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6098839b-ee49-455b-aca9-d27ca604564c-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 14 18:02:17.769109 kubelet[2774]: I0514 18:02:17.768858 2774 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6098839b-ee49-455b-aca9-d27ca604564c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 14 18:02:17.846488 systemd[1]: Removed slice kubepods-besteffort-pod59d2438f_4061_4279_86fa_1c5ee6ae9da8.slice - libcontainer container kubepods-besteffort-pod59d2438f_4061_4279_86fa_1c5ee6ae9da8.slice. May 14 18:02:17.851469 kubelet[2774]: I0514 18:02:17.851444 2774 scope.go:117] "RemoveContainer" containerID="083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b" May 14 18:02:17.853864 systemd[1]: Removed slice kubepods-burstable-pod6098839b_ee49_455b_aca9_d27ca604564c.slice - libcontainer container kubepods-burstable-pod6098839b_ee49_455b_aca9_d27ca604564c.slice. May 14 18:02:17.853963 systemd[1]: kubepods-burstable-pod6098839b_ee49_455b_aca9_d27ca604564c.slice: Consumed 7.149s CPU time, 123.3M memory peak, 200K read from disk, 12.9M written to disk. 
May 14 18:02:17.854563 containerd[1542]: time="2025-05-14T18:02:17.854532654Z" level=info msg="RemoveContainer for \"083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b\"" May 14 18:02:17.868178 containerd[1542]: time="2025-05-14T18:02:17.868138336Z" level=info msg="RemoveContainer for \"083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b\" returns successfully" May 14 18:02:17.868434 kubelet[2774]: I0514 18:02:17.868403 2774 scope.go:117] "RemoveContainer" containerID="083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b" May 14 18:02:17.869002 containerd[1542]: time="2025-05-14T18:02:17.868952997Z" level=error msg="ContainerStatus for \"083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b\": not found" May 14 18:02:17.875694 kubelet[2774]: E0514 18:02:17.875658 2774 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b\": not found" containerID="083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b" May 14 18:02:17.875780 kubelet[2774]: I0514 18:02:17.875699 2774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b"} err="failed to get container status \"083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b\": rpc error: code = NotFound desc = an error occurred when try to find container \"083356a449b151625550d6eb450795d9fa00b608e539cdadd7b451fc63e3030b\": not found" May 14 18:02:17.875829 kubelet[2774]: I0514 18:02:17.875781 2774 scope.go:117] "RemoveContainer" containerID="dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09" May 14 18:02:17.878010 containerd[1542]: time="2025-05-14T18:02:17.877979186Z" level=info msg="RemoveContainer for \"dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09\"" May 14 18:02:17.881689 containerd[1542]: time="2025-05-14T18:02:17.881654580Z" level=info msg="RemoveContainer for \"dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09\" returns successfully" May 14 18:02:17.881882 kubelet[2774]: I0514 18:02:17.881858 2774 scope.go:117] "RemoveContainer" containerID="fd3ac8986a7646d269f58528b75f106be9b7e391ab96654a61ec47fc02da299f" May 14 18:02:17.883162 containerd[1542]: time="2025-05-14T18:02:17.883133746Z" level=info msg="RemoveContainer for \"fd3ac8986a7646d269f58528b75f106be9b7e391ab96654a61ec47fc02da299f\"" May 14 18:02:17.886482 containerd[1542]: time="2025-05-14T18:02:17.886444948Z" level=info msg="RemoveContainer for \"fd3ac8986a7646d269f58528b75f106be9b7e391ab96654a61ec47fc02da299f\" returns successfully" May 14 18:02:17.886672 kubelet[2774]: I0514 18:02:17.886609 2774 scope.go:117] "RemoveContainer" containerID="992daaee92b8b557f31350303ce06761482418eb4c847109a435300d368b51b8" May 14 18:02:17.888841 containerd[1542]: time="2025-05-14T18:02:17.888770134Z" level=info msg="RemoveContainer for \"992daaee92b8b557f31350303ce06761482418eb4c847109a435300d368b51b8\"" May 14 18:02:17.892194 containerd[1542]: time="2025-05-14T18:02:17.892110816Z" level=info msg="RemoveContainer for \"992daaee92b8b557f31350303ce06761482418eb4c847109a435300d368b51b8\" returns successfully" May 14 18:02:17.892388 kubelet[2774]: I0514 
18:02:17.892362 2774 scope.go:117] "RemoveContainer" containerID="179f8f62a0df263b1249c5f9cdab4ace35108c7c85e3573ed374603109eb8593" May 14 18:02:17.894071 containerd[1542]: time="2025-05-14T18:02:17.894042371Z" level=info msg="RemoveContainer for \"179f8f62a0df263b1249c5f9cdab4ace35108c7c85e3573ed374603109eb8593\"" May 14 18:02:17.896559 containerd[1542]: time="2025-05-14T18:02:17.896489554Z" level=info msg="RemoveContainer for \"179f8f62a0df263b1249c5f9cdab4ace35108c7c85e3573ed374603109eb8593\" returns successfully" May 14 18:02:17.896854 kubelet[2774]: I0514 18:02:17.896685 2774 scope.go:117] "RemoveContainer" containerID="39bccc10ddb446705b52449e356eb27597f073ec0a9941e70ee8442655893359" May 14 18:02:17.898943 containerd[1542]: time="2025-05-14T18:02:17.898914977Z" level=info msg="RemoveContainer for \"39bccc10ddb446705b52449e356eb27597f073ec0a9941e70ee8442655893359\"" May 14 18:02:17.901998 containerd[1542]: time="2025-05-14T18:02:17.901963986Z" level=info msg="RemoveContainer for \"39bccc10ddb446705b52449e356eb27597f073ec0a9941e70ee8442655893359\" returns successfully" May 14 18:02:17.902132 kubelet[2774]: I0514 18:02:17.902112 2774 scope.go:117] "RemoveContainer" containerID="dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09" May 14 18:02:17.902366 containerd[1542]: time="2025-05-14T18:02:17.902324457Z" level=error msg="ContainerStatus for \"dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09\": not found" May 14 18:02:17.902467 kubelet[2774]: E0514 18:02:17.902450 2774 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09\": not found" containerID="dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09" May 14 18:02:17.902503 kubelet[2774]: I0514 18:02:17.902473 2774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09"} err="failed to get container status \"dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09\": rpc error: code = NotFound desc = an error occurred when try to find container \"dcb5117387e6637809f400951da162a46a1e206c8b3685356ad14c3c9b3c4c09\": not found" May 14 18:02:17.902503 kubelet[2774]: I0514 18:02:17.902493 2774 scope.go:117] "RemoveContainer" containerID="fd3ac8986a7646d269f58528b75f106be9b7e391ab96654a61ec47fc02da299f" May 14 18:02:17.902693 containerd[1542]: time="2025-05-14T18:02:17.902660369Z" level=error msg="ContainerStatus for \"fd3ac8986a7646d269f58528b75f106be9b7e391ab96654a61ec47fc02da299f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fd3ac8986a7646d269f58528b75f106be9b7e391ab96654a61ec47fc02da299f\": not found" May 14 18:02:17.902842 kubelet[2774]: E0514 18:02:17.902786 2774 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fd3ac8986a7646d269f58528b75f106be9b7e391ab96654a61ec47fc02da299f\": not found" containerID="fd3ac8986a7646d269f58528b75f106be9b7e391ab96654a61ec47fc02da299f" May 14 18:02:17.902920 kubelet[2774]: I0514 18:02:17.902814 2774 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"fd3ac8986a7646d269f58528b75f106be9b7e391ab96654a61ec47fc02da299f"} err="failed to get container status \"fd3ac8986a7646d269f58528b75f106be9b7e391ab96654a61ec47fc02da299f\": rpc error: code = NotFound desc = an error occurred when try to find container \"fd3ac8986a7646d269f58528b75f106be9b7e391ab96654a61ec47fc02da299f\": not found" May 14 18:02:17.902984 kubelet[2774]: I0514 18:02:17.902973 2774 scope.go:117] "RemoveContainer" containerID="992daaee92b8b557f31350303ce06761482418eb4c847109a435300d368b51b8" May 14 18:02:17.903397 containerd[1542]: time="2025-05-14T18:02:17.903286395Z" level=error msg="ContainerStatus for \"992daaee92b8b557f31350303ce06761482418eb4c847109a435300d368b51b8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"992daaee92b8b557f31350303ce06761482418eb4c847109a435300d368b51b8\": not found" May 14 18:02:17.903515 kubelet[2774]: E0514 18:02:17.903496 2774 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"992daaee92b8b557f31350303ce06761482418eb4c847109a435300d368b51b8\": not found" containerID="992daaee92b8b557f31350303ce06761482418eb4c847109a435300d368b51b8" May 14 18:02:17.903564 kubelet[2774]: I0514 18:02:17.903524 2774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"992daaee92b8b557f31350303ce06761482418eb4c847109a435300d368b51b8"} err="failed to get container status \"992daaee92b8b557f31350303ce06761482418eb4c847109a435300d368b51b8\": rpc error: code = NotFound desc = an error occurred when try to find container \"992daaee92b8b557f31350303ce06761482418eb4c847109a435300d368b51b8\": not found" May 14 18:02:17.903564 kubelet[2774]: I0514 18:02:17.903541 2774 scope.go:117] "RemoveContainer" containerID="179f8f62a0df263b1249c5f9cdab4ace35108c7c85e3573ed374603109eb8593" May 14 18:02:17.907701 containerd[1542]: time="2025-05-14T18:02:17.907667132Z" level=error msg="ContainerStatus for \"179f8f62a0df263b1249c5f9cdab4ace35108c7c85e3573ed374603109eb8593\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"179f8f62a0df263b1249c5f9cdab4ace35108c7c85e3573ed374603109eb8593\": not found" May 14 18:02:17.907877 kubelet[2774]: E0514 18:02:17.907856 2774 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"179f8f62a0df263b1249c5f9cdab4ace35108c7c85e3573ed374603109eb8593\": not found" containerID="179f8f62a0df263b1249c5f9cdab4ace35108c7c85e3573ed374603109eb8593" May 14 18:02:17.907972 kubelet[2774]: I0514 18:02:17.907954 2774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"179f8f62a0df263b1249c5f9cdab4ace35108c7c85e3573ed374603109eb8593"} err="failed to get container status \"179f8f62a0df263b1249c5f9cdab4ace35108c7c85e3573ed374603109eb8593\": rpc error: code = NotFound desc = an error occurred when try to find container \"179f8f62a0df263b1249c5f9cdab4ace35108c7c85e3573ed374603109eb8593\": not found" May 14 18:02:17.908022 kubelet[2774]: I0514 18:02:17.908012 2774 scope.go:117] "RemoveContainer" containerID="39bccc10ddb446705b52449e356eb27597f073ec0a9941e70ee8442655893359" May 14 18:02:17.908273 containerd[1542]: time="2025-05-14T18:02:17.908242759Z" level=error msg="ContainerStatus for 
\"39bccc10ddb446705b52449e356eb27597f073ec0a9941e70ee8442655893359\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"39bccc10ddb446705b52449e356eb27597f073ec0a9941e70ee8442655893359\": not found" May 14 18:02:17.908393 kubelet[2774]: E0514 18:02:17.908371 2774 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"39bccc10ddb446705b52449e356eb27597f073ec0a9941e70ee8442655893359\": not found" containerID="39bccc10ddb446705b52449e356eb27597f073ec0a9941e70ee8442655893359" May 14 18:02:17.908437 kubelet[2774]: I0514 18:02:17.908399 2774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"39bccc10ddb446705b52449e356eb27597f073ec0a9941e70ee8442655893359"} err="failed to get container status \"39bccc10ddb446705b52449e356eb27597f073ec0a9941e70ee8442655893359\": rpc error: code = NotFound desc = an error occurred when try to find container \"39bccc10ddb446705b52449e356eb27597f073ec0a9941e70ee8442655893359\": not found" May 14 18:02:18.331599 systemd[1]: var-lib-kubelet-pods-59d2438f\x2d4061\x2d4279\x2d86fa\x2d1c5ee6ae9da8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpxm4t.mount: Deactivated successfully. May 14 18:02:18.331709 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6043978b0b180eed9f457878d84f0768e9777b2c5cbd215f0cb7b3b7db18742d-shm.mount: Deactivated successfully. May 14 18:02:18.331760 systemd[1]: var-lib-kubelet-pods-6098839b\x2dee49\x2d455b\x2daca9\x2dd27ca604564c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpjdzj.mount: Deactivated successfully. May 14 18:02:18.331810 systemd[1]: var-lib-kubelet-pods-6098839b\x2dee49\x2d455b\x2daca9\x2dd27ca604564c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 14 18:02:18.331857 systemd[1]: var-lib-kubelet-pods-6098839b\x2dee49\x2d455b\x2daca9\x2dd27ca604564c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 14 18:02:18.622303 kubelet[2774]: I0514 18:02:18.622247 2774 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59d2438f-4061-4279-86fa-1c5ee6ae9da8" path="/var/lib/kubelet/pods/59d2438f-4061-4279-86fa-1c5ee6ae9da8/volumes" May 14 18:02:18.622748 kubelet[2774]: I0514 18:02:18.622711 2774 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6098839b-ee49-455b-aca9-d27ca604564c" path="/var/lib/kubelet/pods/6098839b-ee49-455b-aca9-d27ca604564c/volumes" May 14 18:02:19.249432 sshd[4357]: Connection closed by 10.0.0.1 port 56264 May 14 18:02:19.249996 sshd-session[4355]: pam_unix(sshd:session): session closed for user core May 14 18:02:19.258094 systemd[1]: sshd@22-10.0.0.60:22-10.0.0.1:56264.service: Deactivated successfully. May 14 18:02:19.259851 systemd[1]: session-23.scope: Deactivated successfully. May 14 18:02:19.260962 systemd-logind[1517]: Session 23 logged out. Waiting for processes to exit. May 14 18:02:19.263149 systemd-logind[1517]: Removed session 23. May 14 18:02:19.264867 systemd[1]: Started sshd@23-10.0.0.60:22-10.0.0.1:56272.service - OpenSSH per-connection server daemon (10.0.0.1:56272). 
May 14 18:02:19.315510 sshd[4511]: Accepted publickey for core from 10.0.0.1 port 56272 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:02:19.316667 sshd-session[4511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:02:19.321276 systemd-logind[1517]: New session 24 of user core. May 14 18:02:19.327468 systemd[1]: Started session-24.scope - Session 24 of User core. May 14 18:02:20.489417 sshd[4513]: Connection closed by 10.0.0.1 port 56272 May 14 18:02:20.490095 sshd-session[4511]: pam_unix(sshd:session): session closed for user core May 14 18:02:20.499423 kubelet[2774]: I0514 18:02:20.499284 2774 topology_manager.go:215] "Topology Admit Handler" podUID="2f25370c-f949-4358-a325-a393469e0569" podNamespace="kube-system" podName="cilium-h942f" May 14 18:02:20.499423 kubelet[2774]: E0514 18:02:20.499358 2774 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6098839b-ee49-455b-aca9-d27ca604564c" containerName="mount-bpf-fs" May 14 18:02:20.499423 kubelet[2774]: E0514 18:02:20.499367 2774 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6098839b-ee49-455b-aca9-d27ca604564c" containerName="clean-cilium-state" May 14 18:02:20.499423 kubelet[2774]: E0514 18:02:20.499374 2774 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6098839b-ee49-455b-aca9-d27ca604564c" containerName="mount-cgroup" May 14 18:02:20.499423 kubelet[2774]: E0514 18:02:20.499379 2774 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6098839b-ee49-455b-aca9-d27ca604564c" containerName="apply-sysctl-overwrites" May 14 18:02:20.499423 kubelet[2774]: E0514 18:02:20.499385 2774 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="59d2438f-4061-4279-86fa-1c5ee6ae9da8" containerName="cilium-operator" May 14 18:02:20.499423 kubelet[2774]: E0514 18:02:20.499391 2774 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6098839b-ee49-455b-aca9-d27ca604564c" containerName="cilium-agent" May 14 18:02:20.499423 kubelet[2774]: I0514 18:02:20.499413 2774 memory_manager.go:354] "RemoveStaleState removing state" podUID="59d2438f-4061-4279-86fa-1c5ee6ae9da8" containerName="cilium-operator" May 14 18:02:20.499423 kubelet[2774]: I0514 18:02:20.499418 2774 memory_manager.go:354] "RemoveStaleState removing state" podUID="6098839b-ee49-455b-aca9-d27ca604564c" containerName="cilium-agent" May 14 18:02:20.501709 systemd[1]: sshd@23-10.0.0.60:22-10.0.0.1:56272.service: Deactivated successfully. May 14 18:02:20.504320 systemd[1]: session-24.scope: Deactivated successfully. May 14 18:02:20.504560 systemd[1]: session-24.scope: Consumed 1.079s CPU time, 24.2M memory peak. May 14 18:02:20.506706 systemd-logind[1517]: Session 24 logged out. Waiting for processes to exit. May 14 18:02:20.512470 systemd[1]: Started sshd@24-10.0.0.60:22-10.0.0.1:56274.service - OpenSSH per-connection server daemon (10.0.0.1:56274). May 14 18:02:20.514639 systemd-logind[1517]: Removed session 24. May 14 18:02:20.533534 systemd[1]: Created slice kubepods-burstable-pod2f25370c_f949_4358_a325_a393469e0569.slice - libcontainer container kubepods-burstable-pod2f25370c_f949_4358_a325_a393469e0569.slice. 
May 14 18:02:20.566746 sshd[4525]: Accepted publickey for core from 10.0.0.1 port 56274 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:02:20.567921 sshd-session[4525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:02:20.571643 systemd-logind[1517]: New session 25 of user core. May 14 18:02:20.582053 kubelet[2774]: I0514 18:02:20.582025 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2f25370c-f949-4358-a325-a393469e0569-etc-cni-netd\") pod \"cilium-h942f\" (UID: \"2f25370c-f949-4358-a325-a393469e0569\") " pod="kube-system/cilium-h942f" May 14 18:02:20.582126 kubelet[2774]: I0514 18:02:20.582064 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2f25370c-f949-4358-a325-a393469e0569-clustermesh-secrets\") pod \"cilium-h942f\" (UID: \"2f25370c-f949-4358-a325-a393469e0569\") " pod="kube-system/cilium-h942f" May 14 18:02:20.582126 kubelet[2774]: I0514 18:02:20.582081 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2f25370c-f949-4358-a325-a393469e0569-hubble-tls\") pod \"cilium-h942f\" (UID: \"2f25370c-f949-4358-a325-a393469e0569\") " pod="kube-system/cilium-h942f" May 14 18:02:20.582126 kubelet[2774]: I0514 18:02:20.582095 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f25370c-f949-4358-a325-a393469e0569-lib-modules\") pod \"cilium-h942f\" (UID: \"2f25370c-f949-4358-a325-a393469e0569\") " pod="kube-system/cilium-h942f" May 14 18:02:20.582126 kubelet[2774]: I0514 18:02:20.582110 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgg99\" (UniqueName: \"kubernetes.io/projected/2f25370c-f949-4358-a325-a393469e0569-kube-api-access-zgg99\") pod \"cilium-h942f\" (UID: \"2f25370c-f949-4358-a325-a393469e0569\") " pod="kube-system/cilium-h942f" May 14 18:02:20.582126 kubelet[2774]: I0514 18:02:20.582125 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2f25370c-f949-4358-a325-a393469e0569-cilium-run\") pod \"cilium-h942f\" (UID: \"2f25370c-f949-4358-a325-a393469e0569\") " pod="kube-system/cilium-h942f" May 14 18:02:20.582270 kubelet[2774]: I0514 18:02:20.582139 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2f25370c-f949-4358-a325-a393469e0569-cilium-cgroup\") pod \"cilium-h942f\" (UID: \"2f25370c-f949-4358-a325-a393469e0569\") " pod="kube-system/cilium-h942f" May 14 18:02:20.582270 kubelet[2774]: I0514 18:02:20.582153 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2f25370c-f949-4358-a325-a393469e0569-host-proc-sys-kernel\") pod \"cilium-h942f\" (UID: \"2f25370c-f949-4358-a325-a393469e0569\") " pod="kube-system/cilium-h942f" May 14 18:02:20.582270 kubelet[2774]: I0514 18:02:20.582166 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/2f25370c-f949-4358-a325-a393469e0569-bpf-maps\") pod \"cilium-h942f\" (UID: \"2f25370c-f949-4358-a325-a393469e0569\") " pod="kube-system/cilium-h942f" May 14 18:02:20.582270 kubelet[2774]: I0514 18:02:20.582179 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2f25370c-f949-4358-a325-a393469e0569-hostproc\") pod \"cilium-h942f\" (UID: \"2f25370c-f949-4358-a325-a393469e0569\") " pod="kube-system/cilium-h942f" May 14 18:02:20.582270 kubelet[2774]: I0514 18:02:20.582193 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f25370c-f949-4358-a325-a393469e0569-xtables-lock\") pod \"cilium-h942f\" (UID: \"2f25370c-f949-4358-a325-a393469e0569\") " pod="kube-system/cilium-h942f" May 14 18:02:20.582270 kubelet[2774]: I0514 18:02:20.582208 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2f25370c-f949-4358-a325-a393469e0569-cni-path\") pod \"cilium-h942f\" (UID: \"2f25370c-f949-4358-a325-a393469e0569\") " pod="kube-system/cilium-h942f" May 14 18:02:20.582460 kubelet[2774]: I0514 18:02:20.582222 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2f25370c-f949-4358-a325-a393469e0569-cilium-config-path\") pod \"cilium-h942f\" (UID: \"2f25370c-f949-4358-a325-a393469e0569\") " pod="kube-system/cilium-h942f" May 14 18:02:20.582460 kubelet[2774]: I0514 18:02:20.582237 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2f25370c-f949-4358-a325-a393469e0569-host-proc-sys-net\") pod \"cilium-h942f\" (UID: \"2f25370c-f949-4358-a325-a393469e0569\") " pod="kube-system/cilium-h942f" May 14 18:02:20.582460 kubelet[2774]: I0514 18:02:20.582253 2774 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2f25370c-f949-4358-a325-a393469e0569-cilium-ipsec-secrets\") pod \"cilium-h942f\" (UID: \"2f25370c-f949-4358-a325-a393469e0569\") " pod="kube-system/cilium-h942f" May 14 18:02:20.582436 systemd[1]: Started session-25.scope - Session 25 of User core. May 14 18:02:20.641647 sshd[4527]: Connection closed by 10.0.0.1 port 56274 May 14 18:02:20.642228 sshd-session[4525]: pam_unix(sshd:session): session closed for user core May 14 18:02:20.649446 systemd[1]: sshd@24-10.0.0.60:22-10.0.0.1:56274.service: Deactivated successfully. May 14 18:02:20.651032 systemd[1]: session-25.scope: Deactivated successfully. May 14 18:02:20.651878 systemd-logind[1517]: Session 25 logged out. Waiting for processes to exit. May 14 18:02:20.654715 systemd[1]: Started sshd@25-10.0.0.60:22-10.0.0.1:56286.service - OpenSSH per-connection server daemon (10.0.0.1:56286). May 14 18:02:20.656862 systemd-logind[1517]: Removed session 25. May 14 18:02:20.711715 sshd[4534]: Accepted publickey for core from 10.0.0.1 port 56286 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:02:20.713044 sshd-session[4534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:02:20.717034 systemd-logind[1517]: New session 26 of user core. 
May 14 18:02:20.732431 systemd[1]: Started session-26.scope - Session 26 of User core. May 14 18:02:20.839367 containerd[1542]: time="2025-05-14T18:02:20.839009230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h942f,Uid:2f25370c-f949-4358-a325-a393469e0569,Namespace:kube-system,Attempt:0,}" May 14 18:02:20.858818 containerd[1542]: time="2025-05-14T18:02:20.858768913Z" level=info msg="connecting to shim 0d0c67f32687c1e92b107e7da7484a9b249782780ced379c64e78b1c0c43991d" address="unix:///run/containerd/s/d4582fe7dc401f12a6af71f986fbf56ba0604cfc0d5add117815d1503f208903" namespace=k8s.io protocol=ttrpc version=3 May 14 18:02:20.882494 systemd[1]: Started cri-containerd-0d0c67f32687c1e92b107e7da7484a9b249782780ced379c64e78b1c0c43991d.scope - libcontainer container 0d0c67f32687c1e92b107e7da7484a9b249782780ced379c64e78b1c0c43991d. May 14 18:02:20.911183 containerd[1542]: time="2025-05-14T18:02:20.911141845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h942f,Uid:2f25370c-f949-4358-a325-a393469e0569,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d0c67f32687c1e92b107e7da7484a9b249782780ced379c64e78b1c0c43991d\"" May 14 18:02:20.913961 containerd[1542]: time="2025-05-14T18:02:20.913932234Z" level=info msg="CreateContainer within sandbox \"0d0c67f32687c1e92b107e7da7484a9b249782780ced379c64e78b1c0c43991d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 18:02:20.919917 containerd[1542]: time="2025-05-14T18:02:20.919880087Z" level=info msg="Container 3e010051bee3f7106a22be31a476e39f940c2b9fc303de7a7a5fb8bdb7b37fb5: CDI devices from CRI Config.CDIDevices: []" May 14 18:02:20.925074 containerd[1542]: time="2025-05-14T18:02:20.925029674Z" level=info msg="CreateContainer within sandbox \"0d0c67f32687c1e92b107e7da7484a9b249782780ced379c64e78b1c0c43991d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3e010051bee3f7106a22be31a476e39f940c2b9fc303de7a7a5fb8bdb7b37fb5\"" May 14 18:02:20.925514 containerd[1542]: time="2025-05-14T18:02:20.925493745Z" level=info msg="StartContainer for \"3e010051bee3f7106a22be31a476e39f940c2b9fc303de7a7a5fb8bdb7b37fb5\"" May 14 18:02:20.926316 containerd[1542]: time="2025-05-14T18:02:20.926209452Z" level=info msg="connecting to shim 3e010051bee3f7106a22be31a476e39f940c2b9fc303de7a7a5fb8bdb7b37fb5" address="unix:///run/containerd/s/d4582fe7dc401f12a6af71f986fbf56ba0604cfc0d5add117815d1503f208903" protocol=ttrpc version=3 May 14 18:02:20.943483 systemd[1]: Started cri-containerd-3e010051bee3f7106a22be31a476e39f940c2b9fc303de7a7a5fb8bdb7b37fb5.scope - libcontainer container 3e010051bee3f7106a22be31a476e39f940c2b9fc303de7a7a5fb8bdb7b37fb5. May 14 18:02:20.968958 containerd[1542]: time="2025-05-14T18:02:20.968857640Z" level=info msg="StartContainer for \"3e010051bee3f7106a22be31a476e39f940c2b9fc303de7a7a5fb8bdb7b37fb5\" returns successfully" May 14 18:02:21.000573 systemd[1]: cri-containerd-3e010051bee3f7106a22be31a476e39f940c2b9fc303de7a7a5fb8bdb7b37fb5.scope: Deactivated successfully. 
May 14 18:02:21.003868 containerd[1542]: time="2025-05-14T18:02:21.003797252Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e010051bee3f7106a22be31a476e39f940c2b9fc303de7a7a5fb8bdb7b37fb5\" id:\"3e010051bee3f7106a22be31a476e39f940c2b9fc303de7a7a5fb8bdb7b37fb5\" pid:4606 exited_at:{seconds:1747245741 nanos:3210062}" May 14 18:02:21.003945 containerd[1542]: time="2025-05-14T18:02:21.003842891Z" level=info msg="received exit event container_id:\"3e010051bee3f7106a22be31a476e39f940c2b9fc303de7a7a5fb8bdb7b37fb5\" id:\"3e010051bee3f7106a22be31a476e39f940c2b9fc303de7a7a5fb8bdb7b37fb5\" pid:4606 exited_at:{seconds:1747245741 nanos:3210062}" May 14 18:02:21.863096 containerd[1542]: time="2025-05-14T18:02:21.863032155Z" level=info msg="CreateContainer within sandbox \"0d0c67f32687c1e92b107e7da7484a9b249782780ced379c64e78b1c0c43991d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 18:02:21.868369 containerd[1542]: time="2025-05-14T18:02:21.868269309Z" level=info msg="Container 49ab3514398a21e20a09155670370a25574b32e17e4eb3a3e7a882e4ad9e3a8e: CDI devices from CRI Config.CDIDevices: []" May 14 18:02:21.872242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2648894646.mount: Deactivated successfully. May 14 18:02:21.874816 containerd[1542]: time="2025-05-14T18:02:21.874771442Z" level=info msg="CreateContainer within sandbox \"0d0c67f32687c1e92b107e7da7484a9b249782780ced379c64e78b1c0c43991d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"49ab3514398a21e20a09155670370a25574b32e17e4eb3a3e7a882e4ad9e3a8e\"" May 14 18:02:21.875848 containerd[1542]: time="2025-05-14T18:02:21.875306113Z" level=info msg="StartContainer for \"49ab3514398a21e20a09155670370a25574b32e17e4eb3a3e7a882e4ad9e3a8e\"" May 14 18:02:21.876641 containerd[1542]: time="2025-05-14T18:02:21.876182179Z" level=info msg="connecting to shim 49ab3514398a21e20a09155670370a25574b32e17e4eb3a3e7a882e4ad9e3a8e" address="unix:///run/containerd/s/d4582fe7dc401f12a6af71f986fbf56ba0604cfc0d5add117815d1503f208903" protocol=ttrpc version=3 May 14 18:02:21.901469 systemd[1]: Started cri-containerd-49ab3514398a21e20a09155670370a25574b32e17e4eb3a3e7a882e4ad9e3a8e.scope - libcontainer container 49ab3514398a21e20a09155670370a25574b32e17e4eb3a3e7a882e4ad9e3a8e. May 14 18:02:21.925978 containerd[1542]: time="2025-05-14T18:02:21.925905961Z" level=info msg="StartContainer for \"49ab3514398a21e20a09155670370a25574b32e17e4eb3a3e7a882e4ad9e3a8e\" returns successfully" May 14 18:02:21.936577 systemd[1]: cri-containerd-49ab3514398a21e20a09155670370a25574b32e17e4eb3a3e7a882e4ad9e3a8e.scope: Deactivated successfully. May 14 18:02:21.938603 containerd[1542]: time="2025-05-14T18:02:21.937436771Z" level=info msg="received exit event container_id:\"49ab3514398a21e20a09155670370a25574b32e17e4eb3a3e7a882e4ad9e3a8e\" id:\"49ab3514398a21e20a09155670370a25574b32e17e4eb3a3e7a882e4ad9e3a8e\" pid:4651 exited_at:{seconds:1747245741 nanos:936907460}" May 14 18:02:21.938603 containerd[1542]: time="2025-05-14T18:02:21.937514370Z" level=info msg="TaskExit event in podsandbox handler container_id:\"49ab3514398a21e20a09155670370a25574b32e17e4eb3a3e7a882e4ad9e3a8e\" id:\"49ab3514398a21e20a09155670370a25574b32e17e4eb3a3e7a882e4ad9e3a8e\" pid:4651 exited_at:{seconds:1747245741 nanos:936907460}" May 14 18:02:21.954148 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49ab3514398a21e20a09155670370a25574b32e17e4eb3a3e7a882e4ad9e3a8e-rootfs.mount: Deactivated successfully. 
May 14 18:02:22.678388 kubelet[2774]: E0514 18:02:22.678345 2774 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 14 18:02:22.873229 containerd[1542]: time="2025-05-14T18:02:22.873052807Z" level=info msg="CreateContainer within sandbox \"0d0c67f32687c1e92b107e7da7484a9b249782780ced379c64e78b1c0c43991d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 18:02:22.894349 containerd[1542]: time="2025-05-14T18:02:22.894284732Z" level=info msg="Container 058aadb18f41544f1667fd99ded9755e29de899d387b1ec9e1b3defc3548e9a1: CDI devices from CRI Config.CDIDevices: []" May 14 18:02:22.898691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2837653460.mount: Deactivated successfully. May 14 18:02:22.903240 containerd[1542]: time="2025-05-14T18:02:22.903200400Z" level=info msg="CreateContainer within sandbox \"0d0c67f32687c1e92b107e7da7484a9b249782780ced379c64e78b1c0c43991d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"058aadb18f41544f1667fd99ded9755e29de899d387b1ec9e1b3defc3548e9a1\"" May 14 18:02:22.903798 containerd[1542]: time="2025-05-14T18:02:22.903774671Z" level=info msg="StartContainer for \"058aadb18f41544f1667fd99ded9755e29de899d387b1ec9e1b3defc3548e9a1\"" May 14 18:02:22.905421 containerd[1542]: time="2025-05-14T18:02:22.905384767Z" level=info msg="connecting to shim 058aadb18f41544f1667fd99ded9755e29de899d387b1ec9e1b3defc3548e9a1" address="unix:///run/containerd/s/d4582fe7dc401f12a6af71f986fbf56ba0604cfc0d5add117815d1503f208903" protocol=ttrpc version=3 May 14 18:02:22.932480 systemd[1]: Started cri-containerd-058aadb18f41544f1667fd99ded9755e29de899d387b1ec9e1b3defc3548e9a1.scope - libcontainer container 058aadb18f41544f1667fd99ded9755e29de899d387b1ec9e1b3defc3548e9a1. May 14 18:02:22.979060 systemd[1]: cri-containerd-058aadb18f41544f1667fd99ded9755e29de899d387b1ec9e1b3defc3548e9a1.scope: Deactivated successfully. May 14 18:02:22.980492 containerd[1542]: time="2025-05-14T18:02:22.980460172Z" level=info msg="TaskExit event in podsandbox handler container_id:\"058aadb18f41544f1667fd99ded9755e29de899d387b1ec9e1b3defc3548e9a1\" id:\"058aadb18f41544f1667fd99ded9755e29de899d387b1ec9e1b3defc3548e9a1\" pid:4696 exited_at:{seconds:1747245742 nanos:980221415}" May 14 18:02:22.980568 containerd[1542]: time="2025-05-14T18:02:22.980534451Z" level=info msg="received exit event container_id:\"058aadb18f41544f1667fd99ded9755e29de899d387b1ec9e1b3defc3548e9a1\" id:\"058aadb18f41544f1667fd99ded9755e29de899d387b1ec9e1b3defc3548e9a1\" pid:4696 exited_at:{seconds:1747245742 nanos:980221415}" May 14 18:02:22.981829 containerd[1542]: time="2025-05-14T18:02:22.981805312Z" level=info msg="StartContainer for \"058aadb18f41544f1667fd99ded9755e29de899d387b1ec9e1b3defc3548e9a1\" returns successfully" May 14 18:02:22.999940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-058aadb18f41544f1667fd99ded9755e29de899d387b1ec9e1b3defc3548e9a1-rootfs.mount: Deactivated successfully. 
May 14 18:02:23.879331 containerd[1542]: time="2025-05-14T18:02:23.879165775Z" level=info msg="CreateContainer within sandbox \"0d0c67f32687c1e92b107e7da7484a9b249782780ced379c64e78b1c0c43991d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 18:02:23.893241 containerd[1542]: time="2025-05-14T18:02:23.893199108Z" level=info msg="Container 2b6b48c96a425eb446b66d7d81c7a9e3d058e4482156a432ac5b397fceb0ef03: CDI devices from CRI Config.CDIDevices: []" May 14 18:02:23.902447 containerd[1542]: time="2025-05-14T18:02:23.902394986Z" level=info msg="CreateContainer within sandbox \"0d0c67f32687c1e92b107e7da7484a9b249782780ced379c64e78b1c0c43991d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2b6b48c96a425eb446b66d7d81c7a9e3d058e4482156a432ac5b397fceb0ef03\"" May 14 18:02:23.903085 containerd[1542]: time="2025-05-14T18:02:23.903058217Z" level=info msg="StartContainer for \"2b6b48c96a425eb446b66d7d81c7a9e3d058e4482156a432ac5b397fceb0ef03\"" May 14 18:02:23.904074 containerd[1542]: time="2025-05-14T18:02:23.904044124Z" level=info msg="connecting to shim 2b6b48c96a425eb446b66d7d81c7a9e3d058e4482156a432ac5b397fceb0ef03" address="unix:///run/containerd/s/d4582fe7dc401f12a6af71f986fbf56ba0604cfc0d5add117815d1503f208903" protocol=ttrpc version=3 May 14 18:02:23.923505 systemd[1]: Started cri-containerd-2b6b48c96a425eb446b66d7d81c7a9e3d058e4482156a432ac5b397fceb0ef03.scope - libcontainer container 2b6b48c96a425eb446b66d7d81c7a9e3d058e4482156a432ac5b397fceb0ef03. May 14 18:02:23.946089 systemd[1]: cri-containerd-2b6b48c96a425eb446b66d7d81c7a9e3d058e4482156a432ac5b397fceb0ef03.scope: Deactivated successfully. May 14 18:02:23.948643 containerd[1542]: time="2025-05-14T18:02:23.948609170Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b6b48c96a425eb446b66d7d81c7a9e3d058e4482156a432ac5b397fceb0ef03\" id:\"2b6b48c96a425eb446b66d7d81c7a9e3d058e4482156a432ac5b397fceb0ef03\" pid:4735 exited_at:{seconds:1747245743 nanos:946651436}" May 14 18:02:23.949464 containerd[1542]: time="2025-05-14T18:02:23.949434719Z" level=info msg="received exit event container_id:\"2b6b48c96a425eb446b66d7d81c7a9e3d058e4482156a432ac5b397fceb0ef03\" id:\"2b6b48c96a425eb446b66d7d81c7a9e3d058e4482156a432ac5b397fceb0ef03\" pid:4735 exited_at:{seconds:1747245743 nanos:946651436}" May 14 18:02:23.951128 containerd[1542]: time="2025-05-14T18:02:23.951100217Z" level=info msg="StartContainer for \"2b6b48c96a425eb446b66d7d81c7a9e3d058e4482156a432ac5b397fceb0ef03\" returns successfully" May 14 18:02:23.966710 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b6b48c96a425eb446b66d7d81c7a9e3d058e4482156a432ac5b397fceb0ef03-rootfs.mount: Deactivated successfully. 
May 14 18:02:24.288144 kubelet[2774]: I0514 18:02:24.287961 2774 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-14T18:02:24Z","lastTransitionTime":"2025-05-14T18:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 14 18:02:24.885091 containerd[1542]: time="2025-05-14T18:02:24.884720350Z" level=info msg="CreateContainer within sandbox \"0d0c67f32687c1e92b107e7da7484a9b249782780ced379c64e78b1c0c43991d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 18:02:24.915551 containerd[1542]: time="2025-05-14T18:02:24.915503306Z" level=info msg="Container 2fbad2093a3aae56e0b10ae5d94a2f32f77d3615b68f50fe49cfad411b4e7385: CDI devices from CRI Config.CDIDevices: []" May 14 18:02:24.918989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2474901430.mount: Deactivated successfully. May 14 18:02:24.923323 containerd[1542]: time="2025-05-14T18:02:24.923262334Z" level=info msg="CreateContainer within sandbox \"0d0c67f32687c1e92b107e7da7484a9b249782780ced379c64e78b1c0c43991d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2fbad2093a3aae56e0b10ae5d94a2f32f77d3615b68f50fe49cfad411b4e7385\"" May 14 18:02:24.924003 containerd[1542]: time="2025-05-14T18:02:24.923981246Z" level=info msg="StartContainer for \"2fbad2093a3aae56e0b10ae5d94a2f32f77d3615b68f50fe49cfad411b4e7385\"" May 14 18:02:24.925082 containerd[1542]: time="2025-05-14T18:02:24.925024273Z" level=info msg="connecting to shim 2fbad2093a3aae56e0b10ae5d94a2f32f77d3615b68f50fe49cfad411b4e7385" address="unix:///run/containerd/s/d4582fe7dc401f12a6af71f986fbf56ba0604cfc0d5add117815d1503f208903" protocol=ttrpc version=3 May 14 18:02:24.946544 systemd[1]: Started cri-containerd-2fbad2093a3aae56e0b10ae5d94a2f32f77d3615b68f50fe49cfad411b4e7385.scope - libcontainer container 2fbad2093a3aae56e0b10ae5d94a2f32f77d3615b68f50fe49cfad411b4e7385. 
May 14 18:02:24.974182 containerd[1542]: time="2025-05-14T18:02:24.974145653Z" level=info msg="StartContainer for \"2fbad2093a3aae56e0b10ae5d94a2f32f77d3615b68f50fe49cfad411b4e7385\" returns successfully" May 14 18:02:25.026958 containerd[1542]: time="2025-05-14T18:02:25.026914426Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2fbad2093a3aae56e0b10ae5d94a2f32f77d3615b68f50fe49cfad411b4e7385\" id:\"6f0933b6317acb9b11409138e0608655b955e42021b9937ac42236900638d3be\" pid:4805 exited_at:{seconds:1747245745 nanos:25744199}" May 14 18:02:25.245344 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 14 18:02:27.110333 containerd[1542]: time="2025-05-14T18:02:27.110275735Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2fbad2093a3aae56e0b10ae5d94a2f32f77d3615b68f50fe49cfad411b4e7385\" id:\"bda6f83d774b835f9fbea02c8d362b63a2874be10d88fba95f74ba1efad881ba\" pid:4984 exit_status:1 exited_at:{seconds:1747245747 nanos:109949417}" May 14 18:02:28.122162 systemd-networkd[1447]: lxc_health: Link UP May 14 18:02:28.122405 systemd-networkd[1447]: lxc_health: Gained carrier May 14 18:02:28.857813 kubelet[2774]: I0514 18:02:28.857614 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-h942f" podStartSLOduration=8.857597101 podStartE2EDuration="8.857597101s" podCreationTimestamp="2025-05-14 18:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:02:25.905671877 +0000 UTC m=+83.379669996" watchObservedRunningTime="2025-05-14 18:02:28.857597101 +0000 UTC m=+86.331595260" May 14 18:02:29.242250 containerd[1542]: time="2025-05-14T18:02:29.241900994Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2fbad2093a3aae56e0b10ae5d94a2f32f77d3615b68f50fe49cfad411b4e7385\" id:\"064acef810b491643de6d781f9ae30cd23118a416aa1180c308b28422578e9c3\" pid:5346 exited_at:{seconds:1747245749 nanos:241492356}" May 14 18:02:30.066581 systemd-networkd[1447]: lxc_health: Gained IPv6LL May 14 18:02:31.362085 containerd[1542]: time="2025-05-14T18:02:31.362001616Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2fbad2093a3aae56e0b10ae5d94a2f32f77d3615b68f50fe49cfad411b4e7385\" id:\"5791bec90c286e1680d925fe7543a2b7c990d320dabffcb93624ee561c2fd118\" pid:5374 exited_at:{seconds:1747245751 nanos:360932018}" May 14 18:02:33.519534 containerd[1542]: time="2025-05-14T18:02:33.519489854Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2fbad2093a3aae56e0b10ae5d94a2f32f77d3615b68f50fe49cfad411b4e7385\" id:\"37c671f7694d2a765272f9a716f9d3bd3a2daeec0eed55bff5ca90daedb5bd70\" pid:5406 exited_at:{seconds:1747245753 nanos:519134294}" May 14 18:02:33.526894 sshd[4540]: Connection closed by 10.0.0.1 port 56286 May 14 18:02:33.527382 sshd-session[4534]: pam_unix(sshd:session): session closed for user core May 14 18:02:33.530703 systemd[1]: sshd@25-10.0.0.60:22-10.0.0.1:56286.service: Deactivated successfully. May 14 18:02:33.532707 systemd[1]: session-26.scope: Deactivated successfully. May 14 18:02:33.534907 systemd-logind[1517]: Session 26 logged out. Waiting for processes to exit. May 14 18:02:33.535925 systemd-logind[1517]: Removed session 26.