May 27 03:01:45.829079 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 27 03:01:45.829100 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 27 01:20:04 -00 2025 May 27 03:01:45.829109 kernel: KASLR enabled May 27 03:01:45.829115 kernel: efi: EFI v2.7 by EDK II May 27 03:01:45.829120 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18 May 27 03:01:45.829126 kernel: random: crng init done May 27 03:01:45.829132 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 May 27 03:01:45.829138 kernel: secureboot: Secure boot enabled May 27 03:01:45.829144 kernel: ACPI: Early table checksum verification disabled May 27 03:01:45.829151 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS ) May 27 03:01:45.829156 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013) May 27 03:01:45.829162 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 27 03:01:45.829168 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 27 03:01:45.829174 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 27 03:01:45.829181 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 27 03:01:45.829189 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 27 03:01:45.829195 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 03:01:45.829201 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 27 03:01:45.829207 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 27 03:01:45.829213 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 27 03:01:45.829219 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 27 03:01:45.829225 kernel: ACPI: Use ACPI SPCR as default console: Yes May 27 03:01:45.829232 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 27 03:01:45.829238 kernel: NODE_DATA(0) allocated [mem 0xdc736dc0-0xdc73dfff] May 27 03:01:45.829243 kernel: Zone ranges: May 27 03:01:45.829251 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 27 03:01:45.829257 kernel: DMA32 empty May 27 03:01:45.829262 kernel: Normal empty May 27 03:01:45.829268 kernel: Device empty May 27 03:01:45.829274 kernel: Movable zone start for each node May 27 03:01:45.829280 kernel: Early memory node ranges May 27 03:01:45.829286 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff] May 27 03:01:45.829292 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff] May 27 03:01:45.829298 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff] May 27 03:01:45.829304 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff] May 27 03:01:45.829310 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff] May 27 03:01:45.829316 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff] May 27 03:01:45.829323 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff] May 27 03:01:45.829329 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff] May 27 03:01:45.829335 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 27 03:01:45.829344 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000000dcffffff] May 27 03:01:45.829350 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 27 03:01:45.829357 kernel: psci: probing for conduit method from ACPI. May 27 03:01:45.829363 kernel: psci: PSCIv1.1 detected in firmware. May 27 03:01:45.829371 kernel: psci: Using standard PSCI v0.2 function IDs May 27 03:01:45.829377 kernel: psci: Trusted OS migration not required May 27 03:01:45.829383 kernel: psci: SMC Calling Convention v1.1 May 27 03:01:45.829390 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 27 03:01:45.829397 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168 May 27 03:01:45.829403 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096 May 27 03:01:45.829409 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 27 03:01:45.829416 kernel: Detected PIPT I-cache on CPU0 May 27 03:01:45.829422 kernel: CPU features: detected: GIC system register CPU interface May 27 03:01:45.829430 kernel: CPU features: detected: Spectre-v4 May 27 03:01:45.829436 kernel: CPU features: detected: Spectre-BHB May 27 03:01:45.829443 kernel: CPU features: kernel page table isolation forced ON by KASLR May 27 03:01:45.829449 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 27 03:01:45.829455 kernel: CPU features: detected: ARM erratum 1418040 May 27 03:01:45.829462 kernel: CPU features: detected: SSBS not fully self-synchronizing May 27 03:01:45.829468 kernel: alternatives: applying boot alternatives May 27 03:01:45.829475 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=4c3f98aae7a61b3dcbab6391ba922461adab29dbcb79fd6e18169f93c5a4ab5a May 27 03:01:45.829482 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 27 03:01:45.829489 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 27 03:01:45.829495 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 27 03:01:45.829503 kernel: Fallback order for Node 0: 0 May 27 03:01:45.829509 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 May 27 03:01:45.829515 kernel: Policy zone: DMA May 27 03:01:45.829522 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 27 03:01:45.829528 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB May 27 03:01:45.829534 kernel: software IO TLB: area num 4. May 27 03:01:45.829541 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB May 27 03:01:45.829547 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB) May 27 03:01:45.829554 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 27 03:01:45.829560 kernel: rcu: Preemptible hierarchical RCU implementation. May 27 03:01:45.829567 kernel: rcu: RCU event tracing is enabled. May 27 03:01:45.829574 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 27 03:01:45.829581 kernel: Trampoline variant of Tasks RCU enabled. May 27 03:01:45.829588 kernel: Tracing variant of Tasks RCU enabled. May 27 03:01:45.829594 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 27 03:01:45.829601 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 27 03:01:45.829608 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 27 03:01:45.829614 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 27 03:01:45.829620 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 27 03:01:45.829627 kernel: GICv3: 256 SPIs implemented May 27 03:01:45.829633 kernel: GICv3: 0 Extended SPIs implemented May 27 03:01:45.829639 kernel: Root IRQ handler: gic_handle_irq May 27 03:01:45.829646 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 27 03:01:45.829653 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 May 27 03:01:45.829660 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 27 03:01:45.829666 kernel: ITS [mem 0x08080000-0x0809ffff] May 27 03:01:45.829673 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1) May 27 03:01:45.829679 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1) May 27 03:01:45.829686 kernel: GICv3: using LPI property table @0x00000000400f0000 May 27 03:01:45.829692 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000 May 27 03:01:45.829698 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 27 03:01:45.829705 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 27 03:01:45.829711 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 27 03:01:45.829718 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 27 03:01:45.829724 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 27 03:01:45.829732 kernel: arm-pv: using stolen time PV May 27 03:01:45.829738 kernel: Console: colour dummy device 80x25 May 27 03:01:45.829745 kernel: ACPI: Core revision 20240827 May 27 03:01:45.829752 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 27 03:01:45.829759 kernel: pid_max: default: 32768 minimum: 301 May 27 03:01:45.829765 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 27 03:01:45.829772 kernel: landlock: Up and running. May 27 03:01:45.829778 kernel: SELinux: Initializing. May 27 03:01:45.829785 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 27 03:01:45.829792 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 27 03:01:45.829799 kernel: rcu: Hierarchical SRCU implementation. May 27 03:01:45.829806 kernel: rcu: Max phase no-delay instances is 400. May 27 03:01:45.829813 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level May 27 03:01:45.829839 kernel: Remapping and enabling EFI services. May 27 03:01:45.829846 kernel: smp: Bringing up secondary CPUs ... 
May 27 03:01:45.829852 kernel: Detected PIPT I-cache on CPU1 May 27 03:01:45.829859 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 27 03:01:45.829866 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000 May 27 03:01:45.829875 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 27 03:01:45.829886 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 27 03:01:45.829893 kernel: Detected PIPT I-cache on CPU2 May 27 03:01:45.829901 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 27 03:01:45.829908 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000 May 27 03:01:45.829915 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 27 03:01:45.829922 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 27 03:01:45.829928 kernel: Detected PIPT I-cache on CPU3 May 27 03:01:45.829935 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 27 03:01:45.829944 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000 May 27 03:01:45.829951 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 27 03:01:45.829958 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 27 03:01:45.829965 kernel: smp: Brought up 1 node, 4 CPUs May 27 03:01:45.829972 kernel: SMP: Total of 4 processors activated. May 27 03:01:45.829978 kernel: CPU: All CPU(s) started at EL1 May 27 03:01:45.829986 kernel: CPU features: detected: 32-bit EL0 Support May 27 03:01:45.829993 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 27 03:01:45.830001 kernel: CPU features: detected: Common not Private translations May 27 03:01:45.830008 kernel: CPU features: detected: CRC32 instructions May 27 03:01:45.830015 kernel: CPU features: detected: Enhanced Virtualization Traps May 27 03:01:45.830022 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 27 03:01:45.830029 kernel: CPU features: detected: LSE atomic instructions May 27 03:01:45.830036 kernel: CPU features: detected: Privileged Access Never May 27 03:01:45.830043 kernel: CPU features: detected: RAS Extension Support May 27 03:01:45.830050 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 27 03:01:45.830057 kernel: alternatives: applying system-wide alternatives May 27 03:01:45.830065 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 May 27 03:01:45.830072 kernel: Memory: 2438880K/2572288K available (11072K kernel code, 2276K rwdata, 8936K rodata, 39424K init, 1034K bss, 127640K reserved, 0K cma-reserved) May 27 03:01:45.830080 kernel: devtmpfs: initialized May 27 03:01:45.830087 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 27 03:01:45.830094 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 27 03:01:45.830101 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 27 03:01:45.830108 kernel: 0 pages in range for non-PLT usage May 27 03:01:45.830115 kernel: 508544 pages in range for PLT usage May 27 03:01:45.830122 kernel: pinctrl core: initialized pinctrl subsystem May 27 03:01:45.830130 kernel: SMBIOS 3.0.0 present. 
May 27 03:01:45.830137 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 May 27 03:01:45.830144 kernel: DMI: Memory slots populated: 1/1 May 27 03:01:45.830151 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 27 03:01:45.830158 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 27 03:01:45.830165 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 27 03:01:45.830172 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 27 03:01:45.830179 kernel: audit: initializing netlink subsys (disabled) May 27 03:01:45.830186 kernel: audit: type=2000 audit(0.034:1): state=initialized audit_enabled=0 res=1 May 27 03:01:45.830195 kernel: thermal_sys: Registered thermal governor 'step_wise' May 27 03:01:45.830202 kernel: cpuidle: using governor menu May 27 03:01:45.830209 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 27 03:01:45.830216 kernel: ASID allocator initialised with 32768 entries May 27 03:01:45.830222 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 27 03:01:45.830229 kernel: Serial: AMBA PL011 UART driver May 27 03:01:45.830236 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 27 03:01:45.830243 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 27 03:01:45.830250 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 27 03:01:45.830259 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 27 03:01:45.830265 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 27 03:01:45.830273 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 27 03:01:45.830279 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 27 03:01:45.830286 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 27 03:01:45.830293 kernel: ACPI: Added _OSI(Module Device) May 27 03:01:45.830300 kernel: ACPI: Added _OSI(Processor Device) May 27 03:01:45.830307 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 27 03:01:45.830314 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 27 03:01:45.830321 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 27 03:01:45.830329 kernel: ACPI: Interpreter enabled May 27 03:01:45.830336 kernel: ACPI: Using GIC for interrupt routing May 27 03:01:45.830343 kernel: ACPI: MCFG table detected, 1 entries May 27 03:01:45.830350 kernel: ACPI: CPU0 has been hot-added May 27 03:01:45.830357 kernel: ACPI: CPU1 has been hot-added May 27 03:01:45.830363 kernel: ACPI: CPU2 has been hot-added May 27 03:01:45.830370 kernel: ACPI: CPU3 has been hot-added May 27 03:01:45.830377 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 27 03:01:45.830384 kernel: printk: legacy console [ttyAMA0] enabled May 27 03:01:45.830392 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 27 03:01:45.830529 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 27 03:01:45.830595 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 27 03:01:45.830655 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 27 03:01:45.830714 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 27 03:01:45.830772 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 27 03:01:45.830781 kernel: ACPI: Remapped I/O 
0x000000003eff0000 to [io 0x0000-0xffff window] May 27 03:01:45.830791 kernel: PCI host bridge to bus 0000:00 May 27 03:01:45.830884 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 27 03:01:45.830946 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 27 03:01:45.831000 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 27 03:01:45.831053 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 27 03:01:45.831132 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint May 27 03:01:45.831204 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint May 27 03:01:45.831270 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] May 27 03:01:45.831331 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] May 27 03:01:45.831393 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] May 27 03:01:45.831454 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned May 27 03:01:45.831516 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned May 27 03:01:45.831578 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned May 27 03:01:45.831634 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 27 03:01:45.831687 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 27 03:01:45.831740 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 27 03:01:45.831750 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 27 03:01:45.831757 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 27 03:01:45.831764 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 27 03:01:45.831771 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 27 03:01:45.831778 kernel: iommu: Default domain type: Translated May 27 03:01:45.831787 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 27 03:01:45.831794 kernel: efivars: Registered efivars operations May 27 03:01:45.831801 kernel: vgaarb: loaded May 27 03:01:45.831808 kernel: clocksource: Switched to clocksource arch_sys_counter May 27 03:01:45.831820 kernel: VFS: Disk quotas dquot_6.6.0 May 27 03:01:45.831837 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 27 03:01:45.831844 kernel: pnp: PnP ACPI init May 27 03:01:45.831919 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 27 03:01:45.831929 kernel: pnp: PnP ACPI: found 1 devices May 27 03:01:45.831939 kernel: NET: Registered PF_INET protocol family May 27 03:01:45.831946 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 27 03:01:45.831953 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 27 03:01:45.831960 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 27 03:01:45.831967 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 27 03:01:45.831974 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 27 03:01:45.831981 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 27 03:01:45.831988 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 27 03:01:45.831995 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 27 03:01:45.832004 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 27 
03:01:45.832011 kernel: PCI: CLS 0 bytes, default 64 May 27 03:01:45.832019 kernel: kvm [1]: HYP mode not available May 27 03:01:45.832025 kernel: Initialise system trusted keyrings May 27 03:01:45.832032 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 27 03:01:45.832039 kernel: Key type asymmetric registered May 27 03:01:45.832046 kernel: Asymmetric key parser 'x509' registered May 27 03:01:45.832053 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 27 03:01:45.832060 kernel: io scheduler mq-deadline registered May 27 03:01:45.832069 kernel: io scheduler kyber registered May 27 03:01:45.832076 kernel: io scheduler bfq registered May 27 03:01:45.832083 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 27 03:01:45.832090 kernel: ACPI: button: Power Button [PWRB] May 27 03:01:45.832097 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 27 03:01:45.832160 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 27 03:01:45.832170 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 27 03:01:45.832177 kernel: thunder_xcv, ver 1.0 May 27 03:01:45.832183 kernel: thunder_bgx, ver 1.0 May 27 03:01:45.832192 kernel: nicpf, ver 1.0 May 27 03:01:45.832199 kernel: nicvf, ver 1.0 May 27 03:01:45.832269 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 27 03:01:45.832329 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-27T03:01:45 UTC (1748314905) May 27 03:01:45.832338 kernel: hid: raw HID events driver (C) Jiri Kosina May 27 03:01:45.832345 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available May 27 03:01:45.832352 kernel: watchdog: NMI not fully supported May 27 03:01:45.832359 kernel: watchdog: Hard watchdog permanently disabled May 27 03:01:45.832368 kernel: NET: Registered PF_INET6 protocol family May 27 03:01:45.832375 kernel: Segment Routing with IPv6 May 27 03:01:45.832382 kernel: In-situ OAM (IOAM) with IPv6 May 27 03:01:45.832389 kernel: NET: Registered PF_PACKET protocol family May 27 03:01:45.832396 kernel: Key type dns_resolver registered May 27 03:01:45.832403 kernel: registered taskstats version 1 May 27 03:01:45.832409 kernel: Loading compiled-in X.509 certificates May 27 03:01:45.832417 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: 6bbf5412ef1f8a32378a640b6d048f74e6d74df0' May 27 03:01:45.832423 kernel: Demotion targets for Node 0: null May 27 03:01:45.832431 kernel: Key type .fscrypt registered May 27 03:01:45.832438 kernel: Key type fscrypt-provisioning registered May 27 03:01:45.832445 kernel: ima: No TPM chip found, activating TPM-bypass! May 27 03:01:45.832452 kernel: ima: Allocated hash algorithm: sha1 May 27 03:01:45.832459 kernel: ima: No architecture policies found May 27 03:01:45.832466 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 27 03:01:45.832473 kernel: clk: Disabling unused clocks May 27 03:01:45.832480 kernel: PM: genpd: Disabling unused power domains May 27 03:01:45.832487 kernel: Warning: unable to open an initial console. 
May 27 03:01:45.832496 kernel: Freeing unused kernel memory: 39424K May 27 03:01:45.832503 kernel: Run /init as init process May 27 03:01:45.832509 kernel: with arguments: May 27 03:01:45.832516 kernel: /init May 27 03:01:45.832523 kernel: with environment: May 27 03:01:45.832530 kernel: HOME=/ May 27 03:01:45.832536 kernel: TERM=linux May 27 03:01:45.832543 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 27 03:01:45.832551 systemd[1]: Successfully made /usr/ read-only. May 27 03:01:45.832562 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 03:01:45.832570 systemd[1]: Detected virtualization kvm. May 27 03:01:45.832578 systemd[1]: Detected architecture arm64. May 27 03:01:45.832585 systemd[1]: Running in initrd. May 27 03:01:45.832592 systemd[1]: No hostname configured, using default hostname. May 27 03:01:45.832600 systemd[1]: Hostname set to . May 27 03:01:45.832607 systemd[1]: Initializing machine ID from VM UUID. May 27 03:01:45.832616 systemd[1]: Queued start job for default target initrd.target. May 27 03:01:45.832624 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 03:01:45.832632 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 03:01:45.832640 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 27 03:01:45.832648 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 03:01:45.832655 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 27 03:01:45.832664 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 27 03:01:45.832674 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 27 03:01:45.832682 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 27 03:01:45.832690 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 03:01:45.832698 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 03:01:45.832706 systemd[1]: Reached target paths.target - Path Units. May 27 03:01:45.832713 systemd[1]: Reached target slices.target - Slice Units. May 27 03:01:45.832721 systemd[1]: Reached target swap.target - Swaps. May 27 03:01:45.832728 systemd[1]: Reached target timers.target - Timer Units. May 27 03:01:45.832738 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 27 03:01:45.832745 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 03:01:45.832753 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 27 03:01:45.832761 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 27 03:01:45.832768 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 03:01:45.832776 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 03:01:45.832783 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 27 03:01:45.832791 systemd[1]: Reached target sockets.target - Socket Units. May 27 03:01:45.832799 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 27 03:01:45.832809 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 03:01:45.832920 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 27 03:01:45.832931 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 27 03:01:45.832939 systemd[1]: Starting systemd-fsck-usr.service... May 27 03:01:45.832947 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 03:01:45.832955 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 03:01:45.832962 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:01:45.832970 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 03:01:45.832981 systemd[1]: Finished systemd-fsck-usr.service. May 27 03:01:45.832989 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 27 03:01:45.832997 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 03:01:45.833005 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 03:01:45.833014 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 27 03:01:45.833022 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 03:01:45.833029 kernel: Bridge firewalling registered May 27 03:01:45.833060 systemd-journald[244]: Collecting audit messages is disabled. May 27 03:01:45.833082 systemd-journald[244]: Journal started May 27 03:01:45.833100 systemd-journald[244]: Runtime Journal (/run/log/journal/844b6515745b4d36989e5e91caf89686) is 6M, max 48.5M, 42.4M free. May 27 03:01:45.837324 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:01:45.804918 systemd-modules-load[245]: Inserted module 'overlay' May 27 03:01:45.824379 systemd-modules-load[245]: Inserted module 'br_netfilter' May 27 03:01:45.841858 systemd[1]: Started systemd-journald.service - Journal Service. May 27 03:01:45.842203 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 03:01:45.843471 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 03:01:45.848066 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 27 03:01:45.849939 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 03:01:45.864764 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 03:01:45.872633 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 03:01:45.875329 systemd-tmpfiles[271]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 27 03:01:45.877807 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 03:01:45.878842 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 03:01:45.882954 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
May 27 03:01:45.886944 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 03:01:45.900719 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=4c3f98aae7a61b3dcbab6391ba922461adab29dbcb79fd6e18169f93c5a4ab5a May 27 03:01:45.916644 systemd-resolved[287]: Positive Trust Anchors: May 27 03:01:45.916662 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 03:01:45.916694 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 03:01:45.921660 systemd-resolved[287]: Defaulting to hostname 'linux'. May 27 03:01:45.922614 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 03:01:45.925489 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 03:01:45.975855 kernel: SCSI subsystem initialized May 27 03:01:45.979845 kernel: Loading iSCSI transport class v2.0-870. May 27 03:01:45.987861 kernel: iscsi: registered transport (tcp) May 27 03:01:45.999861 kernel: iscsi: registered transport (qla4xxx) May 27 03:01:45.999889 kernel: QLogic iSCSI HBA Driver May 27 03:01:46.018194 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 03:01:46.035344 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 03:01:46.036558 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 03:01:46.083812 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 27 03:01:46.087151 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 27 03:01:46.147860 kernel: raid6: neonx8 gen() 15766 MB/s May 27 03:01:46.164838 kernel: raid6: neonx4 gen() 15783 MB/s May 27 03:01:46.181861 kernel: raid6: neonx2 gen() 13205 MB/s May 27 03:01:46.198848 kernel: raid6: neonx1 gen() 10557 MB/s May 27 03:01:46.215864 kernel: raid6: int64x8 gen() 6875 MB/s May 27 03:01:46.232854 kernel: raid6: int64x4 gen() 7246 MB/s May 27 03:01:46.249854 kernel: raid6: int64x2 gen() 6083 MB/s May 27 03:01:46.266860 kernel: raid6: int64x1 gen() 5049 MB/s May 27 03:01:46.266894 kernel: raid6: using algorithm neonx4 gen() 15783 MB/s May 27 03:01:46.283855 kernel: raid6: .... 
xor() 12394 MB/s, rmw enabled May 27 03:01:46.283872 kernel: raid6: using neon recovery algorithm May 27 03:01:46.288848 kernel: xor: measuring software checksum speed May 27 03:01:46.288880 kernel: 8regs : 21613 MB/sec May 27 03:01:46.288896 kernel: 32regs : 19487 MB/sec May 27 03:01:46.290157 kernel: arm64_neon : 28118 MB/sec May 27 03:01:46.290172 kernel: xor: using function: arm64_neon (28118 MB/sec) May 27 03:01:46.348883 kernel: Btrfs loaded, zoned=no, fsverity=no May 27 03:01:46.354536 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 27 03:01:46.358945 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 03:01:46.384992 systemd-udevd[500]: Using default interface naming scheme 'v255'. May 27 03:01:46.389068 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 03:01:46.390994 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 27 03:01:46.415731 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation May 27 03:01:46.436289 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 27 03:01:46.438515 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 03:01:46.486423 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 03:01:46.489421 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 27 03:01:46.530842 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 27 03:01:46.531010 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 27 03:01:46.538013 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 03:01:46.542993 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 27 03:01:46.543015 kernel: GPT:9289727 != 19775487 May 27 03:01:46.543032 kernel: GPT:Alternate GPT header not at the end of the disk. May 27 03:01:46.543041 kernel: GPT:9289727 != 19775487 May 27 03:01:46.543049 kernel: GPT: Use GNU Parted to correct GPT errors. May 27 03:01:46.543058 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 03:01:46.538206 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:01:46.544231 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:01:46.549376 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:01:46.568746 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 27 03:01:46.570867 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:01:46.578628 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 27 03:01:46.592301 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 27 03:01:46.598134 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 27 03:01:46.598988 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 27 03:01:46.607061 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 27 03:01:46.607935 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 27 03:01:46.609640 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
May 27 03:01:46.611423 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 03:01:46.613872 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 27 03:01:46.615597 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 27 03:01:46.632632 disk-uuid[589]: Primary Header is updated. May 27 03:01:46.632632 disk-uuid[589]: Secondary Entries is updated. May 27 03:01:46.632632 disk-uuid[589]: Secondary Header is updated. May 27 03:01:46.635853 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 03:01:46.639276 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 27 03:01:47.648868 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 03:01:47.649332 disk-uuid[594]: The operation has completed successfully. May 27 03:01:47.674796 systemd[1]: disk-uuid.service: Deactivated successfully. May 27 03:01:47.674926 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 27 03:01:47.699340 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 27 03:01:47.720459 sh[609]: Success May 27 03:01:47.733857 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 27 03:01:47.733901 kernel: device-mapper: uevent: version 1.0.3 May 27 03:01:47.734845 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 27 03:01:47.745864 kernel: device-mapper: verity: sha256 using shash "sha256-ce" May 27 03:01:47.770346 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 27 03:01:47.773943 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 27 03:01:47.787919 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 27 03:01:47.794028 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 27 03:01:47.794067 kernel: BTRFS: device fsid 5c6341ea-4eb5-44b6-ac57-c4d29847e384 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (621) May 27 03:01:47.795228 kernel: BTRFS info (device dm-0): first mount of filesystem 5c6341ea-4eb5-44b6-ac57-c4d29847e384 May 27 03:01:47.795928 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 27 03:01:47.795962 kernel: BTRFS info (device dm-0): using free-space-tree May 27 03:01:47.799666 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 27 03:01:47.800914 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 27 03:01:47.802294 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 27 03:01:47.803040 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 27 03:01:47.804494 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 27 03:01:47.825269 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (650) May 27 03:01:47.825300 kernel: BTRFS info (device vda6): first mount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9 May 27 03:01:47.826196 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 27 03:01:47.826214 kernel: BTRFS info (device vda6): using free-space-tree May 27 03:01:47.834852 kernel: BTRFS info (device vda6): last unmount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9 May 27 03:01:47.835843 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 27 03:01:47.837466 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 27 03:01:47.902869 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 03:01:47.905750 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 03:01:47.955610 systemd-networkd[793]: lo: Link UP May 27 03:01:47.955621 systemd-networkd[793]: lo: Gained carrier May 27 03:01:47.956351 systemd-networkd[793]: Enumeration completed May 27 03:01:47.956432 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 03:01:47.957680 systemd[1]: Reached target network.target - Network. May 27 03:01:47.958855 systemd-networkd[793]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:01:47.958858 systemd-networkd[793]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 03:01:47.960414 systemd-networkd[793]: eth0: Link UP May 27 03:01:47.960418 systemd-networkd[793]: eth0: Gained carrier May 27 03:01:47.960426 systemd-networkd[793]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:01:47.983747 ignition[700]: Ignition 2.21.0 May 27 03:01:47.983764 ignition[700]: Stage: fetch-offline May 27 03:01:47.983800 ignition[700]: no configs at "/usr/lib/ignition/base.d" May 27 03:01:47.983807 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:01:47.984010 ignition[700]: parsed url from cmdline: "" May 27 03:01:47.986866 systemd-networkd[793]: eth0: DHCPv4 address 10.0.0.118/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 27 03:01:47.984013 ignition[700]: no config URL provided May 27 03:01:47.984018 ignition[700]: reading system config file "/usr/lib/ignition/user.ign" May 27 03:01:47.984025 ignition[700]: no config at "/usr/lib/ignition/user.ign" May 27 03:01:47.984044 ignition[700]: op(1): [started] loading QEMU firmware config module May 27 03:01:47.984048 ignition[700]: op(1): executing: "modprobe" "qemu_fw_cfg" May 27 03:01:47.992926 ignition[700]: op(1): [finished] loading QEMU firmware config module May 27 03:01:48.029195 ignition[700]: parsing config with SHA512: 1b5b245239f8befdd7a5f7983b2928fcb42fbc578ff32dad019eec763b6c06a4ed0445a260c86e77204e61044590f160e9379e379768c66beccf6c2b178f6184 May 27 03:01:48.035698 unknown[700]: fetched base config from "system" May 27 03:01:48.035712 unknown[700]: fetched user config from "qemu" May 27 03:01:48.036116 ignition[700]: fetch-offline: fetch-offline passed May 27 03:01:48.038004 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
May 27 03:01:48.036168 ignition[700]: Ignition finished successfully May 27 03:01:48.039357 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 27 03:01:48.040101 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 27 03:01:48.068698 ignition[810]: Ignition 2.21.0 May 27 03:01:48.068718 ignition[810]: Stage: kargs May 27 03:01:48.068866 ignition[810]: no configs at "/usr/lib/ignition/base.d" May 27 03:01:48.068875 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:01:48.070296 ignition[810]: kargs: kargs passed May 27 03:01:48.070358 ignition[810]: Ignition finished successfully May 27 03:01:48.074441 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 27 03:01:48.076131 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 27 03:01:48.099353 ignition[818]: Ignition 2.21.0 May 27 03:01:48.099372 ignition[818]: Stage: disks May 27 03:01:48.099499 ignition[818]: no configs at "/usr/lib/ignition/base.d" May 27 03:01:48.099508 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:01:48.100498 ignition[818]: disks: disks passed May 27 03:01:48.102070 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 27 03:01:48.100553 ignition[818]: Ignition finished successfully May 27 03:01:48.103465 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 27 03:01:48.104872 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 27 03:01:48.106488 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 03:01:48.108152 systemd[1]: Reached target sysinit.target - System Initialization. May 27 03:01:48.109767 systemd[1]: Reached target basic.target - Basic System. May 27 03:01:48.111974 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 27 03:01:48.141856 systemd-fsck[829]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 27 03:01:48.145652 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 27 03:01:48.148679 systemd[1]: Mounting sysroot.mount - /sysroot... May 27 03:01:48.206846 kernel: EXT4-fs (vda9): mounted filesystem 5656cec4-efbd-4a2d-be98-2263e6ae16bd r/w with ordered data mode. Quota mode: none. May 27 03:01:48.207229 systemd[1]: Mounted sysroot.mount - /sysroot. May 27 03:01:48.208437 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 27 03:01:48.210542 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 03:01:48.212150 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 27 03:01:48.213114 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 27 03:01:48.213166 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 27 03:01:48.213189 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 27 03:01:48.226212 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 27 03:01:48.228558 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
May 27 03:01:48.231025 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (837) May 27 03:01:48.233075 kernel: BTRFS info (device vda6): first mount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9 May 27 03:01:48.233103 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 27 03:01:48.233114 kernel: BTRFS info (device vda6): using free-space-tree May 27 03:01:48.236001 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 03:01:48.276861 initrd-setup-root[861]: cut: /sysroot/etc/passwd: No such file or directory May 27 03:01:48.279694 initrd-setup-root[868]: cut: /sysroot/etc/group: No such file or directory May 27 03:01:48.282471 initrd-setup-root[875]: cut: /sysroot/etc/shadow: No such file or directory May 27 03:01:48.285368 initrd-setup-root[882]: cut: /sysroot/etc/gshadow: No such file or directory May 27 03:01:48.354054 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 27 03:01:48.355681 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 27 03:01:48.357150 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 27 03:01:48.384861 kernel: BTRFS info (device vda6): last unmount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9 May 27 03:01:48.395038 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 27 03:01:48.415785 ignition[952]: INFO : Ignition 2.21.0 May 27 03:01:48.415785 ignition[952]: INFO : Stage: mount May 27 03:01:48.417359 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 03:01:48.417359 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:01:48.417359 ignition[952]: INFO : mount: mount passed May 27 03:01:48.417359 ignition[952]: INFO : Ignition finished successfully May 27 03:01:48.418190 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 27 03:01:48.419935 systemd[1]: Starting ignition-files.service - Ignition (files)... May 27 03:01:48.793602 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 27 03:01:48.795061 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 03:01:48.812033 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (963) May 27 03:01:48.812069 kernel: BTRFS info (device vda6): first mount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9 May 27 03:01:48.812959 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 27 03:01:48.812984 kernel: BTRFS info (device vda6): using free-space-tree May 27 03:01:48.817108 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 27 03:01:48.840754 ignition[980]: INFO : Ignition 2.21.0 May 27 03:01:48.840754 ignition[980]: INFO : Stage: files May 27 03:01:48.842960 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 03:01:48.842960 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:01:48.842960 ignition[980]: DEBUG : files: compiled without relabeling support, skipping May 27 03:01:48.846062 ignition[980]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 27 03:01:48.846062 ignition[980]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 27 03:01:48.846062 ignition[980]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 27 03:01:48.846062 ignition[980]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 27 03:01:48.846062 ignition[980]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 27 03:01:48.845325 unknown[980]: wrote ssh authorized keys file for user: core May 27 03:01:48.853502 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 27 03:01:48.853502 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 May 27 03:01:48.957683 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 27 03:01:49.207991 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 27 03:01:49.207991 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 27 03:01:49.211644 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 27 03:01:49.544916 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 27 03:01:49.693929 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 27 03:01:49.693929 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 27 03:01:49.697487 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 27 03:01:49.697487 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 27 03:01:49.697487 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 27 03:01:49.697487 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 03:01:49.697487 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 03:01:49.697487 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 03:01:49.697487 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 03:01:49.709211 ignition[980]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 27 03:01:49.709211 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 27 03:01:49.709211 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 27 03:01:49.709211 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 27 03:01:49.709211 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 27 03:01:49.709211 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 May 27 03:01:49.755968 systemd-networkd[793]: eth0: Gained IPv6LL May 27 03:01:50.122039 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 27 03:01:50.683607 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 27 03:01:50.683607 ignition[980]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 27 03:01:50.687417 ignition[980]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 03:01:50.687417 ignition[980]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 03:01:50.687417 ignition[980]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 27 03:01:50.687417 ignition[980]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 27 03:01:50.687417 ignition[980]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 27 03:01:50.687417 ignition[980]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 27 03:01:50.687417 ignition[980]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 27 03:01:50.687417 ignition[980]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 27 03:01:50.703546 ignition[980]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 27 03:01:50.706877 ignition[980]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 27 03:01:50.709778 ignition[980]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 27 03:01:50.709778 ignition[980]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 27 03:01:50.709778 ignition[980]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 27 03:01:50.709778 ignition[980]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 27 03:01:50.709778 ignition[980]: INFO : files: 
createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 27 03:01:50.709778 ignition[980]: INFO : files: files passed May 27 03:01:50.709778 ignition[980]: INFO : Ignition finished successfully May 27 03:01:50.710368 systemd[1]: Finished ignition-files.service - Ignition (files). May 27 03:01:50.714967 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 27 03:01:50.717951 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 27 03:01:50.733466 systemd[1]: ignition-quench.service: Deactivated successfully. May 27 03:01:50.734467 initrd-setup-root-after-ignition[1010]: grep: /sysroot/oem/oem-release: No such file or directory May 27 03:01:50.734516 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 27 03:01:50.738596 initrd-setup-root-after-ignition[1012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 03:01:50.738596 initrd-setup-root-after-ignition[1012]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 27 03:01:50.741771 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 03:01:50.740599 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 03:01:50.742912 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 27 03:01:50.745589 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 27 03:01:50.777841 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 27 03:01:50.777958 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 27 03:01:50.779940 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 27 03:01:50.781626 systemd[1]: Reached target initrd.target - Initrd Default Target. May 27 03:01:50.783302 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 27 03:01:50.784110 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 27 03:01:50.798242 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 03:01:50.800478 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 27 03:01:50.828128 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 27 03:01:50.829175 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 03:01:50.830778 systemd[1]: Stopped target timers.target - Timer Units. May 27 03:01:50.832343 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 27 03:01:50.832483 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 03:01:50.834497 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 27 03:01:50.836095 systemd[1]: Stopped target basic.target - Basic System. May 27 03:01:50.837389 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 27 03:01:50.838692 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 27 03:01:50.840247 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 27 03:01:50.841736 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
May 27 03:01:50.843239 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 27 03:01:50.844706 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 27 03:01:50.846313 systemd[1]: Stopped target sysinit.target - System Initialization. May 27 03:01:50.847814 systemd[1]: Stopped target local-fs.target - Local File Systems. May 27 03:01:50.849188 systemd[1]: Stopped target swap.target - Swaps. May 27 03:01:50.850332 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 27 03:01:50.850465 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 27 03:01:50.852281 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 27 03:01:50.853741 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 03:01:50.855273 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 27 03:01:50.855901 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 03:01:50.856795 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 27 03:01:50.856939 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 27 03:01:50.859125 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 27 03:01:50.859237 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 27 03:01:50.860608 systemd[1]: Stopped target paths.target - Path Units. May 27 03:01:50.861793 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 27 03:01:50.862597 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 03:01:50.864159 systemd[1]: Stopped target slices.target - Slice Units. May 27 03:01:50.865245 systemd[1]: Stopped target sockets.target - Socket Units. May 27 03:01:50.866630 systemd[1]: iscsid.socket: Deactivated successfully. May 27 03:01:50.866713 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 27 03:01:50.868358 systemd[1]: iscsiuio.socket: Deactivated successfully. May 27 03:01:50.868432 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 03:01:50.869683 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 27 03:01:50.869795 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 03:01:50.871121 systemd[1]: ignition-files.service: Deactivated successfully. May 27 03:01:50.871224 systemd[1]: Stopped ignition-files.service - Ignition (files). May 27 03:01:50.873185 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 27 03:01:50.874921 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 27 03:01:50.876245 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 27 03:01:50.876355 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 27 03:01:50.877938 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 27 03:01:50.878106 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 27 03:01:50.882653 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 27 03:01:50.883981 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 27 03:01:50.890974 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
May 27 03:01:50.895466 ignition[1036]: INFO : Ignition 2.21.0 May 27 03:01:50.895466 ignition[1036]: INFO : Stage: umount May 27 03:01:50.897206 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 03:01:50.897206 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:01:50.897206 ignition[1036]: INFO : umount: umount passed May 27 03:01:50.897206 ignition[1036]: INFO : Ignition finished successfully May 27 03:01:50.898590 systemd[1]: ignition-mount.service: Deactivated successfully. May 27 03:01:50.898717 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 27 03:01:50.902123 systemd[1]: Stopped target network.target - Network. May 27 03:01:50.902793 systemd[1]: ignition-disks.service: Deactivated successfully. May 27 03:01:50.902933 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 27 03:01:50.904067 systemd[1]: ignition-kargs.service: Deactivated successfully. May 27 03:01:50.904110 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 27 03:01:50.905571 systemd[1]: ignition-setup.service: Deactivated successfully. May 27 03:01:50.905612 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 27 03:01:50.907531 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 27 03:01:50.907571 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 27 03:01:50.909093 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 27 03:01:50.910469 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 27 03:01:50.918761 systemd[1]: systemd-resolved.service: Deactivated successfully. May 27 03:01:50.919737 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 27 03:01:50.922726 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 27 03:01:50.923011 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 27 03:01:50.923049 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 03:01:50.926269 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 27 03:01:50.929455 systemd[1]: systemd-networkd.service: Deactivated successfully. May 27 03:01:50.929558 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 27 03:01:50.932934 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 27 03:01:50.933077 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 27 03:01:50.934792 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 27 03:01:50.934856 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 27 03:01:50.937140 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 27 03:01:50.937781 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 27 03:01:50.937857 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 03:01:50.940231 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 27 03:01:50.940278 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 27 03:01:50.943093 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 27 03:01:50.943136 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
May 27 03:01:50.944996 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 03:01:50.949818 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 27 03:01:50.962375 systemd[1]: systemd-udevd.service: Deactivated successfully. May 27 03:01:50.963965 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 03:01:50.965465 systemd[1]: sysroot-boot.service: Deactivated successfully. May 27 03:01:50.965594 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 27 03:01:50.967749 systemd[1]: network-cleanup.service: Deactivated successfully. May 27 03:01:50.967855 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 27 03:01:50.969548 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 27 03:01:50.969609 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 27 03:01:50.970462 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 27 03:01:50.970493 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 27 03:01:50.971910 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 27 03:01:50.971953 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 27 03:01:50.974235 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 27 03:01:50.974282 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 27 03:01:50.976729 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 27 03:01:50.976772 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 03:01:50.979276 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 27 03:01:50.979324 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 27 03:01:50.980958 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 27 03:01:50.981696 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 27 03:01:50.981745 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 27 03:01:50.984195 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 27 03:01:50.984240 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 03:01:50.987289 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 27 03:01:50.987333 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 03:01:50.990189 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 27 03:01:50.990231 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 27 03:01:50.992280 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 03:01:50.992325 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:01:51.000957 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 27 03:01:51.002257 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 27 03:01:51.003559 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 27 03:01:51.006049 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 27 03:01:51.023958 systemd[1]: Switching root. 
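The Ignition files stage recorded above (ops 9 through 13) writes /etc/flatcar/update.conf, links /etc/extensions/kubernetes.raw to the image it downloads into /opt/extensions, installs prepare-helm.service and coreos-metadata.service, and flips their presets. A minimal sketch of the kind of Ignition JSON that drives such operations, assembled in Python so it stays self-contained: the paths, unit names, and download URL are taken from the log, while the spec version, file mode, and file/unit contents are assumptions for illustration only.

```python
# Sketch of an Ignition-style config matching the file/link/unit operations
# logged by the ignition-files stage. Spec version, mode, and contents are
# assumed; paths, unit names, and the download URL come from the log.
import json

config = {
    "ignition": {"version": "3.4.0"},  # assumed spec version
    "storage": {
        "files": [
            {"path": "/etc/flatcar/update.conf", "mode": 420,
             "contents": {"source": "data:,SERVER=disabled%0A"}},  # hypothetical contents
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw",
             "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"},
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"},  # placeholder body
            {"name": "coreos-metadata.service", "enabled": False},
        ],
    },
}

print(json.dumps(config, indent=2))
```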
May 27 03:01:51.045760 systemd-journald[244]: Journal stopped May 27 03:01:51.801962 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). May 27 03:01:51.802009 kernel: SELinux: policy capability network_peer_controls=1 May 27 03:01:51.802021 kernel: SELinux: policy capability open_perms=1 May 27 03:01:51.802033 kernel: SELinux: policy capability extended_socket_class=1 May 27 03:01:51.802042 kernel: SELinux: policy capability always_check_network=0 May 27 03:01:51.802051 kernel: SELinux: policy capability cgroup_seclabel=1 May 27 03:01:51.802060 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 27 03:01:51.802069 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 27 03:01:51.802083 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 27 03:01:51.802095 kernel: SELinux: policy capability userspace_initial_context=0 May 27 03:01:51.802108 kernel: audit: type=1403 audit(1748314911.225:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 27 03:01:51.802119 systemd[1]: Successfully loaded SELinux policy in 44.718ms. May 27 03:01:51.802135 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.833ms. May 27 03:01:51.802146 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 03:01:51.802158 systemd[1]: Detected virtualization kvm. May 27 03:01:51.802168 systemd[1]: Detected architecture arm64. May 27 03:01:51.802178 systemd[1]: Detected first boot. May 27 03:01:51.802187 systemd[1]: Initializing machine ID from VM UUID. May 27 03:01:51.802197 kernel: NET: Registered PF_VSOCK protocol family May 27 03:01:51.802206 zram_generator::config[1083]: No configuration found. May 27 03:01:51.802217 systemd[1]: Populated /etc with preset unit settings. May 27 03:01:51.802228 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 27 03:01:51.802238 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 27 03:01:51.802253 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 27 03:01:51.802263 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 27 03:01:51.802274 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 27 03:01:51.802284 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 27 03:01:51.802293 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 27 03:01:51.802303 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 27 03:01:51.802313 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 27 03:01:51.802323 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 27 03:01:51.802335 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 27 03:01:51.802345 systemd[1]: Created slice user.slice - User and Session Slice. May 27 03:01:51.802354 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 03:01:51.802365 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 27 03:01:51.802375 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 27 03:01:51.802390 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 27 03:01:51.802400 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 27 03:01:51.802410 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 03:01:51.802420 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 27 03:01:51.802431 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 03:01:51.802441 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 03:01:51.802451 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 27 03:01:51.802463 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 27 03:01:51.802473 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 27 03:01:51.802484 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 27 03:01:51.802494 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 03:01:51.802504 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 03:01:51.802515 systemd[1]: Reached target slices.target - Slice Units. May 27 03:01:51.802524 systemd[1]: Reached target swap.target - Swaps. May 27 03:01:51.802534 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 27 03:01:51.802544 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 27 03:01:51.802554 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 27 03:01:51.802564 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 03:01:51.802575 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 03:01:51.802585 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 03:01:51.802595 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 27 03:01:51.802607 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 27 03:01:51.802617 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 27 03:01:51.802627 systemd[1]: Mounting media.mount - External Media Directory... May 27 03:01:51.802637 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 27 03:01:51.802648 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 27 03:01:51.802662 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 27 03:01:51.802672 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 27 03:01:51.802682 systemd[1]: Reached target machines.target - Containers. May 27 03:01:51.802693 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 27 03:01:51.802704 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:01:51.802714 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 03:01:51.802724 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
May 27 03:01:51.802734 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 03:01:51.802744 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 03:01:51.802754 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 03:01:51.802764 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 27 03:01:51.802774 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 03:01:51.802786 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 27 03:01:51.802804 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 27 03:01:51.802816 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 27 03:01:51.802833 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 27 03:01:51.802844 systemd[1]: Stopped systemd-fsck-usr.service. May 27 03:01:51.802855 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:01:51.802865 kernel: fuse: init (API version 7.41) May 27 03:01:51.802875 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 03:01:51.802886 kernel: loop: module loaded May 27 03:01:51.802896 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 03:01:51.802906 kernel: ACPI: bus type drm_connector registered May 27 03:01:51.802916 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 03:01:51.802926 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 27 03:01:51.802936 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 27 03:01:51.802949 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 03:01:51.802959 systemd[1]: verity-setup.service: Deactivated successfully. May 27 03:01:51.802969 systemd[1]: Stopped verity-setup.service. May 27 03:01:51.802979 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 27 03:01:51.802989 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 27 03:01:51.802999 systemd[1]: Mounted media.mount - External Media Directory. May 27 03:01:51.803008 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 27 03:01:51.803018 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 27 03:01:51.803029 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 27 03:01:51.803061 systemd-journald[1148]: Collecting audit messages is disabled. May 27 03:01:51.803082 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 03:01:51.803093 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 27 03:01:51.803103 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 27 03:01:51.803116 systemd-journald[1148]: Journal started May 27 03:01:51.803136 systemd-journald[1148]: Runtime Journal (/run/log/journal/844b6515745b4d36989e5e91caf89686) is 6M, max 48.5M, 42.4M free. May 27 03:01:51.599238 systemd[1]: Queued start job for default target multi-user.target. 
May 27 03:01:51.618712 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 27 03:01:51.619165 systemd[1]: systemd-journald.service: Deactivated successfully. May 27 03:01:51.804160 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 27 03:01:51.805886 systemd[1]: Started systemd-journald.service - Journal Service. May 27 03:01:51.807307 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 03:01:51.807485 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 03:01:51.808617 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 03:01:51.808782 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 03:01:51.809862 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 03:01:51.810017 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 03:01:51.811093 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 27 03:01:51.811243 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 27 03:01:51.812309 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 03:01:51.812469 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 03:01:51.813527 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 03:01:51.814629 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 03:01:51.815962 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 27 03:01:51.817128 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 27 03:01:51.829244 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 03:01:51.831422 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 27 03:01:51.833194 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 27 03:01:51.834082 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 27 03:01:51.834110 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 03:01:51.835672 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 27 03:01:51.845672 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 27 03:01:51.846598 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:01:51.847635 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 27 03:01:51.849402 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 27 03:01:51.850752 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 03:01:51.852943 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 27 03:01:51.854131 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 03:01:51.857960 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 03:01:51.859997 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
May 27 03:01:51.861057 systemd-journald[1148]: Time spent on flushing to /var/log/journal/844b6515745b4d36989e5e91caf89686 is 15.898ms for 884 entries. May 27 03:01:51.861057 systemd-journald[1148]: System Journal (/var/log/journal/844b6515745b4d36989e5e91caf89686) is 8M, max 195.6M, 187.6M free. May 27 03:01:51.881995 systemd-journald[1148]: Received client request to flush runtime journal. May 27 03:01:51.862786 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 03:01:51.865540 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 03:01:51.868141 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 27 03:01:51.869553 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 27 03:01:51.875711 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 27 03:01:51.883467 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 27 03:01:51.888451 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 27 03:01:51.890870 kernel: loop0: detected capacity change from 0 to 138376 May 27 03:01:51.895981 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 27 03:01:51.901483 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. May 27 03:01:51.901502 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. May 27 03:01:51.905912 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 27 03:01:51.908038 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 03:01:51.909620 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 03:01:51.913248 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 27 03:01:51.921846 kernel: loop1: detected capacity change from 0 to 207008 May 27 03:01:51.932512 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 27 03:01:51.950194 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 27 03:01:51.952817 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 03:01:51.958964 kernel: loop2: detected capacity change from 0 to 107312 May 27 03:01:51.977580 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. May 27 03:01:51.977603 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. May 27 03:01:51.982871 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 03:01:51.984932 kernel: loop3: detected capacity change from 0 to 138376 May 27 03:01:51.991853 kernel: loop4: detected capacity change from 0 to 207008 May 27 03:01:51.997852 kernel: loop5: detected capacity change from 0 to 107312 May 27 03:01:52.002512 (sd-merge)[1225]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 27 03:01:52.002920 (sd-merge)[1225]: Merged extensions into '/usr'. May 27 03:01:52.007675 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)... May 27 03:01:52.008123 systemd[1]: Reloading... May 27 03:01:52.054846 zram_generator::config[1250]: No configuration found. May 27 03:01:52.145144 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
May 27 03:01:52.151226 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:01:52.215486 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 27 03:01:52.215692 systemd[1]: Reloading finished in 207 ms. May 27 03:01:52.245864 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 27 03:01:52.247169 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 27 03:01:52.263472 systemd[1]: Starting ensure-sysext.service... May 27 03:01:52.265524 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 03:01:52.280636 systemd[1]: Reload requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)... May 27 03:01:52.280659 systemd[1]: Reloading... May 27 03:01:52.290403 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 27 03:01:52.290438 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 27 03:01:52.290686 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 27 03:01:52.290931 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 27 03:01:52.291557 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 27 03:01:52.291772 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. May 27 03:01:52.291846 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. May 27 03:01:52.295035 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. May 27 03:01:52.295050 systemd-tmpfiles[1286]: Skipping /boot May 27 03:01:52.307698 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. May 27 03:01:52.307716 systemd-tmpfiles[1286]: Skipping /boot May 27 03:01:52.328168 zram_generator::config[1313]: No configuration found. May 27 03:01:52.399304 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:01:52.461430 systemd[1]: Reloading finished in 180 ms. May 27 03:01:52.479376 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 27 03:01:52.480935 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 03:01:52.498120 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 03:01:52.500391 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 27 03:01:52.503990 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 27 03:01:52.509003 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 03:01:52.512022 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 03:01:52.514993 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 27 03:01:52.521706 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
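The sd-merge messages above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr, which is what triggered the daemon reload that follows. The kubernetes image is picked up because of the /etc/extensions/kubernetes.raw symlink written by Ignition earlier in this boot; a small standard-library sketch that lists such links and their targets (it only reports whatever happens to be present on the machine it runs on):

```python
# List the extension images under /etc/extensions, resolving symlinks such as
# kubernetes.raw -> /opt/extensions/... written by Ignition earlier.
from pathlib import Path

for image in sorted(Path("/etc/extensions").glob("*.raw")):
    target = image.resolve() if image.is_symlink() else image
    print(f"{image.name} -> {target}")
```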
May 27 03:01:52.525054 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 03:01:52.526990 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 03:01:52.529979 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 03:01:52.530893 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:01:52.530994 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:01:52.539906 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 27 03:01:52.541762 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 03:01:52.542079 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 03:01:52.543466 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 03:01:52.543677 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 03:01:52.545299 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 03:01:52.545468 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 03:01:52.549555 systemd-udevd[1359]: Using default interface naming scheme 'v255'. May 27 03:01:52.551643 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:01:52.553082 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 03:01:52.556082 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 03:01:52.566595 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 03:01:52.567744 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:01:52.567960 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:01:52.569861 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 27 03:01:52.574132 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 27 03:01:52.576702 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 27 03:01:52.578078 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 03:01:52.579200 augenrules[1387]: No rules May 27 03:01:52.581014 systemd[1]: audit-rules.service: Deactivated successfully. May 27 03:01:52.581247 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 03:01:52.582488 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 27 03:01:52.584913 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 03:01:52.586884 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 03:01:52.588376 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 03:01:52.588513 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 03:01:52.589939 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 27 03:01:52.601033 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 03:01:52.602365 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 27 03:01:52.614001 systemd[1]: Finished ensure-sysext.service. May 27 03:01:52.620308 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 03:01:52.621497 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:01:52.622487 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 03:01:52.626050 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 03:01:52.632023 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 03:01:52.635282 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 03:01:52.636336 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:01:52.636381 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:01:52.639127 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 03:01:52.651961 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 27 03:01:52.653119 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 27 03:01:52.658010 augenrules[1428]: /sbin/augenrules: No change May 27 03:01:52.663947 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 03:01:52.664152 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 03:01:52.665287 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 03:01:52.665472 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 03:01:52.666503 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 03:01:52.666651 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 03:01:52.668021 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 03:01:52.668164 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 03:01:52.675770 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 27 03:01:52.675877 augenrules[1460]: No rules May 27 03:01:52.678385 systemd[1]: audit-rules.service: Deactivated successfully. May 27 03:01:52.678597 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 03:01:52.689926 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 03:01:52.689988 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 03:01:52.706714 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 27 03:01:52.708973 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 27 03:01:52.733946 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
May 27 03:01:52.735683 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 27 03:01:52.787947 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:01:52.883134 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:01:52.895019 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 27 03:01:52.897961 systemd[1]: Reached target time-set.target - System Time Set. May 27 03:01:52.906987 systemd-resolved[1353]: Positive Trust Anchors: May 27 03:01:52.907000 systemd-resolved[1353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 03:01:52.907031 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 03:01:52.909375 systemd-networkd[1440]: lo: Link UP May 27 03:01:52.909389 systemd-networkd[1440]: lo: Gained carrier May 27 03:01:52.910470 systemd-networkd[1440]: Enumeration completed May 27 03:01:52.910587 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 03:01:52.913765 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:01:52.913775 systemd-networkd[1440]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 03:01:52.914268 systemd-networkd[1440]: eth0: Link UP May 27 03:01:52.914389 systemd-networkd[1440]: eth0: Gained carrier May 27 03:01:52.914408 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:01:52.915190 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 27 03:01:52.917140 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 27 03:01:52.923127 systemd-resolved[1353]: Defaulting to hostname 'linux'. May 27 03:01:52.929603 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 03:01:52.930927 systemd[1]: Reached target network.target - Network. May 27 03:01:52.931847 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 03:01:52.933042 systemd[1]: Reached target sysinit.target - System Initialization. May 27 03:01:52.934168 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 27 03:01:52.935455 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 27 03:01:52.937019 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 27 03:01:52.938315 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 27 03:01:52.939693 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
May 27 03:01:52.940385 systemd-networkd[1440]: eth0: DHCPv4 address 10.0.0.118/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 27 03:01:52.941092 systemd-timesyncd[1446]: Network configuration changed, trying to establish connection. May 27 03:01:52.941184 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 27 03:01:52.941210 systemd[1]: Reached target paths.target - Path Units. May 27 03:01:52.942194 systemd[1]: Reached target timers.target - Timer Units. May 27 03:01:52.944268 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 27 03:01:52.946843 systemd[1]: Starting docker.socket - Docker Socket for the API... May 27 03:01:52.950292 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 27 03:01:52.951533 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 27 03:01:52.952488 systemd-timesyncd[1446]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 27 03:01:52.952546 systemd-timesyncd[1446]: Initial clock synchronization to Tue 2025-05-27 03:01:53.089612 UTC. May 27 03:01:52.952668 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 27 03:01:52.955560 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 27 03:01:52.956807 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 27 03:01:52.958507 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 27 03:01:52.959601 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 27 03:01:52.960942 systemd[1]: Reached target sockets.target - Socket Units. May 27 03:01:52.961672 systemd[1]: Reached target basic.target - Basic System. May 27 03:01:52.962454 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 27 03:01:52.962486 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 27 03:01:52.963538 systemd[1]: Starting containerd.service - containerd container runtime... May 27 03:01:52.965283 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 27 03:01:52.966908 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 27 03:01:52.968569 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 27 03:01:52.971847 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 27 03:01:52.972810 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 27 03:01:52.974117 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 27 03:01:52.976220 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 27 03:01:52.980024 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 27 03:01:52.981322 jq[1507]: false May 27 03:01:52.982762 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 27 03:01:52.986220 systemd[1]: Starting systemd-logind.service - User Login Management... May 27 03:01:52.988202 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
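The lease logged above gives eth0 the address 10.0.0.118/16 with gateway 10.0.0.1, handed out by 10.0.0.1, which systemd-timesyncd then also uses as its NTP server on port 123. A quick standard-library check that those numbers are consistent:

```python
# Sanity-check the DHCP lease from the log: 10.0.0.118/16, gateway 10.0.0.1.
import ipaddress

iface = ipaddress.ip_interface("10.0.0.118/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)                # 10.0.0.0/16
print(gateway in iface.network)     # True - the gateway sits inside the leased network
print(iface.network.num_addresses)  # 65536 addresses in a /16
```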
May 27 03:01:52.990001 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 27 03:01:52.992070 systemd[1]: Starting update-engine.service - Update Engine... May 27 03:01:52.994984 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 27 03:01:52.996843 extend-filesystems[1508]: Found loop3 May 27 03:01:52.998010 extend-filesystems[1508]: Found loop4 May 27 03:01:52.998010 extend-filesystems[1508]: Found loop5 May 27 03:01:52.998010 extend-filesystems[1508]: Found vda May 27 03:01:52.998010 extend-filesystems[1508]: Found vda1 May 27 03:01:52.998010 extend-filesystems[1508]: Found vda2 May 27 03:01:52.998010 extend-filesystems[1508]: Found vda3 May 27 03:01:52.998010 extend-filesystems[1508]: Found usr May 27 03:01:52.998010 extend-filesystems[1508]: Found vda4 May 27 03:01:52.998010 extend-filesystems[1508]: Found vda6 May 27 03:01:52.998010 extend-filesystems[1508]: Found vda7 May 27 03:01:52.998010 extend-filesystems[1508]: Found vda9 May 27 03:01:52.998010 extend-filesystems[1508]: Checking size of /dev/vda9 May 27 03:01:53.005302 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 27 03:01:53.008499 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 27 03:01:53.008931 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 27 03:01:53.014902 jq[1524]: true May 27 03:01:53.009217 systemd[1]: motdgen.service: Deactivated successfully. May 27 03:01:53.010017 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 27 03:01:53.013201 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 27 03:01:53.014057 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 27 03:01:53.028861 extend-filesystems[1508]: Resized partition /dev/vda9 May 27 03:01:53.031341 (ntainerd)[1530]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 27 03:01:53.032965 jq[1529]: true May 27 03:01:53.039099 extend-filesystems[1541]: resize2fs 1.47.2 (1-Jan-2025) May 27 03:01:53.047870 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 27 03:01:53.072690 update_engine[1522]: I20250527 03:01:53.069208 1522 main.cc:92] Flatcar Update Engine starting May 27 03:01:53.082867 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 27 03:01:53.082933 tar[1528]: linux-arm64/LICENSE May 27 03:01:53.087725 dbus-daemon[1505]: [system] SELinux support is enabled May 27 03:01:53.088513 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 27 03:01:53.092917 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 27 03:01:53.092944 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
May 27 03:01:53.097204 update_engine[1522]: I20250527 03:01:53.096936 1522 update_check_scheduler.cc:74] Next update check in 7m29s May 27 03:01:53.097236 extend-filesystems[1541]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 27 03:01:53.097236 extend-filesystems[1541]: old_desc_blocks = 1, new_desc_blocks = 1 May 27 03:01:53.097236 extend-filesystems[1541]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 27 03:01:53.095356 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 27 03:01:53.106593 tar[1528]: linux-arm64/helm May 27 03:01:53.106639 extend-filesystems[1508]: Resized filesystem in /dev/vda9 May 27 03:01:53.095373 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 27 03:01:53.099564 systemd[1]: extend-filesystems.service: Deactivated successfully. May 27 03:01:53.102823 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 27 03:01:53.111872 systemd[1]: Started update-engine.service - Update Engine. May 27 03:01:53.116013 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 27 03:01:53.125897 bash[1560]: Updated "/home/core/.ssh/authorized_keys" May 27 03:01:53.128799 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 27 03:01:53.131247 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 27 03:01:53.131933 systemd-logind[1516]: Watching system buttons on /dev/input/event0 (Power Button) May 27 03:01:53.136071 systemd-logind[1516]: New seat seat0. May 27 03:01:53.137787 systemd[1]: Started systemd-logind.service - User Login Management. 
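The extend-filesystems run above grows /dev/vda9 on-line from 553472 to 1864699 blocks of 4 KiB. In more familiar units:

```python
# Convert the block counts from the EXT4/resize2fs messages into sizes.
BLOCK = 4096  # bytes, per the "(4k) blocks" in the resize output

for label, blocks in (("before", 553_472), ("after", 1_864_699)):
    print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")

# before: 2.11 GiB
# after: 7.11 GiB
```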
May 27 03:01:53.186541 locksmithd[1563]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 27 03:01:53.287684 containerd[1530]: time="2025-05-27T03:01:53Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 27 03:01:53.288426 containerd[1530]: time="2025-05-27T03:01:53.288393132Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 27 03:01:53.298052 containerd[1530]: time="2025-05-27T03:01:53.298012813Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.602µs" May 27 03:01:53.298052 containerd[1530]: time="2025-05-27T03:01:53.298046300Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 27 03:01:53.298128 containerd[1530]: time="2025-05-27T03:01:53.298063004Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 27 03:01:53.298237 containerd[1530]: time="2025-05-27T03:01:53.298217205Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 27 03:01:53.298280 containerd[1530]: time="2025-05-27T03:01:53.298239879Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 27 03:01:53.298280 containerd[1530]: time="2025-05-27T03:01:53.298264046Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 03:01:53.298335 containerd[1530]: time="2025-05-27T03:01:53.298318150Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 03:01:53.298335 containerd[1530]: time="2025-05-27T03:01:53.298332513Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 03:01:53.298798 containerd[1530]: time="2025-05-27T03:01:53.298659959Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 03:01:53.298923 containerd[1530]: time="2025-05-27T03:01:53.298906956Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 03:01:53.298949 containerd[1530]: time="2025-05-27T03:01:53.298931446Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 03:01:53.298949 containerd[1530]: time="2025-05-27T03:01:53.298941775Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 27 03:01:53.299514 containerd[1530]: time="2025-05-27T03:01:53.299034086Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 27 03:01:53.299514 containerd[1530]: time="2025-05-27T03:01:53.299233031Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 03:01:53.299514 containerd[1530]: time="2025-05-27T03:01:53.299269624Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 03:01:53.299514 containerd[1530]: time="2025-05-27T03:01:53.299285561Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 27 03:01:53.299514 containerd[1530]: time="2025-05-27T03:01:53.299338293Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 27 03:01:53.299649 containerd[1530]: time="2025-05-27T03:01:53.299638909Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 27 03:01:53.299801 containerd[1530]: time="2025-05-27T03:01:53.299709474Z" level=info msg="metadata content store policy set" policy=shared May 27 03:01:53.304443 containerd[1530]: time="2025-05-27T03:01:53.304391156Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 27 03:01:53.304561 containerd[1530]: time="2025-05-27T03:01:53.304462851Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 27 03:01:53.304561 containerd[1530]: time="2025-05-27T03:01:53.304486735Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 27 03:01:53.304561 containerd[1530]: time="2025-05-27T03:01:53.304500614Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 27 03:01:53.304561 containerd[1530]: time="2025-05-27T03:01:53.304512637Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 27 03:01:53.304561 containerd[1530]: time="2025-05-27T03:01:53.304524176Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 27 03:01:53.304561 containerd[1530]: time="2025-05-27T03:01:53.304535554Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 27 03:01:53.304561 containerd[1530]: time="2025-05-27T03:01:53.304548142Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 27 03:01:53.304561 containerd[1530]: time="2025-05-27T03:01:53.304559196Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 27 03:01:53.304714 containerd[1530]: time="2025-05-27T03:01:53.304574609Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 27 03:01:53.304714 containerd[1530]: time="2025-05-27T03:01:53.304584372Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 27 03:01:53.304714 containerd[1530]: time="2025-05-27T03:01:53.304596193Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 27 03:01:53.304761 containerd[1530]: time="2025-05-27T03:01:53.304745957Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 27 03:01:53.304909 containerd[1530]: time="2025-05-27T03:01:53.304767381Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 27 03:01:53.304909 containerd[1530]: time="2025-05-27T03:01:53.304782631Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 27 
03:01:53.304909 containerd[1530]: time="2025-05-27T03:01:53.304794170Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 27 03:01:53.304909 containerd[1530]: time="2025-05-27T03:01:53.304804821Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 27 03:01:53.304909 containerd[1530]: time="2025-05-27T03:01:53.304815715Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 27 03:01:53.304909 containerd[1530]: time="2025-05-27T03:01:53.304826729Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 27 03:01:53.304909 containerd[1530]: time="2025-05-27T03:01:53.304866510Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 27 03:01:53.304909 containerd[1530]: time="2025-05-27T03:01:53.304880107Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 27 03:01:53.304909 containerd[1530]: time="2025-05-27T03:01:53.304898061Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 27 03:01:53.304909 containerd[1530]: time="2025-05-27T03:01:53.304916135Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 27 03:01:53.305224 containerd[1530]: time="2025-05-27T03:01:53.305203760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 27 03:01:53.305224 containerd[1530]: time="2025-05-27T03:01:53.305223772Z" level=info msg="Start snapshots syncer" May 27 03:01:53.305271 containerd[1530]: time="2025-05-27T03:01:53.305248907Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 27 03:01:53.305572 containerd[1530]: time="2025-05-27T03:01:53.305477224Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 27 03:01:53.305572 containerd[1530]: time="2025-05-27T03:01:53.305537662Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 03:01:53.305691 containerd[1530]: time="2025-05-27T03:01:53.305605564Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 27 03:01:53.305845 containerd[1530]: time="2025-05-27T03:01:53.305717362Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 03:01:53.305845 containerd[1530]: time="2025-05-27T03:01:53.305748105Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 03:01:53.305845 containerd[1530]: time="2025-05-27T03:01:53.305760653Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 03:01:53.305845 containerd[1530]: time="2025-05-27T03:01:53.305771788Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 03:01:53.305845 containerd[1530]: time="2025-05-27T03:01:53.305784013Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 03:01:53.305845 containerd[1530]: time="2025-05-27T03:01:53.305794745Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 27 03:01:53.306045 containerd[1530]: time="2025-05-27T03:01:53.305876042Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 03:01:53.306045 containerd[1530]: time="2025-05-27T03:01:53.305912232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 03:01:53.306045 containerd[1530]: 
time="2025-05-27T03:01:53.305925223Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 03:01:53.306045 containerd[1530]: time="2025-05-27T03:01:53.305935834Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 03:01:53.306045 containerd[1530]: time="2025-05-27T03:01:53.305973638Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 03:01:53.306045 containerd[1530]: time="2025-05-27T03:01:53.305987437Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 03:01:53.306045 containerd[1530]: time="2025-05-27T03:01:53.305997079Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 03:01:53.306045 containerd[1530]: time="2025-05-27T03:01:53.306006399Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 03:01:53.306045 containerd[1530]: time="2025-05-27T03:01:53.306014186Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 03:01:53.306045 containerd[1530]: time="2025-05-27T03:01:53.306023506Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 03:01:53.306045 containerd[1530]: time="2025-05-27T03:01:53.306034036Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 03:01:53.306535 containerd[1530]: time="2025-05-27T03:01:53.306164514Z" level=info msg="runtime interface created" May 27 03:01:53.306535 containerd[1530]: time="2025-05-27T03:01:53.306169759Z" level=info msg="created NRI interface" May 27 03:01:53.306535 containerd[1530]: time="2025-05-27T03:01:53.306187955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 03:01:53.306535 containerd[1530]: time="2025-05-27T03:01:53.306199655Z" level=info msg="Connect containerd service" May 27 03:01:53.306535 containerd[1530]: time="2025-05-27T03:01:53.306225113Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 03:01:53.307038 containerd[1530]: time="2025-05-27T03:01:53.307005602Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 03:01:53.409623 containerd[1530]: time="2025-05-27T03:01:53.409509993Z" level=info msg="Start subscribing containerd event" May 27 03:01:53.409623 containerd[1530]: time="2025-05-27T03:01:53.409586610Z" level=info msg="Start recovering state" May 27 03:01:53.409745 containerd[1530]: time="2025-05-27T03:01:53.409679889Z" level=info msg="Start event monitor" May 27 03:01:53.409745 containerd[1530]: time="2025-05-27T03:01:53.409693970Z" level=info msg="Start cni network conf syncer for default" May 27 03:01:53.409745 containerd[1530]: time="2025-05-27T03:01:53.409705428Z" level=info msg="Start streaming server" May 27 03:01:53.409745 containerd[1530]: time="2025-05-27T03:01:53.409714022Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 27 03:01:53.409745 containerd[1530]: 
time="2025-05-27T03:01:53.409720679Z" level=info msg="runtime interface starting up..." May 27 03:01:53.409745 containerd[1530]: time="2025-05-27T03:01:53.409727901Z" level=info msg="starting plugins..." May 27 03:01:53.409745 containerd[1530]: time="2025-05-27T03:01:53.409741457Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 03:01:53.410211 containerd[1530]: time="2025-05-27T03:01:53.410187560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 03:01:53.410396 containerd[1530]: time="2025-05-27T03:01:53.410332038Z" level=info msg=serving... address=/run/containerd/containerd.sock May 27 03:01:53.413161 containerd[1530]: time="2025-05-27T03:01:53.413087816Z" level=info msg="containerd successfully booted in 0.125769s" May 27 03:01:53.413199 systemd[1]: Started containerd.service - containerd container runtime. May 27 03:01:53.472514 sshd_keygen[1523]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 27 03:01:53.492190 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 27 03:01:53.494826 systemd[1]: Starting issuegen.service - Generate /run/issue... May 27 03:01:53.516150 tar[1528]: linux-arm64/README.md May 27 03:01:53.522437 systemd[1]: issuegen.service: Deactivated successfully. May 27 03:01:53.522620 systemd[1]: Finished issuegen.service - Generate /run/issue. May 27 03:01:53.524981 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 27 03:01:53.526099 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 27 03:01:53.546100 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 27 03:01:53.549102 systemd[1]: Started getty@tty1.service - Getty on tty1. May 27 03:01:53.551736 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 27 03:01:53.553046 systemd[1]: Reached target getty.target - Login Prompts. May 27 03:01:54.748009 systemd-networkd[1440]: eth0: Gained IPv6LL May 27 03:01:54.750537 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 27 03:01:54.752557 systemd[1]: Reached target network-online.target - Network is Online. May 27 03:01:54.756408 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 27 03:01:54.758715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:01:54.772419 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 27 03:01:54.788390 systemd[1]: coreos-metadata.service: Deactivated successfully. May 27 03:01:54.788652 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 27 03:01:54.790377 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 27 03:01:54.794692 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 27 03:01:55.313859 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:01:55.315190 systemd[1]: Reached target multi-user.target - Multi-User System. May 27 03:01:55.318615 (kubelet)[1635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:01:55.321619 systemd[1]: Startup finished in 2.057s (kernel) + 5.606s (initrd) + 4.141s (userspace) = 11.804s. 
May 27 03:01:55.742872 kubelet[1635]: E0527 03:01:55.742743 1635 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:01:55.745083 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:01:55.745218 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:01:55.746913 systemd[1]: kubelet.service: Consumed 845ms CPU time, 257.5M memory peak. May 27 03:01:58.855962 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 03:01:58.858369 systemd[1]: Started sshd@0-10.0.0.118:22-10.0.0.1:37472.service - OpenSSH per-connection server daemon (10.0.0.1:37472). May 27 03:01:58.923913 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 37472 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 03:01:58.925653 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:01:58.933858 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 27 03:01:58.935736 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 27 03:01:58.945201 systemd-logind[1516]: New session 1 of user core. May 27 03:01:58.967870 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 27 03:01:58.971184 systemd[1]: Starting user@500.service - User Manager for UID 500... May 27 03:01:58.994972 (systemd)[1652]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 03:01:58.997360 systemd-logind[1516]: New session c1 of user core. May 27 03:01:59.129237 systemd[1652]: Queued start job for default target default.target. May 27 03:01:59.152887 systemd[1652]: Created slice app.slice - User Application Slice. May 27 03:01:59.152914 systemd[1652]: Reached target paths.target - Paths. May 27 03:01:59.152953 systemd[1652]: Reached target timers.target - Timers. May 27 03:01:59.154293 systemd[1652]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 03:01:59.167908 systemd[1652]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 03:01:59.168018 systemd[1652]: Reached target sockets.target - Sockets. May 27 03:01:59.168068 systemd[1652]: Reached target basic.target - Basic System. May 27 03:01:59.168096 systemd[1652]: Reached target default.target - Main User Target. May 27 03:01:59.168124 systemd[1652]: Startup finished in 164ms. May 27 03:01:59.168375 systemd[1]: Started user@500.service - User Manager for UID 500. May 27 03:01:59.174086 systemd[1]: Started session-1.scope - Session 1 of User core. May 27 03:01:59.238970 systemd[1]: Started sshd@1-10.0.0.118:22-10.0.0.1:37482.service - OpenSSH per-connection server daemon (10.0.0.1:37482). May 27 03:01:59.309581 sshd[1663]: Accepted publickey for core from 10.0.0.1 port 37482 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 03:01:59.310977 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:01:59.316408 systemd-logind[1516]: New session 2 of user core. May 27 03:01:59.332029 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 27 03:01:59.384706 sshd[1665]: Connection closed by 10.0.0.1 port 37482 May 27 03:01:59.386626 sshd-session[1663]: pam_unix(sshd:session): session closed for user core May 27 03:01:59.396223 systemd[1]: sshd@1-10.0.0.118:22-10.0.0.1:37482.service: Deactivated successfully. May 27 03:01:59.399421 systemd[1]: session-2.scope: Deactivated successfully. May 27 03:01:59.403903 systemd-logind[1516]: Session 2 logged out. Waiting for processes to exit. May 27 03:01:59.409346 systemd[1]: Started sshd@2-10.0.0.118:22-10.0.0.1:37490.service - OpenSSH per-connection server daemon (10.0.0.1:37490). May 27 03:01:59.410649 systemd-logind[1516]: Removed session 2. May 27 03:01:59.462551 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 37490 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 03:01:59.463965 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:01:59.468544 systemd-logind[1516]: New session 3 of user core. May 27 03:01:59.480010 systemd[1]: Started session-3.scope - Session 3 of User core. May 27 03:01:59.528894 sshd[1673]: Connection closed by 10.0.0.1 port 37490 May 27 03:01:59.530301 sshd-session[1671]: pam_unix(sshd:session): session closed for user core May 27 03:01:59.551055 systemd[1]: sshd@2-10.0.0.118:22-10.0.0.1:37490.service: Deactivated successfully. May 27 03:01:59.554476 systemd[1]: session-3.scope: Deactivated successfully. May 27 03:01:59.555215 systemd-logind[1516]: Session 3 logged out. Waiting for processes to exit. May 27 03:01:59.557581 systemd[1]: Started sshd@3-10.0.0.118:22-10.0.0.1:37504.service - OpenSSH per-connection server daemon (10.0.0.1:37504). May 27 03:01:59.558741 systemd-logind[1516]: Removed session 3. May 27 03:01:59.646326 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 37504 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 03:01:59.647565 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:01:59.651890 systemd-logind[1516]: New session 4 of user core. May 27 03:01:59.658989 systemd[1]: Started session-4.scope - Session 4 of User core. May 27 03:01:59.710884 sshd[1681]: Connection closed by 10.0.0.1 port 37504 May 27 03:01:59.711381 sshd-session[1679]: pam_unix(sshd:session): session closed for user core May 27 03:01:59.729584 systemd[1]: sshd@3-10.0.0.118:22-10.0.0.1:37504.service: Deactivated successfully. May 27 03:01:59.731621 systemd[1]: session-4.scope: Deactivated successfully. May 27 03:01:59.733691 systemd-logind[1516]: Session 4 logged out. Waiting for processes to exit. May 27 03:01:59.736017 systemd-logind[1516]: Removed session 4. May 27 03:01:59.738018 systemd[1]: Started sshd@4-10.0.0.118:22-10.0.0.1:37520.service - OpenSSH per-connection server daemon (10.0.0.1:37520). May 27 03:01:59.802544 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 37520 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 03:01:59.803932 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:01:59.808818 systemd-logind[1516]: New session 5 of user core. May 27 03:01:59.824035 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 27 03:01:59.891626 sudo[1690]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 27 03:01:59.891930 sudo[1690]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:01:59.907688 sudo[1690]: pam_unix(sudo:session): session closed for user root May 27 03:01:59.909421 sshd[1689]: Connection closed by 10.0.0.1 port 37520 May 27 03:01:59.910041 sshd-session[1687]: pam_unix(sshd:session): session closed for user core May 27 03:01:59.930214 systemd[1]: sshd@4-10.0.0.118:22-10.0.0.1:37520.service: Deactivated successfully. May 27 03:01:59.932005 systemd[1]: session-5.scope: Deactivated successfully. May 27 03:01:59.932782 systemd-logind[1516]: Session 5 logged out. Waiting for processes to exit. May 27 03:01:59.935798 systemd[1]: Started sshd@5-10.0.0.118:22-10.0.0.1:37522.service - OpenSSH per-connection server daemon (10.0.0.1:37522). May 27 03:01:59.936236 systemd-logind[1516]: Removed session 5. May 27 03:01:59.992242 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 37522 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 03:01:59.993621 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:01:59.997846 systemd-logind[1516]: New session 6 of user core. May 27 03:02:00.007065 systemd[1]: Started session-6.scope - Session 6 of User core. May 27 03:02:00.059387 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 27 03:02:00.059693 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:02:00.164030 sudo[1700]: pam_unix(sudo:session): session closed for user root May 27 03:02:00.169352 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 27 03:02:00.169620 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:02:00.179076 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 03:02:00.226006 augenrules[1722]: No rules May 27 03:02:00.227307 systemd[1]: audit-rules.service: Deactivated successfully. May 27 03:02:00.227541 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 03:02:00.228446 sudo[1699]: pam_unix(sudo:session): session closed for user root May 27 03:02:00.229677 sshd[1698]: Connection closed by 10.0.0.1 port 37522 May 27 03:02:00.230090 sshd-session[1696]: pam_unix(sshd:session): session closed for user core May 27 03:02:00.241115 systemd[1]: sshd@5-10.0.0.118:22-10.0.0.1:37522.service: Deactivated successfully. May 27 03:02:00.243245 systemd[1]: session-6.scope: Deactivated successfully. May 27 03:02:00.245472 systemd-logind[1516]: Session 6 logged out. Waiting for processes to exit. May 27 03:02:00.248111 systemd[1]: Started sshd@6-10.0.0.118:22-10.0.0.1:37536.service - OpenSSH per-connection server daemon (10.0.0.1:37536). May 27 03:02:00.248624 systemd-logind[1516]: Removed session 6. May 27 03:02:00.313798 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 37536 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 03:02:00.315163 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:02:00.319604 systemd-logind[1516]: New session 7 of user core. May 27 03:02:00.330000 systemd[1]: Started session-7.scope - Session 7 of User core. 
May 27 03:02:00.381596 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 27 03:02:00.381916 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:02:00.750752 systemd[1]: Starting docker.service - Docker Application Container Engine... May 27 03:02:00.772270 (dockerd)[1754]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 27 03:02:01.061657 dockerd[1754]: time="2025-05-27T03:02:01.061502835Z" level=info msg="Starting up" May 27 03:02:01.063589 dockerd[1754]: time="2025-05-27T03:02:01.063538601Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 27 03:02:01.102519 dockerd[1754]: time="2025-05-27T03:02:01.102471713Z" level=info msg="Loading containers: start." May 27 03:02:01.109860 kernel: Initializing XFRM netlink socket May 27 03:02:01.327533 systemd-networkd[1440]: docker0: Link UP May 27 03:02:01.331701 dockerd[1754]: time="2025-05-27T03:02:01.331652433Z" level=info msg="Loading containers: done." May 27 03:02:01.345467 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1923286759-merged.mount: Deactivated successfully. May 27 03:02:01.348156 dockerd[1754]: time="2025-05-27T03:02:01.348102570Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 27 03:02:01.348271 dockerd[1754]: time="2025-05-27T03:02:01.348197819Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 27 03:02:01.348330 dockerd[1754]: time="2025-05-27T03:02:01.348310356Z" level=info msg="Initializing buildkit" May 27 03:02:01.371490 dockerd[1754]: time="2025-05-27T03:02:01.371441297Z" level=info msg="Completed buildkit initialization" May 27 03:02:01.377575 dockerd[1754]: time="2025-05-27T03:02:01.377517840Z" level=info msg="Daemon has completed initialization" May 27 03:02:01.378174 dockerd[1754]: time="2025-05-27T03:02:01.377585627Z" level=info msg="API listen on /run/docker.sock" May 27 03:02:01.377752 systemd[1]: Started docker.service - Docker Application Container Engine. May 27 03:02:01.934607 containerd[1530]: time="2025-05-27T03:02:01.934568762Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 27 03:02:02.509469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3204641489.mount: Deactivated successfully. 
May 27 03:02:03.517511 containerd[1530]: time="2025-05-27T03:02:03.517431607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:02:03.518041 containerd[1530]: time="2025-05-27T03:02:03.518006722Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=26326313" May 27 03:02:03.519128 containerd[1530]: time="2025-05-27T03:02:03.519093387Z" level=info msg="ImageCreate event name:\"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:02:03.521925 containerd[1530]: time="2025-05-27T03:02:03.521880646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:02:03.523077 containerd[1530]: time="2025-05-27T03:02:03.522899725Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"26323111\" in 1.588288391s" May 27 03:02:03.523077 containerd[1530]: time="2025-05-27T03:02:03.522937977Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\"" May 27 03:02:03.523654 containerd[1530]: time="2025-05-27T03:02:03.523624194Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 27 03:02:04.930426 containerd[1530]: time="2025-05-27T03:02:04.930359556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:02:04.930859 containerd[1530]: time="2025-05-27T03:02:04.930791785Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=22530549" May 27 03:02:04.931658 containerd[1530]: time="2025-05-27T03:02:04.931628486Z" level=info msg="ImageCreate event name:\"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:02:04.934695 containerd[1530]: time="2025-05-27T03:02:04.934656220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:02:04.935578 containerd[1530]: time="2025-05-27T03:02:04.935540081Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"24066313\" in 1.411786265s" May 27 03:02:04.935610 containerd[1530]: time="2025-05-27T03:02:04.935576314Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\"" May 27 03:02:04.937489 
containerd[1530]: time="2025-05-27T03:02:04.937172670Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 27 03:02:05.995954 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 27 03:02:05.997776 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:02:06.171714 containerd[1530]: time="2025-05-27T03:02:06.171674943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:02:06.173416 containerd[1530]: time="2025-05-27T03:02:06.173381333Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=17484192" May 27 03:02:06.174688 containerd[1530]: time="2025-05-27T03:02:06.174464926Z" level=info msg="ImageCreate event name:\"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:02:06.177735 containerd[1530]: time="2025-05-27T03:02:06.177704583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:02:06.178749 containerd[1530]: time="2025-05-27T03:02:06.178714461Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"19019974\" in 1.241420844s" May 27 03:02:06.178811 containerd[1530]: time="2025-05-27T03:02:06.178751158Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\"" May 27 03:02:06.179183 containerd[1530]: time="2025-05-27T03:02:06.179147192Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 27 03:02:06.179768 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:02:06.184301 (kubelet)[2033]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:02:06.223107 kubelet[2033]: E0527 03:02:06.223037 2033 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:02:06.226371 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:02:06.226510 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:02:06.226808 systemd[1]: kubelet.service: Consumed 148ms CPU time, 108.1M memory peak. May 27 03:02:07.132146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3176174177.mount: Deactivated successfully. 
May 27 03:02:07.484584 containerd[1530]: time="2025-05-27T03:02:07.484464838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:02:07.485070 containerd[1530]: time="2025-05-27T03:02:07.485003558Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=27377377" May 27 03:02:07.485757 containerd[1530]: time="2025-05-27T03:02:07.485711187Z" level=info msg="ImageCreate event name:\"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:02:07.487643 containerd[1530]: time="2025-05-27T03:02:07.487585328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:02:07.488214 containerd[1530]: time="2025-05-27T03:02:07.488050230Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"27376394\" in 1.308863776s" May 27 03:02:07.488214 containerd[1530]: time="2025-05-27T03:02:07.488087159Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\"" May 27 03:02:07.488675 containerd[1530]: time="2025-05-27T03:02:07.488643902Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 27 03:02:08.129752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3490956966.mount: Deactivated successfully. 
May 27 03:02:08.796596 containerd[1530]: time="2025-05-27T03:02:08.796446509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:02:08.797435 containerd[1530]: time="2025-05-27T03:02:08.797401715Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" May 27 03:02:08.798122 containerd[1530]: time="2025-05-27T03:02:08.798065086Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:02:08.801133 containerd[1530]: time="2025-05-27T03:02:08.801099817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:02:08.802947 containerd[1530]: time="2025-05-27T03:02:08.802919211Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.31424661s" May 27 03:02:08.803003 containerd[1530]: time="2025-05-27T03:02:08.802951316Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 27 03:02:08.803360 containerd[1530]: time="2025-05-27T03:02:08.803342997Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 27 03:02:09.271346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1760487952.mount: Deactivated successfully. 
May 27 03:02:09.276291 containerd[1530]: time="2025-05-27T03:02:09.276102864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:02:09.276754 containerd[1530]: time="2025-05-27T03:02:09.276721964Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 27 03:02:09.277824 containerd[1530]: time="2025-05-27T03:02:09.277776721Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:02:09.280268 containerd[1530]: time="2025-05-27T03:02:09.280217410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:02:09.280771 containerd[1530]: time="2025-05-27T03:02:09.280637098Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 477.270745ms" May 27 03:02:09.280771 containerd[1530]: time="2025-05-27T03:02:09.280666789Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 27 03:02:09.281342 containerd[1530]: time="2025-05-27T03:02:09.281322360Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 27 03:02:09.840992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3608498599.mount: Deactivated successfully. 
May 27 03:02:11.430114 containerd[1530]: time="2025-05-27T03:02:11.430051364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:02:11.430484 containerd[1530]: time="2025-05-27T03:02:11.430431670Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" May 27 03:02:11.431506 containerd[1530]: time="2025-05-27T03:02:11.431467344Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:02:11.434024 containerd[1530]: time="2025-05-27T03:02:11.433993760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:02:11.435097 containerd[1530]: time="2025-05-27T03:02:11.435064128Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.153716412s" May 27 03:02:11.435141 containerd[1530]: time="2025-05-27T03:02:11.435097298Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 27 03:02:15.676217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:02:15.676357 systemd[1]: kubelet.service: Consumed 148ms CPU time, 108.1M memory peak. May 27 03:02:15.678372 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:02:15.699784 systemd[1]: Reload requested from client PID 2192 ('systemctl') (unit session-7.scope)... May 27 03:02:15.699799 systemd[1]: Reloading... May 27 03:02:15.770859 zram_generator::config[2237]: No configuration found. May 27 03:02:15.888552 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:02:15.973188 systemd[1]: Reloading finished in 273 ms. May 27 03:02:16.020232 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:02:16.022905 systemd[1]: kubelet.service: Deactivated successfully. May 27 03:02:16.023149 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:02:16.023202 systemd[1]: kubelet.service: Consumed 95ms CPU time, 95.2M memory peak. May 27 03:02:16.024808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:02:16.156796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:02:16.160704 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 03:02:16.198790 kubelet[2281]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:02:16.198790 kubelet[2281]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. May 27 03:02:16.198790 kubelet[2281]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:02:16.199125 kubelet[2281]: I0527 03:02:16.198858 2281 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 03:02:17.329724 kubelet[2281]: I0527 03:02:17.329673 2281 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 27 03:02:17.329724 kubelet[2281]: I0527 03:02:17.329711 2281 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 03:02:17.330097 kubelet[2281]: I0527 03:02:17.329997 2281 server.go:954] "Client rotation is on, will bootstrap in background" May 27 03:02:17.359806 kubelet[2281]: E0527 03:02:17.359761 2281 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" May 27 03:02:17.360713 kubelet[2281]: I0527 03:02:17.360693 2281 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:02:17.373887 kubelet[2281]: I0527 03:02:17.373846 2281 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 03:02:17.376734 kubelet[2281]: I0527 03:02:17.376703 2281 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 03:02:17.379182 kubelet[2281]: I0527 03:02:17.379118 2281 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 03:02:17.379372 kubelet[2281]: I0527 03:02:17.379179 2281 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 03:02:17.379474 kubelet[2281]: I0527 03:02:17.379437 2281 topology_manager.go:138] "Creating topology manager with none policy" May 27 03:02:17.379474 kubelet[2281]: I0527 03:02:17.379447 2281 container_manager_linux.go:304] "Creating device plugin manager" May 27 03:02:17.379678 kubelet[2281]: I0527 03:02:17.379652 2281 state_mem.go:36] "Initialized new in-memory state store" May 27 03:02:17.382081 kubelet[2281]: I0527 03:02:17.382055 2281 kubelet.go:446] "Attempting to sync node with API server" May 27 03:02:17.382104 kubelet[2281]: I0527 03:02:17.382086 2281 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 03:02:17.382136 kubelet[2281]: I0527 03:02:17.382112 2281 kubelet.go:352] "Adding apiserver pod source" May 27 03:02:17.382136 kubelet[2281]: I0527 03:02:17.382125 2281 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 03:02:17.385754 kubelet[2281]: I0527 03:02:17.385676 2281 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 03:02:17.386432 kubelet[2281]: I0527 03:02:17.386409 2281 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 03:02:17.386601 kubelet[2281]: W0527 03:02:17.386585 2281 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 27 03:02:17.387132 kubelet[2281]: W0527 03:02:17.387075 2281 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused May 27 03:02:17.387278 kubelet[2281]: E0527 03:02:17.387257 2281 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" May 27 03:02:17.387733 kubelet[2281]: I0527 03:02:17.387709 2281 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 03:02:17.387783 kubelet[2281]: I0527 03:02:17.387752 2281 server.go:1287] "Started kubelet" May 27 03:02:17.388008 kubelet[2281]: W0527 03:02:17.387950 2281 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused May 27 03:02:17.388047 kubelet[2281]: E0527 03:02:17.388005 2281 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" May 27 03:02:17.388097 kubelet[2281]: I0527 03:02:17.388070 2281 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 03:02:17.389548 kubelet[2281]: I0527 03:02:17.388952 2281 server.go:479] "Adding debug handlers to kubelet server" May 27 03:02:17.391233 kubelet[2281]: I0527 03:02:17.391203 2281 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 03:02:17.396619 kubelet[2281]: I0527 03:02:17.396569 2281 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 03:02:17.397495 kubelet[2281]: E0527 03:02:17.397242 2281 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.118:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.118:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1843432c2e81456c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-27 03:02:17.387730284 +0000 UTC m=+1.223792000,LastTimestamp:2025-05-27 03:02:17.387730284 +0000 UTC m=+1.223792000,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 27 03:02:17.398016 kubelet[2281]: E0527 03:02:17.397582 2281 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:02:17.398016 kubelet[2281]: I0527 03:02:17.397663 2281 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 03:02:17.398144 kubelet[2281]: I0527 03:02:17.398119 2281 desired_state_of_world_populator.go:150] "Desired state populator 
starts to run" May 27 03:02:17.398227 kubelet[2281]: I0527 03:02:17.398200 2281 reconciler.go:26] "Reconciler: start to sync state" May 27 03:02:17.398339 kubelet[2281]: I0527 03:02:17.398272 2281 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 03:02:17.398552 kubelet[2281]: W0527 03:02:17.398507 2281 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused May 27 03:02:17.398577 kubelet[2281]: E0527 03:02:17.398561 2281 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" May 27 03:02:17.398839 kubelet[2281]: E0527 03:02:17.398624 2281 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="200ms" May 27 03:02:17.400557 kubelet[2281]: I0527 03:02:17.400530 2281 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 03:02:17.401057 kubelet[2281]: I0527 03:02:17.401003 2281 factory.go:221] Registration of the systemd container factory successfully May 27 03:02:17.401101 kubelet[2281]: E0527 03:02:17.401059 2281 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 03:02:17.401233 kubelet[2281]: I0527 03:02:17.401206 2281 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 03:02:17.402354 kubelet[2281]: I0527 03:02:17.402331 2281 factory.go:221] Registration of the containerd container factory successfully May 27 03:02:17.410702 kubelet[2281]: I0527 03:02:17.410483 2281 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 03:02:17.411990 kubelet[2281]: I0527 03:02:17.411936 2281 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 27 03:02:17.411990 kubelet[2281]: I0527 03:02:17.411959 2281 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 03:02:17.412238 kubelet[2281]: I0527 03:02:17.412154 2281 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 27 03:02:17.412238 kubelet[2281]: I0527 03:02:17.412173 2281 kubelet.go:2382] "Starting kubelet main sync loop" May 27 03:02:17.412238 kubelet[2281]: E0527 03:02:17.412210 2281 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 03:02:17.413682 kubelet[2281]: W0527 03:02:17.413287 2281 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused May 27 03:02:17.413682 kubelet[2281]: E0527 03:02:17.413328 2281 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" May 27 03:02:17.416551 kubelet[2281]: I0527 03:02:17.416527 2281 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 03:02:17.416551 kubelet[2281]: I0527 03:02:17.416547 2281 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 03:02:17.416655 kubelet[2281]: I0527 03:02:17.416570 2281 state_mem.go:36] "Initialized new in-memory state store" May 27 03:02:17.419047 kubelet[2281]: I0527 03:02:17.419022 2281 policy_none.go:49] "None policy: Start" May 27 03:02:17.419047 kubelet[2281]: I0527 03:02:17.419052 2281 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 03:02:17.419166 kubelet[2281]: I0527 03:02:17.419064 2281 state_mem.go:35] "Initializing new in-memory state store" May 27 03:02:17.424608 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 27 03:02:17.439192 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 03:02:17.442441 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 27 03:02:17.453874 kubelet[2281]: I0527 03:02:17.453727 2281 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 03:02:17.454166 kubelet[2281]: I0527 03:02:17.453977 2281 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 03:02:17.454166 kubelet[2281]: I0527 03:02:17.453998 2281 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 03:02:17.454610 kubelet[2281]: I0527 03:02:17.454271 2281 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 03:02:17.455022 kubelet[2281]: E0527 03:02:17.454995 2281 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 03:02:17.455078 kubelet[2281]: E0527 03:02:17.455051 2281 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 27 03:02:17.521071 systemd[1]: Created slice kubepods-burstable-pod7eed27e75a64121672a3ac2791a71f55.slice - libcontainer container kubepods-burstable-pod7eed27e75a64121672a3ac2791a71f55.slice. 
May 27 03:02:17.531782 kubelet[2281]: E0527 03:02:17.531742 2281 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:02:17.535172 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice - libcontainer container kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice. May 27 03:02:17.551284 kubelet[2281]: E0527 03:02:17.551248 2281 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:02:17.553621 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice - libcontainer container kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice. May 27 03:02:17.555347 kubelet[2281]: E0527 03:02:17.555318 2281 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:02:17.556130 kubelet[2281]: I0527 03:02:17.556100 2281 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:02:17.556559 kubelet[2281]: E0527 03:02:17.556520 2281 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" May 27 03:02:17.599875 kubelet[2281]: E0527 03:02:17.599061 2281 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="400ms" May 27 03:02:17.599875 kubelet[2281]: I0527 03:02:17.599166 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:02:17.599875 kubelet[2281]: I0527 03:02:17.599211 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 27 03:02:17.599875 kubelet[2281]: I0527 03:02:17.599244 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7eed27e75a64121672a3ac2791a71f55-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7eed27e75a64121672a3ac2791a71f55\") " pod="kube-system/kube-apiserver-localhost" May 27 03:02:17.599875 kubelet[2281]: I0527 03:02:17.599309 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7eed27e75a64121672a3ac2791a71f55-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7eed27e75a64121672a3ac2791a71f55\") " pod="kube-system/kube-apiserver-localhost" May 27 03:02:17.600044 kubelet[2281]: I0527 03:02:17.599329 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:02:17.600044 kubelet[2281]: I0527 03:02:17.599354 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:02:17.600044 kubelet[2281]: I0527 03:02:17.599370 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7eed27e75a64121672a3ac2791a71f55-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7eed27e75a64121672a3ac2791a71f55\") " pod="kube-system/kube-apiserver-localhost" May 27 03:02:17.600044 kubelet[2281]: I0527 03:02:17.599384 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:02:17.600044 kubelet[2281]: I0527 03:02:17.599399 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:02:17.757763 kubelet[2281]: I0527 03:02:17.757728 2281 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:02:17.758237 kubelet[2281]: E0527 03:02:17.758210 2281 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" May 27 03:02:17.796875 kubelet[2281]: E0527 03:02:17.796753 2281 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.118:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.118:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1843432c2e81456c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-27 03:02:17.387730284 +0000 UTC m=+1.223792000,LastTimestamp:2025-05-27 03:02:17.387730284 +0000 UTC m=+1.223792000,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 27 03:02:17.833234 containerd[1530]: time="2025-05-27T03:02:17.833180494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7eed27e75a64121672a3ac2791a71f55,Namespace:kube-system,Attempt:0,}" May 27 03:02:17.852287 containerd[1530]: time="2025-05-27T03:02:17.852143459Z" level=info msg="connecting to shim 232ad80be36e3f0c2af26c5bb405e5836540a92abd86829ef4c506456d83f0ca" 
address="unix:///run/containerd/s/1effd829abd7a7ccf95f27c373b8a6b1e1f0b44557d0e7d0ca7228efb9939d06" namespace=k8s.io protocol=ttrpc version=3 May 27 03:02:17.852516 containerd[1530]: time="2025-05-27T03:02:17.852492418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}" May 27 03:02:17.859278 containerd[1530]: time="2025-05-27T03:02:17.859177955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}" May 27 03:02:17.881257 containerd[1530]: time="2025-05-27T03:02:17.881208344Z" level=info msg="connecting to shim 1f6e2b9debd19bc7590f5a24f3f159d424bd0acaa9744e4e907eb73a50bf7508" address="unix:///run/containerd/s/07f67313f038f34336c76876f6df76b3cb33327ee6b2eec2d35e0f2381973cc9" namespace=k8s.io protocol=ttrpc version=3 May 27 03:02:17.882032 systemd[1]: Started cri-containerd-232ad80be36e3f0c2af26c5bb405e5836540a92abd86829ef4c506456d83f0ca.scope - libcontainer container 232ad80be36e3f0c2af26c5bb405e5836540a92abd86829ef4c506456d83f0ca. May 27 03:02:17.889416 containerd[1530]: time="2025-05-27T03:02:17.888777701Z" level=info msg="connecting to shim ab9c215b30d6e9c5d0ccb1ee906e5b5cebfce925d6eb382a151ce1eb49aec2e7" address="unix:///run/containerd/s/db0a796271cbd859701eb5a90b1847a10888e8e65eb00496a7f5c35188544bab" namespace=k8s.io protocol=ttrpc version=3 May 27 03:02:17.909345 systemd[1]: Started cri-containerd-1f6e2b9debd19bc7590f5a24f3f159d424bd0acaa9744e4e907eb73a50bf7508.scope - libcontainer container 1f6e2b9debd19bc7590f5a24f3f159d424bd0acaa9744e4e907eb73a50bf7508. May 27 03:02:17.912591 systemd[1]: Started cri-containerd-ab9c215b30d6e9c5d0ccb1ee906e5b5cebfce925d6eb382a151ce1eb49aec2e7.scope - libcontainer container ab9c215b30d6e9c5d0ccb1ee906e5b5cebfce925d6eb382a151ce1eb49aec2e7. 
May 27 03:02:17.926778 containerd[1530]: time="2025-05-27T03:02:17.926700224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7eed27e75a64121672a3ac2791a71f55,Namespace:kube-system,Attempt:0,} returns sandbox id \"232ad80be36e3f0c2af26c5bb405e5836540a92abd86829ef4c506456d83f0ca\"" May 27 03:02:17.933005 containerd[1530]: time="2025-05-27T03:02:17.932961223Z" level=info msg="CreateContainer within sandbox \"232ad80be36e3f0c2af26c5bb405e5836540a92abd86829ef4c506456d83f0ca\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 03:02:17.939555 containerd[1530]: time="2025-05-27T03:02:17.939489872Z" level=info msg="Container 9b1a0aeffdeb8b2174abed84f1c19b4d9ffd9e2c3b79a18c4eb9e367f7933fc3: CDI devices from CRI Config.CDIDevices: []" May 27 03:02:17.947255 containerd[1530]: time="2025-05-27T03:02:17.947215676Z" level=info msg="CreateContainer within sandbox \"232ad80be36e3f0c2af26c5bb405e5836540a92abd86829ef4c506456d83f0ca\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9b1a0aeffdeb8b2174abed84f1c19b4d9ffd9e2c3b79a18c4eb9e367f7933fc3\"" May 27 03:02:17.948119 containerd[1530]: time="2025-05-27T03:02:17.948080702Z" level=info msg="StartContainer for \"9b1a0aeffdeb8b2174abed84f1c19b4d9ffd9e2c3b79a18c4eb9e367f7933fc3\"" May 27 03:02:17.949997 containerd[1530]: time="2025-05-27T03:02:17.949965317Z" level=info msg="connecting to shim 9b1a0aeffdeb8b2174abed84f1c19b4d9ffd9e2c3b79a18c4eb9e367f7933fc3" address="unix:///run/containerd/s/1effd829abd7a7ccf95f27c373b8a6b1e1f0b44557d0e7d0ca7228efb9939d06" protocol=ttrpc version=3 May 27 03:02:17.950384 containerd[1530]: time="2025-05-27T03:02:17.950354751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f6e2b9debd19bc7590f5a24f3f159d424bd0acaa9744e4e907eb73a50bf7508\"" May 27 03:02:17.953639 containerd[1530]: time="2025-05-27T03:02:17.953600621Z" level=info msg="CreateContainer within sandbox \"1f6e2b9debd19bc7590f5a24f3f159d424bd0acaa9744e4e907eb73a50bf7508\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 03:02:17.955266 containerd[1530]: time="2025-05-27T03:02:17.955181760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab9c215b30d6e9c5d0ccb1ee906e5b5cebfce925d6eb382a151ce1eb49aec2e7\"" May 27 03:02:17.958089 containerd[1530]: time="2025-05-27T03:02:17.958056911Z" level=info msg="CreateContainer within sandbox \"ab9c215b30d6e9c5d0ccb1ee906e5b5cebfce925d6eb382a151ce1eb49aec2e7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 03:02:17.960572 containerd[1530]: time="2025-05-27T03:02:17.960540344Z" level=info msg="Container f2370378fd5360d3c7d33289658d4669b8ccc84034a87e8e41774dc49c285d43: CDI devices from CRI Config.CDIDevices: []" May 27 03:02:17.968365 containerd[1530]: time="2025-05-27T03:02:17.968305179Z" level=info msg="Container e626a7ee6965b74da2972ed08c8391a37267677fd9b60abbba3e5ab11aae6d80: CDI devices from CRI Config.CDIDevices: []" May 27 03:02:17.972042 systemd[1]: Started cri-containerd-9b1a0aeffdeb8b2174abed84f1c19b4d9ffd9e2c3b79a18c4eb9e367f7933fc3.scope - libcontainer container 9b1a0aeffdeb8b2174abed84f1c19b4d9ffd9e2c3b79a18c4eb9e367f7933fc3. 
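The containerd entries above record the CRI sequence the kubelet runs for each static pod: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, then StartContainer. A rough sketch of the same sequence against containerd's CRI socket; the socket path and the minimal configs are assumptions, and the kubelet's real requests carry far more detail:

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's default CRI endpoint; adjust if the node uses another socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Step 1: RunPodSandbox, with metadata mirroring the kube-apiserver-localhost entry above.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-apiserver-localhost",
				Uid:       "7eed27e75a64121672a3ac2791a71f55",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("sandbox id:", sb.PodSandboxId)

	// Steps 2 and 3, elided here: CreateContainer with PodSandboxId set to
	// sb.PodSandboxId, then StartContainer with the returned container id.
}
```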
May 27 03:02:17.977686 containerd[1530]: time="2025-05-27T03:02:17.977639532Z" level=info msg="CreateContainer within sandbox \"1f6e2b9debd19bc7590f5a24f3f159d424bd0acaa9744e4e907eb73a50bf7508\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f2370378fd5360d3c7d33289658d4669b8ccc84034a87e8e41774dc49c285d43\"" May 27 03:02:17.978443 containerd[1530]: time="2025-05-27T03:02:17.978127146Z" level=info msg="StartContainer for \"f2370378fd5360d3c7d33289658d4669b8ccc84034a87e8e41774dc49c285d43\"" May 27 03:02:17.978443 containerd[1530]: time="2025-05-27T03:02:17.978143055Z" level=info msg="CreateContainer within sandbox \"ab9c215b30d6e9c5d0ccb1ee906e5b5cebfce925d6eb382a151ce1eb49aec2e7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e626a7ee6965b74da2972ed08c8391a37267677fd9b60abbba3e5ab11aae6d80\"" May 27 03:02:17.978673 containerd[1530]: time="2025-05-27T03:02:17.978648742Z" level=info msg="StartContainer for \"e626a7ee6965b74da2972ed08c8391a37267677fd9b60abbba3e5ab11aae6d80\"" May 27 03:02:17.979792 containerd[1530]: time="2025-05-27T03:02:17.979758136Z" level=info msg="connecting to shim e626a7ee6965b74da2972ed08c8391a37267677fd9b60abbba3e5ab11aae6d80" address="unix:///run/containerd/s/db0a796271cbd859701eb5a90b1847a10888e8e65eb00496a7f5c35188544bab" protocol=ttrpc version=3 May 27 03:02:17.980323 containerd[1530]: time="2025-05-27T03:02:17.980231444Z" level=info msg="connecting to shim f2370378fd5360d3c7d33289658d4669b8ccc84034a87e8e41774dc49c285d43" address="unix:///run/containerd/s/07f67313f038f34336c76876f6df76b3cb33327ee6b2eec2d35e0f2381973cc9" protocol=ttrpc version=3 May 27 03:02:17.999463 kubelet[2281]: E0527 03:02:17.999427 2281 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="800ms" May 27 03:02:18.001014 systemd[1]: Started cri-containerd-f2370378fd5360d3c7d33289658d4669b8ccc84034a87e8e41774dc49c285d43.scope - libcontainer container f2370378fd5360d3c7d33289658d4669b8ccc84034a87e8e41774dc49c285d43. May 27 03:02:18.004307 systemd[1]: Started cri-containerd-e626a7ee6965b74da2972ed08c8391a37267677fd9b60abbba3e5ab11aae6d80.scope - libcontainer container e626a7ee6965b74da2972ed08c8391a37267677fd9b60abbba3e5ab11aae6d80. 
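The "Failed to ensure lease exists, will retry" entries show the kubelet's node-lease controller backing off while the API server is unreachable (interval 400ms earlier, 800ms here). A simplified sketch of that ensure-with-backoff loop; the kubeconfig path and the doubling policy are assumptions chosen only to mirror the logged intervals:

```go
package main

import (
	"context"
	"log"
	"time"

	coordinationv1 "k8s.io/api/coordination/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption; the real kubelet uses its own client config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	holder := "localhost"
	duration := int32(40)
	lease := &coordinationv1.Lease{
		ObjectMeta: metav1.ObjectMeta{Name: "localhost", Namespace: "kube-node-lease"},
		Spec: coordinationv1.LeaseSpec{
			HolderIdentity:       &holder,
			LeaseDurationSeconds: &duration,
		},
	}

	interval := 400 * time.Millisecond // first retry interval seen in the log
	for {
		_, err := cs.CoordinationV1().Leases("kube-node-lease").Create(context.TODO(), lease, metav1.CreateOptions{})
		if err == nil {
			log.Println("lease ensured")
			return
		}
		// While 10.0.0.118:6443 refuses connections this keeps failing and the
		// interval grows, matching the 400ms -> 800ms progression in the log.
		log.Printf("ensure lease failed: %v; retrying in %s", err, interval)
		time.Sleep(interval)
		interval *= 2
	}
}
```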
May 27 03:02:18.015128 containerd[1530]: time="2025-05-27T03:02:18.014790765Z" level=info msg="StartContainer for \"9b1a0aeffdeb8b2174abed84f1c19b4d9ffd9e2c3b79a18c4eb9e367f7933fc3\" returns successfully" May 27 03:02:18.061182 containerd[1530]: time="2025-05-27T03:02:18.060896337Z" level=info msg="StartContainer for \"f2370378fd5360d3c7d33289658d4669b8ccc84034a87e8e41774dc49c285d43\" returns successfully" May 27 03:02:18.083396 containerd[1530]: time="2025-05-27T03:02:18.082961425Z" level=info msg="StartContainer for \"e626a7ee6965b74da2972ed08c8391a37267677fd9b60abbba3e5ab11aae6d80\" returns successfully" May 27 03:02:18.160788 kubelet[2281]: I0527 03:02:18.160683 2281 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:02:18.161296 kubelet[2281]: E0527 03:02:18.161069 2281 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" May 27 03:02:18.418511 kubelet[2281]: E0527 03:02:18.418401 2281 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:02:18.421230 kubelet[2281]: E0527 03:02:18.421156 2281 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:02:18.423778 kubelet[2281]: E0527 03:02:18.423756 2281 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:02:18.964032 kubelet[2281]: I0527 03:02:18.963570 2281 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:02:19.430057 kubelet[2281]: E0527 03:02:19.429388 2281 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:02:19.430057 kubelet[2281]: E0527 03:02:19.429783 2281 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:02:19.638971 kubelet[2281]: E0527 03:02:19.638938 2281 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 27 03:02:19.724068 kubelet[2281]: I0527 03:02:19.723933 2281 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 03:02:19.724068 kubelet[2281]: E0527 03:02:19.723976 2281 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 27 03:02:19.799060 kubelet[2281]: I0527 03:02:19.799016 2281 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 03:02:19.808346 kubelet[2281]: E0527 03:02:19.808049 2281 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 27 03:02:19.808346 kubelet[2281]: I0527 03:02:19.808083 2281 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 03:02:19.811388 kubelet[2281]: E0527 03:02:19.810621 2281 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: 
no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 27 03:02:19.811388 kubelet[2281]: I0527 03:02:19.810655 2281 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 03:02:19.812850 kubelet[2281]: E0527 03:02:19.812809 2281 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 27 03:02:20.384201 kubelet[2281]: I0527 03:02:20.384140 2281 apiserver.go:52] "Watching apiserver" May 27 03:02:20.398977 kubelet[2281]: I0527 03:02:20.398899 2281 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 03:02:20.880291 kubelet[2281]: I0527 03:02:20.880187 2281 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 03:02:21.618747 systemd[1]: Reload requested from client PID 2554 ('systemctl') (unit session-7.scope)... May 27 03:02:21.618763 systemd[1]: Reloading... May 27 03:02:21.680932 zram_generator::config[2597]: No configuration found. May 27 03:02:21.822545 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:02:21.920230 systemd[1]: Reloading finished in 301 ms. May 27 03:02:21.947455 kubelet[2281]: I0527 03:02:21.947254 2281 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:02:21.947387 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:02:21.966925 systemd[1]: kubelet.service: Deactivated successfully. May 27 03:02:21.967232 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:02:21.967325 systemd[1]: kubelet.service: Consumed 1.647s CPU time, 130.1M memory peak. May 27 03:02:21.969458 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:02:22.106025 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:02:22.110907 (kubelet)[2639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 03:02:22.152667 kubelet[2639]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:02:22.152667 kubelet[2639]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 03:02:22.152667 kubelet[2639]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
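The restarted kubelet warns that --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir should move into the config file. A hedged sketch of what an equivalent KubeletConfiguration could look like, built with the v1beta1 API types and printed as YAML; the endpoint and plugin-dir values are assumptions, while the static pod path, cgroup driver and eviction thresholds echo the NodeConfig dump that follows:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletv1beta1.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		// Config-file equivalents of the deprecated flags in the log.
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock", // assumption
		VolumePluginDir:          "/var/lib/kubelet/volumeplugins",         // assumption
		StaticPodPath:            "/etc/kubernetes/manifests",
		CgroupDriver:             "systemd",
		EvictionHard: map[string]string{ // mirrors the HardEvictionThresholds in the NodeConfig dump
			"nodefs.available":   "10%",
			"nodefs.inodesFree":  "5%",
			"imagefs.available":  "15%",
			"imagefs.inodesFree": "5%",
			"memory.available":   "100Mi",
		},
	}
	out, err := yaml.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```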
May 27 03:02:22.153125 kubelet[2639]: I0527 03:02:22.152744 2639 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 03:02:22.163577 kubelet[2639]: I0527 03:02:22.162269 2639 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 27 03:02:22.163577 kubelet[2639]: I0527 03:02:22.162303 2639 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 03:02:22.163577 kubelet[2639]: I0527 03:02:22.162734 2639 server.go:954] "Client rotation is on, will bootstrap in background" May 27 03:02:22.164567 kubelet[2639]: I0527 03:02:22.164541 2639 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 27 03:02:22.167628 kubelet[2639]: I0527 03:02:22.167009 2639 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:02:22.172519 kubelet[2639]: I0527 03:02:22.172390 2639 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 03:02:22.175214 kubelet[2639]: I0527 03:02:22.175188 2639 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 27 03:02:22.175405 kubelet[2639]: I0527 03:02:22.175374 2639 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 03:02:22.175593 kubelet[2639]: I0527 03:02:22.175404 2639 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 03:02:22.175673 kubelet[2639]: I0527 03:02:22.175604 2639 topology_manager.go:138] "Creating topology manager with none policy" May 27 03:02:22.175673 kubelet[2639]: I0527 03:02:22.175614 2639 container_manager_linux.go:304] "Creating device plugin manager" May 27 03:02:22.175673 kubelet[2639]: I0527 03:02:22.175658 2639 state_mem.go:36] "Initialized new in-memory state store" May 27 03:02:22.175799 kubelet[2639]: I0527 
03:02:22.175787 2639 kubelet.go:446] "Attempting to sync node with API server" May 27 03:02:22.175846 kubelet[2639]: I0527 03:02:22.175802 2639 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 03:02:22.175846 kubelet[2639]: I0527 03:02:22.175838 2639 kubelet.go:352] "Adding apiserver pod source" May 27 03:02:22.175901 kubelet[2639]: I0527 03:02:22.175851 2639 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 03:02:22.176854 kubelet[2639]: I0527 03:02:22.176755 2639 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 03:02:22.177942 kubelet[2639]: I0527 03:02:22.177914 2639 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 03:02:22.178848 kubelet[2639]: I0527 03:02:22.178511 2639 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 03:02:22.178848 kubelet[2639]: I0527 03:02:22.178551 2639 server.go:1287] "Started kubelet" May 27 03:02:22.179105 kubelet[2639]: I0527 03:02:22.179059 2639 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 03:02:22.180860 kubelet[2639]: I0527 03:02:22.179110 2639 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 03:02:22.181080 kubelet[2639]: I0527 03:02:22.181043 2639 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 03:02:22.182528 kubelet[2639]: I0527 03:02:22.182509 2639 server.go:479] "Adding debug handlers to kubelet server" May 27 03:02:22.184840 kubelet[2639]: I0527 03:02:22.183157 2639 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 03:02:22.187128 kubelet[2639]: I0527 03:02:22.186687 2639 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 03:02:22.189864 kubelet[2639]: I0527 03:02:22.189839 2639 factory.go:221] Registration of the systemd container factory successfully May 27 03:02:22.190701 kubelet[2639]: I0527 03:02:22.189970 2639 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 03:02:22.191877 kubelet[2639]: I0527 03:02:22.191660 2639 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 03:02:22.191877 kubelet[2639]: E0527 03:02:22.191776 2639 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:02:22.200441 kubelet[2639]: I0527 03:02:22.200410 2639 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 03:02:22.200591 kubelet[2639]: I0527 03:02:22.200577 2639 reconciler.go:26] "Reconciler: start to sync state" May 27 03:02:22.201562 kubelet[2639]: I0527 03:02:22.201358 2639 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 03:02:22.202436 kubelet[2639]: I0527 03:02:22.202286 2639 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 27 03:02:22.202436 kubelet[2639]: I0527 03:02:22.202417 2639 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 03:02:22.202525 kubelet[2639]: I0527 03:02:22.202439 2639 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
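Among the startup lines above, the kubelet begins serving the podresources API on unix:/var/lib/kubelet/pod-resources/kubelet.sock (rate limited to qps=100, burst=10). A small sketch of a client for that socket using the v1 podresources API; treat it as illustrative, since reading the socket normally requires root on the node:

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	podresourcesv1 "k8s.io/kubelet/pkg/apis/podresources/v1"
)

func main() {
	// Socket path taken from the "Starting to serve the podresources API" entry.
	conn, err := grpc.Dial("unix:///var/lib/kubelet/pod-resources/kubelet.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := podresourcesv1.NewPodResourcesListerClient(conn)
	resp, err := client.List(context.TODO(), &podresourcesv1.ListPodResourcesRequest{})
	if err != nil {
		panic(err)
	}
	for _, pod := range resp.GetPodResources() {
		fmt.Printf("%s/%s: %d containers\n", pod.GetNamespace(), pod.GetName(), len(pod.GetContainers()))
	}
}
```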
May 27 03:02:22.202525 kubelet[2639]: I0527 03:02:22.202455 2639 kubelet.go:2382] "Starting kubelet main sync loop" May 27 03:02:22.202525 kubelet[2639]: I0527 03:02:22.202504 2639 factory.go:221] Registration of the containerd container factory successfully May 27 03:02:22.203437 kubelet[2639]: E0527 03:02:22.202518 2639 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 03:02:22.205176 kubelet[2639]: E0527 03:02:22.204376 2639 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 03:02:22.251616 kubelet[2639]: I0527 03:02:22.251587 2639 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 03:02:22.251616 kubelet[2639]: I0527 03:02:22.251607 2639 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 03:02:22.251616 kubelet[2639]: I0527 03:02:22.251630 2639 state_mem.go:36] "Initialized new in-memory state store" May 27 03:02:22.251820 kubelet[2639]: I0527 03:02:22.251802 2639 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 03:02:22.251891 kubelet[2639]: I0527 03:02:22.251818 2639 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 03:02:22.251891 kubelet[2639]: I0527 03:02:22.251859 2639 policy_none.go:49] "None policy: Start" May 27 03:02:22.251891 kubelet[2639]: I0527 03:02:22.251868 2639 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 03:02:22.251891 kubelet[2639]: I0527 03:02:22.251877 2639 state_mem.go:35] "Initializing new in-memory state store" May 27 03:02:22.251985 kubelet[2639]: I0527 03:02:22.251972 2639 state_mem.go:75] "Updated machine memory state" May 27 03:02:22.256433 kubelet[2639]: I0527 03:02:22.256412 2639 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 03:02:22.256614 kubelet[2639]: I0527 03:02:22.256601 2639 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 03:02:22.256663 kubelet[2639]: I0527 03:02:22.256619 2639 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 03:02:22.256868 kubelet[2639]: I0527 03:02:22.256847 2639 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 03:02:22.257791 kubelet[2639]: E0527 03:02:22.257689 2639 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 27 03:02:22.303133 kubelet[2639]: I0527 03:02:22.303085 2639 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 03:02:22.304217 kubelet[2639]: I0527 03:02:22.304193 2639 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 03:02:22.305336 kubelet[2639]: I0527 03:02:22.304694 2639 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 03:02:22.310085 kubelet[2639]: E0527 03:02:22.310050 2639 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 27 03:02:22.360698 kubelet[2639]: I0527 03:02:22.360648 2639 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:02:22.368691 kubelet[2639]: I0527 03:02:22.368653 2639 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 27 03:02:22.369108 kubelet[2639]: I0527 03:02:22.369090 2639 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 03:02:22.401718 kubelet[2639]: I0527 03:02:22.401641 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7eed27e75a64121672a3ac2791a71f55-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7eed27e75a64121672a3ac2791a71f55\") " pod="kube-system/kube-apiserver-localhost" May 27 03:02:22.401718 kubelet[2639]: I0527 03:02:22.401686 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7eed27e75a64121672a3ac2791a71f55-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7eed27e75a64121672a3ac2791a71f55\") " pod="kube-system/kube-apiserver-localhost" May 27 03:02:22.401718 kubelet[2639]: I0527 03:02:22.401710 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:02:22.402032 kubelet[2639]: I0527 03:02:22.401742 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:02:22.402032 kubelet[2639]: I0527 03:02:22.401761 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:02:22.402032 kubelet[2639]: I0527 03:02:22.401777 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " 
pod="kube-system/kube-controller-manager-localhost" May 27 03:02:22.402032 kubelet[2639]: I0527 03:02:22.401790 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 27 03:02:22.402032 kubelet[2639]: I0527 03:02:22.401806 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7eed27e75a64121672a3ac2791a71f55-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7eed27e75a64121672a3ac2791a71f55\") " pod="kube-system/kube-apiserver-localhost" May 27 03:02:22.402173 kubelet[2639]: I0527 03:02:22.401820 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:02:22.624907 sudo[2673]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 27 03:02:22.625188 sudo[2673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 27 03:02:23.078573 sudo[2673]: pam_unix(sudo:session): session closed for user root May 27 03:02:23.177113 kubelet[2639]: I0527 03:02:23.177071 2639 apiserver.go:52] "Watching apiserver" May 27 03:02:23.200840 kubelet[2639]: I0527 03:02:23.200806 2639 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 03:02:23.232498 kubelet[2639]: I0527 03:02:23.232201 2639 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 03:02:23.232498 kubelet[2639]: I0527 03:02:23.232216 2639 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 03:02:23.237516 kubelet[2639]: E0527 03:02:23.237459 2639 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 27 03:02:23.238871 kubelet[2639]: E0527 03:02:23.238665 2639 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 27 03:02:23.252988 kubelet[2639]: I0527 03:02:23.252938 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.252924184 podStartE2EDuration="1.252924184s" podCreationTimestamp="2025-05-27 03:02:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:02:23.251598372 +0000 UTC m=+1.137404522" watchObservedRunningTime="2025-05-27 03:02:23.252924184 +0000 UTC m=+1.138730334" May 27 03:02:23.272846 kubelet[2639]: I0527 03:02:23.271657 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.271641495 podStartE2EDuration="3.271641495s" podCreationTimestamp="2025-05-27 03:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-05-27 03:02:23.268155433 +0000 UTC m=+1.153961583" watchObservedRunningTime="2025-05-27 03:02:23.271641495 +0000 UTC m=+1.157447685" May 27 03:02:23.293492 kubelet[2639]: I0527 03:02:23.293435 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.2934164940000001 podStartE2EDuration="1.293416494s" podCreationTimestamp="2025-05-27 03:02:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:02:23.283203294 +0000 UTC m=+1.169009444" watchObservedRunningTime="2025-05-27 03:02:23.293416494 +0000 UTC m=+1.179222604" May 27 03:02:24.918569 sudo[1734]: pam_unix(sudo:session): session closed for user root May 27 03:02:24.920881 sshd[1733]: Connection closed by 10.0.0.1 port 37536 May 27 03:02:24.921448 sshd-session[1731]: pam_unix(sshd:session): session closed for user core May 27 03:02:24.925189 systemd[1]: sshd@6-10.0.0.118:22-10.0.0.1:37536.service: Deactivated successfully. May 27 03:02:24.927161 systemd[1]: session-7.scope: Deactivated successfully. May 27 03:02:24.927921 systemd[1]: session-7.scope: Consumed 6.567s CPU time, 265.8M memory peak. May 27 03:02:24.929331 systemd-logind[1516]: Session 7 logged out. Waiting for processes to exit. May 27 03:02:24.931335 systemd-logind[1516]: Removed session 7. May 27 03:02:27.610864 kubelet[2639]: I0527 03:02:27.610676 2639 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 03:02:27.611332 kubelet[2639]: I0527 03:02:27.611196 2639 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 03:02:27.611364 containerd[1530]: time="2025-05-27T03:02:27.611011321Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 03:02:28.369975 systemd[1]: Created slice kubepods-besteffort-pod61293304_2fc7_4304_8d02_6046bee10710.slice - libcontainer container kubepods-besteffort-pod61293304_2fc7_4304_8d02_6046bee10710.slice. May 27 03:02:28.383140 systemd[1]: Created slice kubepods-burstable-podf40b02fd_5261_4846_8b54_804951593ddf.slice - libcontainer container kubepods-burstable-podf40b02fd_5261_4846_8b54_804951593ddf.slice. 
May 27 03:02:28.437842 kubelet[2639]: I0527 03:02:28.437552 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-host-proc-sys-net\") pod \"cilium-mfwvd\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " pod="kube-system/cilium-mfwvd" May 27 03:02:28.437842 kubelet[2639]: I0527 03:02:28.437601 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk54c\" (UniqueName: \"kubernetes.io/projected/f40b02fd-5261-4846-8b54-804951593ddf-kube-api-access-mk54c\") pod \"cilium-mfwvd\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " pod="kube-system/cilium-mfwvd" May 27 03:02:28.437842 kubelet[2639]: I0527 03:02:28.437634 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61293304-2fc7-4304-8d02-6046bee10710-lib-modules\") pod \"kube-proxy-jrg9t\" (UID: \"61293304-2fc7-4304-8d02-6046bee10710\") " pod="kube-system/kube-proxy-jrg9t" May 27 03:02:28.437842 kubelet[2639]: I0527 03:02:28.437650 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-bpf-maps\") pod \"cilium-mfwvd\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " pod="kube-system/cilium-mfwvd" May 27 03:02:28.437842 kubelet[2639]: I0527 03:02:28.437675 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f40b02fd-5261-4846-8b54-804951593ddf-cilium-config-path\") pod \"cilium-mfwvd\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " pod="kube-system/cilium-mfwvd" May 27 03:02:28.438065 kubelet[2639]: I0527 03:02:28.437691 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/61293304-2fc7-4304-8d02-6046bee10710-kube-proxy\") pod \"kube-proxy-jrg9t\" (UID: \"61293304-2fc7-4304-8d02-6046bee10710\") " pod="kube-system/kube-proxy-jrg9t" May 27 03:02:28.438065 kubelet[2639]: I0527 03:02:28.437706 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-host-proc-sys-kernel\") pod \"cilium-mfwvd\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " pod="kube-system/cilium-mfwvd" May 27 03:02:28.438065 kubelet[2639]: I0527 03:02:28.437720 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-cilium-run\") pod \"cilium-mfwvd\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " pod="kube-system/cilium-mfwvd" May 27 03:02:28.438065 kubelet[2639]: I0527 03:02:28.437736 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f40b02fd-5261-4846-8b54-804951593ddf-clustermesh-secrets\") pod \"cilium-mfwvd\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " pod="kube-system/cilium-mfwvd" May 27 03:02:28.438065 kubelet[2639]: I0527 03:02:28.437778 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/61293304-2fc7-4304-8d02-6046bee10710-xtables-lock\") pod \"kube-proxy-jrg9t\" (UID: \"61293304-2fc7-4304-8d02-6046bee10710\") " pod="kube-system/kube-proxy-jrg9t" May 27 03:02:28.438208 kubelet[2639]: I0527 03:02:28.437848 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mflv4\" (UniqueName: \"kubernetes.io/projected/61293304-2fc7-4304-8d02-6046bee10710-kube-api-access-mflv4\") pod \"kube-proxy-jrg9t\" (UID: \"61293304-2fc7-4304-8d02-6046bee10710\") " pod="kube-system/kube-proxy-jrg9t" May 27 03:02:28.438208 kubelet[2639]: I0527 03:02:28.437870 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-cilium-cgroup\") pod \"cilium-mfwvd\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " pod="kube-system/cilium-mfwvd" May 27 03:02:28.438208 kubelet[2639]: I0527 03:02:28.437888 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-xtables-lock\") pod \"cilium-mfwvd\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " pod="kube-system/cilium-mfwvd" May 27 03:02:28.438208 kubelet[2639]: I0527 03:02:28.437903 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-cni-path\") pod \"cilium-mfwvd\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " pod="kube-system/cilium-mfwvd" May 27 03:02:28.438208 kubelet[2639]: I0527 03:02:28.437918 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-hostproc\") pod \"cilium-mfwvd\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " pod="kube-system/cilium-mfwvd" May 27 03:02:28.438208 kubelet[2639]: I0527 03:02:28.437933 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-etc-cni-netd\") pod \"cilium-mfwvd\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " pod="kube-system/cilium-mfwvd" May 27 03:02:28.438471 kubelet[2639]: I0527 03:02:28.437949 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f40b02fd-5261-4846-8b54-804951593ddf-hubble-tls\") pod \"cilium-mfwvd\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " pod="kube-system/cilium-mfwvd" May 27 03:02:28.438471 kubelet[2639]: I0527 03:02:28.437966 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-lib-modules\") pod \"cilium-mfwvd\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " pod="kube-system/cilium-mfwvd" May 27 03:02:28.681485 containerd[1530]: time="2025-05-27T03:02:28.681370576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jrg9t,Uid:61293304-2fc7-4304-8d02-6046bee10710,Namespace:kube-system,Attempt:0,}" May 27 03:02:28.686341 containerd[1530]: time="2025-05-27T03:02:28.686180716Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-mfwvd,Uid:f40b02fd-5261-4846-8b54-804951593ddf,Namespace:kube-system,Attempt:0,}" May 27 03:02:28.706123 containerd[1530]: time="2025-05-27T03:02:28.706065177Z" level=info msg="connecting to shim 12ca853a8719b1266764d02cc15faa7ee948b05048510635b7c85ba626f5ee69" address="unix:///run/containerd/s/299d14d18c26ab5dcdc03b839d384c06016f5feccebec5e3e97e642587e21fda" namespace=k8s.io protocol=ttrpc version=3 May 27 03:02:28.718867 containerd[1530]: time="2025-05-27T03:02:28.718528582Z" level=info msg="connecting to shim ecc2483fc9233a35cb79dbf4503ca7a55f9202e33b271cca36e63600d40976ad" address="unix:///run/containerd/s/d502eb24ca6ea0ff260efaf50dc1ae2beefc26b85c93e96398caca70a98abf65" namespace=k8s.io protocol=ttrpc version=3 May 27 03:02:28.733464 kubelet[2639]: I0527 03:02:28.733412 2639 status_manager.go:890] "Failed to get status for pod" podUID="c8eba849-6659-4982-bd1f-1c8e39974902" pod="kube-system/cilium-operator-6c4d7847fc-zrzgc" err="pods \"cilium-operator-6c4d7847fc-zrzgc\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" May 27 03:02:28.736918 systemd[1]: Created slice kubepods-besteffort-podc8eba849_6659_4982_bd1f_1c8e39974902.slice - libcontainer container kubepods-besteffort-podc8eba849_6659_4982_bd1f_1c8e39974902.slice. May 27 03:02:28.739596 kubelet[2639]: I0527 03:02:28.739480 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c8eba849-6659-4982-bd1f-1c8e39974902-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-zrzgc\" (UID: \"c8eba849-6659-4982-bd1f-1c8e39974902\") " pod="kube-system/cilium-operator-6c4d7847fc-zrzgc" May 27 03:02:28.740322 kubelet[2639]: I0527 03:02:28.740138 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-686v2\" (UniqueName: \"kubernetes.io/projected/c8eba849-6659-4982-bd1f-1c8e39974902-kube-api-access-686v2\") pod \"cilium-operator-6c4d7847fc-zrzgc\" (UID: \"c8eba849-6659-4982-bd1f-1c8e39974902\") " pod="kube-system/cilium-operator-6c4d7847fc-zrzgc" May 27 03:02:28.765092 systemd[1]: Started cri-containerd-12ca853a8719b1266764d02cc15faa7ee948b05048510635b7c85ba626f5ee69.scope - libcontainer container 12ca853a8719b1266764d02cc15faa7ee948b05048510635b7c85ba626f5ee69. May 27 03:02:28.772100 systemd[1]: Started cri-containerd-ecc2483fc9233a35cb79dbf4503ca7a55f9202e33b271cca36e63600d40976ad.scope - libcontainer container ecc2483fc9233a35cb79dbf4503ca7a55f9202e33b271cca36e63600d40976ad. 
May 27 03:02:28.801797 containerd[1530]: time="2025-05-27T03:02:28.801754352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jrg9t,Uid:61293304-2fc7-4304-8d02-6046bee10710,Namespace:kube-system,Attempt:0,} returns sandbox id \"12ca853a8719b1266764d02cc15faa7ee948b05048510635b7c85ba626f5ee69\"" May 27 03:02:28.804744 containerd[1530]: time="2025-05-27T03:02:28.804690922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mfwvd,Uid:f40b02fd-5261-4846-8b54-804951593ddf,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecc2483fc9233a35cb79dbf4503ca7a55f9202e33b271cca36e63600d40976ad\"" May 27 03:02:28.811940 containerd[1530]: time="2025-05-27T03:02:28.811894461Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 27 03:02:28.813607 containerd[1530]: time="2025-05-27T03:02:28.813555960Z" level=info msg="CreateContainer within sandbox \"12ca853a8719b1266764d02cc15faa7ee948b05048510635b7c85ba626f5ee69\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 03:02:28.828264 containerd[1530]: time="2025-05-27T03:02:28.827383636Z" level=info msg="Container 43bfaa1faddca367bcb79e1f3254c5858c81e263403383e40cd4f06641e5c63e: CDI devices from CRI Config.CDIDevices: []" May 27 03:02:28.834554 containerd[1530]: time="2025-05-27T03:02:28.834494252Z" level=info msg="CreateContainer within sandbox \"12ca853a8719b1266764d02cc15faa7ee948b05048510635b7c85ba626f5ee69\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"43bfaa1faddca367bcb79e1f3254c5858c81e263403383e40cd4f06641e5c63e\"" May 27 03:02:28.835153 containerd[1530]: time="2025-05-27T03:02:28.835122138Z" level=info msg="StartContainer for \"43bfaa1faddca367bcb79e1f3254c5858c81e263403383e40cd4f06641e5c63e\"" May 27 03:02:28.837370 containerd[1530]: time="2025-05-27T03:02:28.836701123Z" level=info msg="connecting to shim 43bfaa1faddca367bcb79e1f3254c5858c81e263403383e40cd4f06641e5c63e" address="unix:///run/containerd/s/299d14d18c26ab5dcdc03b839d384c06016f5feccebec5e3e97e642587e21fda" protocol=ttrpc version=3 May 27 03:02:28.864069 systemd[1]: Started cri-containerd-43bfaa1faddca367bcb79e1f3254c5858c81e263403383e40cd4f06641e5c63e.scope - libcontainer container 43bfaa1faddca367bcb79e1f3254c5858c81e263403383e40cd4f06641e5c63e. May 27 03:02:28.902379 containerd[1530]: time="2025-05-27T03:02:28.901322748Z" level=info msg="StartContainer for \"43bfaa1faddca367bcb79e1f3254c5858c81e263403383e40cd4f06641e5c63e\" returns successfully" May 27 03:02:29.044068 containerd[1530]: time="2025-05-27T03:02:29.043956528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zrzgc,Uid:c8eba849-6659-4982-bd1f-1c8e39974902,Namespace:kube-system,Attempt:0,}" May 27 03:02:29.060017 containerd[1530]: time="2025-05-27T03:02:29.059472534Z" level=info msg="connecting to shim 17d15a453f4eeda53ebf6742fac748d3220335a7fbc9a7c0c89c46d53e864771" address="unix:///run/containerd/s/5f3d517c9d374fc51926c9d084db59d274419b8ce56b5f3f27c58a67a3f310d5" namespace=k8s.io protocol=ttrpc version=3 May 27 03:02:29.086275 systemd[1]: Started cri-containerd-17d15a453f4eeda53ebf6742fac748d3220335a7fbc9a7c0c89c46d53e864771.scope - libcontainer container 17d15a453f4eeda53ebf6742fac748d3220335a7fbc9a7c0c89c46d53e864771. 
May 27 03:02:29.120457 containerd[1530]: time="2025-05-27T03:02:29.120396750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zrzgc,Uid:c8eba849-6659-4982-bd1f-1c8e39974902,Namespace:kube-system,Attempt:0,} returns sandbox id \"17d15a453f4eeda53ebf6742fac748d3220335a7fbc9a7c0c89c46d53e864771\"" May 27 03:02:29.256460 kubelet[2639]: I0527 03:02:29.256398 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jrg9t" podStartSLOduration=1.256379218 podStartE2EDuration="1.256379218s" podCreationTimestamp="2025-05-27 03:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:02:29.256159032 +0000 UTC m=+7.141965182" watchObservedRunningTime="2025-05-27 03:02:29.256379218 +0000 UTC m=+7.142185368" May 27 03:02:34.065485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1106938159.mount: Deactivated successfully. May 27 03:02:35.301124 containerd[1530]: time="2025-05-27T03:02:35.300980872Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:02:35.302217 containerd[1530]: time="2025-05-27T03:02:35.301966518Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 27 03:02:35.305878 containerd[1530]: time="2025-05-27T03:02:35.305842465Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:02:35.307264 containerd[1530]: time="2025-05-27T03:02:35.307231503Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.495292124s" May 27 03:02:35.307365 containerd[1530]: time="2025-05-27T03:02:35.307349891Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 27 03:02:35.314819 containerd[1530]: time="2025-05-27T03:02:35.314779479Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 27 03:02:35.328135 containerd[1530]: time="2025-05-27T03:02:35.328084643Z" level=info msg="CreateContainer within sandbox \"ecc2483fc9233a35cb79dbf4503ca7a55f9202e33b271cca36e63600d40976ad\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 27 03:02:35.333941 containerd[1530]: time="2025-05-27T03:02:35.333901624Z" level=info msg="Container 973c23206af9dffb5f7350b110dee60145957f83dc9d176d96aa119c8d43a825: CDI devices from CRI Config.CDIDevices: []" May 27 03:02:35.337985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2195222.mount: Deactivated successfully. 
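The ImageCreate/Pulled entries above show containerd fetching the Cilium image by digest into the k8s.io namespace (about 157 MB in roughly 6.5s). A hedged sketch of the same pull done directly with containerd's Go client rather than through the CRI image service:

```go
package main

import (
	"context"
	"fmt"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace, as the shim options
	// (namespace=k8s.io) in the log indicate.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull by the exact digest recorded in the log.
	ref := "quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	size, _ := img.Size(ctx)
	fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
}
```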
May 27 03:02:35.341401 containerd[1530]: time="2025-05-27T03:02:35.341367153Z" level=info msg="CreateContainer within sandbox \"ecc2483fc9233a35cb79dbf4503ca7a55f9202e33b271cca36e63600d40976ad\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"973c23206af9dffb5f7350b110dee60145957f83dc9d176d96aa119c8d43a825\"" May 27 03:02:35.343835 containerd[1530]: time="2025-05-27T03:02:35.343800631Z" level=info msg="StartContainer for \"973c23206af9dffb5f7350b110dee60145957f83dc9d176d96aa119c8d43a825\"" May 27 03:02:35.344737 containerd[1530]: time="2025-05-27T03:02:35.344709753Z" level=info msg="connecting to shim 973c23206af9dffb5f7350b110dee60145957f83dc9d176d96aa119c8d43a825" address="unix:///run/containerd/s/d502eb24ca6ea0ff260efaf50dc1ae2beefc26b85c93e96398caca70a98abf65" protocol=ttrpc version=3 May 27 03:02:35.408854 systemd[1]: Started cri-containerd-973c23206af9dffb5f7350b110dee60145957f83dc9d176d96aa119c8d43a825.scope - libcontainer container 973c23206af9dffb5f7350b110dee60145957f83dc9d176d96aa119c8d43a825. May 27 03:02:35.438008 containerd[1530]: time="2025-05-27T03:02:35.437965646Z" level=info msg="StartContainer for \"973c23206af9dffb5f7350b110dee60145957f83dc9d176d96aa119c8d43a825\" returns successfully" May 27 03:02:35.486707 systemd[1]: cri-containerd-973c23206af9dffb5f7350b110dee60145957f83dc9d176d96aa119c8d43a825.scope: Deactivated successfully. May 27 03:02:35.516648 containerd[1530]: time="2025-05-27T03:02:35.516339909Z" level=info msg="received exit event container_id:\"973c23206af9dffb5f7350b110dee60145957f83dc9d176d96aa119c8d43a825\" id:\"973c23206af9dffb5f7350b110dee60145957f83dc9d176d96aa119c8d43a825\" pid:3055 exited_at:{seconds:1748314955 nanos:505996648}" May 27 03:02:35.535501 containerd[1530]: time="2025-05-27T03:02:35.535441123Z" level=info msg="TaskExit event in podsandbox handler container_id:\"973c23206af9dffb5f7350b110dee60145957f83dc9d176d96aa119c8d43a825\" id:\"973c23206af9dffb5f7350b110dee60145957f83dc9d176d96aa119c8d43a825\" pid:3055 exited_at:{seconds:1748314955 nanos:505996648}" May 27 03:02:35.568651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-973c23206af9dffb5f7350b110dee60145957f83dc9d176d96aa119c8d43a825-rootfs.mount: Deactivated successfully. 
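The TaskExit events above carry the mount-cgroup init container's exit time as raw seconds/nanos (exited_at:{seconds:1748314955 nanos:505996648}). A quick check that this matches the surrounding journal timestamps:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the TaskExit event for mount-cgroup above.
	exitedAt := time.Unix(1748314955, 505996648).UTC()
	fmt.Println(exitedAt.Format(time.RFC3339Nano)) // 2025-05-27T03:02:35.505996648Z, matching the log line
}
```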
May 27 03:02:36.265388 containerd[1530]: time="2025-05-27T03:02:36.265245113Z" level=info msg="CreateContainer within sandbox \"ecc2483fc9233a35cb79dbf4503ca7a55f9202e33b271cca36e63600d40976ad\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 27 03:02:36.286370 containerd[1530]: time="2025-05-27T03:02:36.279462370Z" level=info msg="Container 674e2d6390e6ed3de89bea5fa3dabc7986bbbc81c04cc21931bfc8314adb74b0: CDI devices from CRI Config.CDIDevices: []" May 27 03:02:36.296351 containerd[1530]: time="2025-05-27T03:02:36.295966179Z" level=info msg="CreateContainer within sandbox \"ecc2483fc9233a35cb79dbf4503ca7a55f9202e33b271cca36e63600d40976ad\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"674e2d6390e6ed3de89bea5fa3dabc7986bbbc81c04cc21931bfc8314adb74b0\"" May 27 03:02:36.296781 containerd[1530]: time="2025-05-27T03:02:36.296757725Z" level=info msg="StartContainer for \"674e2d6390e6ed3de89bea5fa3dabc7986bbbc81c04cc21931bfc8314adb74b0\"" May 27 03:02:36.299661 containerd[1530]: time="2025-05-27T03:02:36.299623389Z" level=info msg="connecting to shim 674e2d6390e6ed3de89bea5fa3dabc7986bbbc81c04cc21931bfc8314adb74b0" address="unix:///run/containerd/s/d502eb24ca6ea0ff260efaf50dc1ae2beefc26b85c93e96398caca70a98abf65" protocol=ttrpc version=3 May 27 03:02:36.320119 systemd[1]: Started cri-containerd-674e2d6390e6ed3de89bea5fa3dabc7986bbbc81c04cc21931bfc8314adb74b0.scope - libcontainer container 674e2d6390e6ed3de89bea5fa3dabc7986bbbc81c04cc21931bfc8314adb74b0. May 27 03:02:36.385838 containerd[1530]: time="2025-05-27T03:02:36.384741112Z" level=info msg="StartContainer for \"674e2d6390e6ed3de89bea5fa3dabc7986bbbc81c04cc21931bfc8314adb74b0\" returns successfully" May 27 03:02:36.386190 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 27 03:02:36.386402 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 27 03:02:36.387305 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 27 03:02:36.389427 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 03:02:36.391924 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 27 03:02:36.398551 systemd[1]: cri-containerd-674e2d6390e6ed3de89bea5fa3dabc7986bbbc81c04cc21931bfc8314adb74b0.scope: Deactivated successfully. May 27 03:02:36.409258 containerd[1530]: time="2025-05-27T03:02:36.409219015Z" level=info msg="TaskExit event in podsandbox handler container_id:\"674e2d6390e6ed3de89bea5fa3dabc7986bbbc81c04cc21931bfc8314adb74b0\" id:\"674e2d6390e6ed3de89bea5fa3dabc7986bbbc81c04cc21931bfc8314adb74b0\" pid:3105 exited_at:{seconds:1748314956 nanos:408220958}" May 27 03:02:36.412175 containerd[1530]: time="2025-05-27T03:02:36.412116656Z" level=info msg="received exit event container_id:\"674e2d6390e6ed3de89bea5fa3dabc7986bbbc81c04cc21931bfc8314adb74b0\" id:\"674e2d6390e6ed3de89bea5fa3dabc7986bbbc81c04cc21931bfc8314adb74b0\" pid:3105 exited_at:{seconds:1748314956 nanos:408220958}" May 27 03:02:36.428435 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 03:02:36.441884 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-674e2d6390e6ed3de89bea5fa3dabc7986bbbc81c04cc21931bfc8314adb74b0-rootfs.mount: Deactivated successfully. 
May 27 03:02:36.745498 containerd[1530]: time="2025-05-27T03:02:36.745443340Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:02:36.746061 containerd[1530]: time="2025-05-27T03:02:36.745993597Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 27 03:02:36.746623 containerd[1530]: time="2025-05-27T03:02:36.746586156Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:02:36.748345 containerd[1530]: time="2025-05-27T03:02:36.748309004Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.433450119s" May 27 03:02:36.748387 containerd[1530]: time="2025-05-27T03:02:36.748345984Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 27 03:02:36.750596 containerd[1530]: time="2025-05-27T03:02:36.750550611Z" level=info msg="CreateContainer within sandbox \"17d15a453f4eeda53ebf6742fac748d3220335a7fbc9a7c0c89c46d53e864771\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 27 03:02:36.756856 containerd[1530]: time="2025-05-27T03:02:36.756342410Z" level=info msg="Container b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e: CDI devices from CRI Config.CDIDevices: []" May 27 03:02:36.763191 containerd[1530]: time="2025-05-27T03:02:36.763148596Z" level=info msg="CreateContainer within sandbox \"17d15a453f4eeda53ebf6742fac748d3220335a7fbc9a7c0c89c46d53e864771\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e\"" May 27 03:02:36.764072 containerd[1530]: time="2025-05-27T03:02:36.764035074Z" level=info msg="StartContainer for \"b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e\"" May 27 03:02:36.765218 containerd[1530]: time="2025-05-27T03:02:36.765180571Z" level=info msg="connecting to shim b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e" address="unix:///run/containerd/s/5f3d517c9d374fc51926c9d084db59d274419b8ce56b5f3f27c58a67a3f310d5" protocol=ttrpc version=3 May 27 03:02:36.795039 systemd[1]: Started cri-containerd-b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e.scope - libcontainer container b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e. 
May 27 03:02:36.822616 containerd[1530]: time="2025-05-27T03:02:36.822582126Z" level=info msg="StartContainer for \"b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e\" returns successfully" May 27 03:02:37.272706 containerd[1530]: time="2025-05-27T03:02:37.272662427Z" level=info msg="CreateContainer within sandbox \"ecc2483fc9233a35cb79dbf4503ca7a55f9202e33b271cca36e63600d40976ad\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 27 03:02:37.282413 kubelet[2639]: I0527 03:02:37.282230 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-zrzgc" podStartSLOduration=1.656283457 podStartE2EDuration="9.282209527s" podCreationTimestamp="2025-05-27 03:02:28 +0000 UTC" firstStartedPulling="2025-05-27 03:02:29.122983097 +0000 UTC m=+7.008789207" lastFinishedPulling="2025-05-27 03:02:36.748909127 +0000 UTC m=+14.634715277" observedRunningTime="2025-05-27 03:02:37.279152904 +0000 UTC m=+15.164959054" watchObservedRunningTime="2025-05-27 03:02:37.282209527 +0000 UTC m=+15.168015677" May 27 03:02:37.284393 containerd[1530]: time="2025-05-27T03:02:37.284344765Z" level=info msg="Container 076c231b40e0a7d2f816828ee8a0b360bb29b87c618d60ce449c501ae31dcbf3: CDI devices from CRI Config.CDIDevices: []" May 27 03:02:37.298122 containerd[1530]: time="2025-05-27T03:02:37.298070816Z" level=info msg="CreateContainer within sandbox \"ecc2483fc9233a35cb79dbf4503ca7a55f9202e33b271cca36e63600d40976ad\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"076c231b40e0a7d2f816828ee8a0b360bb29b87c618d60ce449c501ae31dcbf3\"" May 27 03:02:37.302789 containerd[1530]: time="2025-05-27T03:02:37.298816393Z" level=info msg="StartContainer for \"076c231b40e0a7d2f816828ee8a0b360bb29b87c618d60ce449c501ae31dcbf3\"" May 27 03:02:37.302789 containerd[1530]: time="2025-05-27T03:02:37.300271568Z" level=info msg="connecting to shim 076c231b40e0a7d2f816828ee8a0b360bb29b87c618d60ce449c501ae31dcbf3" address="unix:///run/containerd/s/d502eb24ca6ea0ff260efaf50dc1ae2beefc26b85c93e96398caca70a98abf65" protocol=ttrpc version=3 May 27 03:02:37.357326 systemd[1]: Started cri-containerd-076c231b40e0a7d2f816828ee8a0b360bb29b87c618d60ce449c501ae31dcbf3.scope - libcontainer container 076c231b40e0a7d2f816828ee8a0b360bb29b87c618d60ce449c501ae31dcbf3. May 27 03:02:37.410327 containerd[1530]: time="2025-05-27T03:02:37.410138644Z" level=info msg="StartContainer for \"076c231b40e0a7d2f816828ee8a0b360bb29b87c618d60ce449c501ae31dcbf3\" returns successfully" May 27 03:02:37.423369 systemd[1]: cri-containerd-076c231b40e0a7d2f816828ee8a0b360bb29b87c618d60ce449c501ae31dcbf3.scope: Deactivated successfully. 
May 27 03:02:37.426150 containerd[1530]: time="2025-05-27T03:02:37.426106347Z" level=info msg="received exit event container_id:\"076c231b40e0a7d2f816828ee8a0b360bb29b87c618d60ce449c501ae31dcbf3\" id:\"076c231b40e0a7d2f816828ee8a0b360bb29b87c618d60ce449c501ae31dcbf3\" pid:3199 exited_at:{seconds:1748314957 nanos:425911128}" May 27 03:02:37.426561 containerd[1530]: time="2025-05-27T03:02:37.426504628Z" level=info msg="TaskExit event in podsandbox handler container_id:\"076c231b40e0a7d2f816828ee8a0b360bb29b87c618d60ce449c501ae31dcbf3\" id:\"076c231b40e0a7d2f816828ee8a0b360bb29b87c618d60ce449c501ae31dcbf3\" pid:3199 exited_at:{seconds:1748314957 nanos:425911128}" May 27 03:02:37.466429 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-076c231b40e0a7d2f816828ee8a0b360bb29b87c618d60ce449c501ae31dcbf3-rootfs.mount: Deactivated successfully. May 27 03:02:38.277360 containerd[1530]: time="2025-05-27T03:02:38.277318922Z" level=info msg="CreateContainer within sandbox \"ecc2483fc9233a35cb79dbf4503ca7a55f9202e33b271cca36e63600d40976ad\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 27 03:02:38.290579 containerd[1530]: time="2025-05-27T03:02:38.290534698Z" level=info msg="Container 0cc3e3ef15dbec425c333ad5c83e65e1c266ed480391a0316a6a0b7e53acb727: CDI devices from CRI Config.CDIDevices: []" May 27 03:02:38.292638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2156543676.mount: Deactivated successfully. May 27 03:02:38.298800 containerd[1530]: time="2025-05-27T03:02:38.298754349Z" level=info msg="CreateContainer within sandbox \"ecc2483fc9233a35cb79dbf4503ca7a55f9202e33b271cca36e63600d40976ad\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0cc3e3ef15dbec425c333ad5c83e65e1c266ed480391a0316a6a0b7e53acb727\"" May 27 03:02:38.300116 containerd[1530]: time="2025-05-27T03:02:38.300073774Z" level=info msg="StartContainer for \"0cc3e3ef15dbec425c333ad5c83e65e1c266ed480391a0316a6a0b7e53acb727\"" May 27 03:02:38.301195 containerd[1530]: time="2025-05-27T03:02:38.301168652Z" level=info msg="connecting to shim 0cc3e3ef15dbec425c333ad5c83e65e1c266ed480391a0316a6a0b7e53acb727" address="unix:///run/containerd/s/d502eb24ca6ea0ff260efaf50dc1ae2beefc26b85c93e96398caca70a98abf65" protocol=ttrpc version=3 May 27 03:02:38.325034 systemd[1]: Started cri-containerd-0cc3e3ef15dbec425c333ad5c83e65e1c266ed480391a0316a6a0b7e53acb727.scope - libcontainer container 0cc3e3ef15dbec425c333ad5c83e65e1c266ed480391a0316a6a0b7e53acb727. May 27 03:02:38.352788 systemd[1]: cri-containerd-0cc3e3ef15dbec425c333ad5c83e65e1c266ed480391a0316a6a0b7e53acb727.scope: Deactivated successfully. 
May 27 03:02:38.354516 containerd[1530]: time="2025-05-27T03:02:38.354450035Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0cc3e3ef15dbec425c333ad5c83e65e1c266ed480391a0316a6a0b7e53acb727\" id:\"0cc3e3ef15dbec425c333ad5c83e65e1c266ed480391a0316a6a0b7e53acb727\" pid:3239 exited_at:{seconds:1748314958 nanos:353376807}" May 27 03:02:38.355391 containerd[1530]: time="2025-05-27T03:02:38.355360627Z" level=info msg="received exit event container_id:\"0cc3e3ef15dbec425c333ad5c83e65e1c266ed480391a0316a6a0b7e53acb727\" id:\"0cc3e3ef15dbec425c333ad5c83e65e1c266ed480391a0316a6a0b7e53acb727\" pid:3239 exited_at:{seconds:1748314958 nanos:353376807}" May 27 03:02:38.357517 containerd[1530]: time="2025-05-27T03:02:38.357485913Z" level=info msg="StartContainer for \"0cc3e3ef15dbec425c333ad5c83e65e1c266ed480391a0316a6a0b7e53acb727\" returns successfully" May 27 03:02:38.379805 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0cc3e3ef15dbec425c333ad5c83e65e1c266ed480391a0316a6a0b7e53acb727-rootfs.mount: Deactivated successfully. May 27 03:02:38.387159 containerd[1530]: time="2025-05-27T03:02:38.359262834Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf40b02fd_5261_4846_8b54_804951593ddf.slice/cri-containerd-0cc3e3ef15dbec425c333ad5c83e65e1c266ed480391a0316a6a0b7e53acb727.scope/memory.events\": no such file or directory" May 27 03:02:38.517092 update_engine[1522]: I20250527 03:02:38.517007 1522 update_attempter.cc:509] Updating boot flags... May 27 03:02:39.292756 containerd[1530]: time="2025-05-27T03:02:39.292714507Z" level=info msg="CreateContainer within sandbox \"ecc2483fc9233a35cb79dbf4503ca7a55f9202e33b271cca36e63600d40976ad\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 27 03:02:39.302555 containerd[1530]: time="2025-05-27T03:02:39.302520940Z" level=info msg="Container 8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c: CDI devices from CRI Config.CDIDevices: []" May 27 03:02:39.309904 containerd[1530]: time="2025-05-27T03:02:39.309854074Z" level=info msg="CreateContainer within sandbox \"ecc2483fc9233a35cb79dbf4503ca7a55f9202e33b271cca36e63600d40976ad\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c\"" May 27 03:02:39.310748 containerd[1530]: time="2025-05-27T03:02:39.310361299Z" level=info msg="StartContainer for \"8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c\"" May 27 03:02:39.311474 containerd[1530]: time="2025-05-27T03:02:39.311410245Z" level=info msg="connecting to shim 8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c" address="unix:///run/containerd/s/d502eb24ca6ea0ff260efaf50dc1ae2beefc26b85c93e96398caca70a98abf65" protocol=ttrpc version=3 May 27 03:02:39.340000 systemd[1]: Started cri-containerd-8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c.scope - libcontainer container 8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c. 
May 27 03:02:39.377179 containerd[1530]: time="2025-05-27T03:02:39.377141858Z" level=info msg="StartContainer for \"8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c\" returns successfully" May 27 03:02:39.506764 containerd[1530]: time="2025-05-27T03:02:39.506717446Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c\" id:\"e315d0293f77a31a133bd3335454fd2c721b22779e96086db8142629cf9c7a93\" pid:3324 exited_at:{seconds:1748314959 nanos:506401866}" May 27 03:02:39.577855 kubelet[2639]: I0527 03:02:39.576366 2639 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 03:02:39.618806 systemd[1]: Created slice kubepods-burstable-pod3e06d9fa_124f_4001_be8b_030194d27142.slice - libcontainer container kubepods-burstable-pod3e06d9fa_124f_4001_be8b_030194d27142.slice. May 27 03:02:39.622203 kubelet[2639]: I0527 03:02:39.621233 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e06d9fa-124f-4001-be8b-030194d27142-config-volume\") pod \"coredns-668d6bf9bc-2bfs2\" (UID: \"3e06d9fa-124f-4001-be8b-030194d27142\") " pod="kube-system/coredns-668d6bf9bc-2bfs2" May 27 03:02:39.622203 kubelet[2639]: I0527 03:02:39.621272 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p69p\" (UniqueName: \"kubernetes.io/projected/3e06d9fa-124f-4001-be8b-030194d27142-kube-api-access-5p69p\") pod \"coredns-668d6bf9bc-2bfs2\" (UID: \"3e06d9fa-124f-4001-be8b-030194d27142\") " pod="kube-system/coredns-668d6bf9bc-2bfs2" May 27 03:02:39.626633 systemd[1]: Created slice kubepods-burstable-pod9f6d2871_6ac6_4a43_8d97_a5965b6da10c.slice - libcontainer container kubepods-burstable-pod9f6d2871_6ac6_4a43_8d97_a5965b6da10c.slice. 
May 27 03:02:39.722363 kubelet[2639]: I0527 03:02:39.722313 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f6d2871-6ac6-4a43-8d97-a5965b6da10c-config-volume\") pod \"coredns-668d6bf9bc-gpdbx\" (UID: \"9f6d2871-6ac6-4a43-8d97-a5965b6da10c\") " pod="kube-system/coredns-668d6bf9bc-gpdbx" May 27 03:02:39.722494 kubelet[2639]: I0527 03:02:39.722378 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj26m\" (UniqueName: \"kubernetes.io/projected/9f6d2871-6ac6-4a43-8d97-a5965b6da10c-kube-api-access-qj26m\") pod \"coredns-668d6bf9bc-gpdbx\" (UID: \"9f6d2871-6ac6-4a43-8d97-a5965b6da10c\") " pod="kube-system/coredns-668d6bf9bc-gpdbx" May 27 03:02:39.923772 containerd[1530]: time="2025-05-27T03:02:39.923437316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2bfs2,Uid:3e06d9fa-124f-4001-be8b-030194d27142,Namespace:kube-system,Attempt:0,}" May 27 03:02:39.930538 containerd[1530]: time="2025-05-27T03:02:39.930509255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gpdbx,Uid:9f6d2871-6ac6-4a43-8d97-a5965b6da10c,Namespace:kube-system,Attempt:0,}" May 27 03:02:40.316422 kubelet[2639]: I0527 03:02:40.316133 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mfwvd" podStartSLOduration=5.81286978 podStartE2EDuration="12.316115546s" podCreationTimestamp="2025-05-27 03:02:28 +0000 UTC" firstStartedPulling="2025-05-27 03:02:28.811342403 +0000 UTC m=+6.697148553" lastFinishedPulling="2025-05-27 03:02:35.314588169 +0000 UTC m=+13.200394319" observedRunningTime="2025-05-27 03:02:40.315345105 +0000 UTC m=+18.201151255" watchObservedRunningTime="2025-05-27 03:02:40.316115546 +0000 UTC m=+18.201921656" May 27 03:02:41.601409 systemd-networkd[1440]: cilium_host: Link UP May 27 03:02:41.601926 systemd-networkd[1440]: cilium_net: Link UP May 27 03:02:41.602086 systemd-networkd[1440]: cilium_host: Gained carrier May 27 03:02:41.602212 systemd-networkd[1440]: cilium_net: Gained carrier May 27 03:02:41.683894 systemd-networkd[1440]: cilium_vxlan: Link UP May 27 03:02:41.683900 systemd-networkd[1440]: cilium_vxlan: Gained carrier May 27 03:02:41.940778 systemd-networkd[1440]: cilium_host: Gained IPv6LL May 27 03:02:41.997861 kernel: NET: Registered PF_ALG protocol family May 27 03:02:42.236124 systemd-networkd[1440]: cilium_net: Gained IPv6LL May 27 03:02:42.555556 systemd-networkd[1440]: lxc_health: Link UP May 27 03:02:42.557968 systemd-networkd[1440]: lxc_health: Gained carrier May 27 03:02:43.058343 kernel: eth0: renamed from tmp81261 May 27 03:02:43.057692 systemd-networkd[1440]: lxcf0624657c2ea: Link UP May 27 03:02:43.060873 systemd-networkd[1440]: lxcf0624657c2ea: Gained carrier May 27 03:02:43.062882 systemd-networkd[1440]: lxc813542bfa73f: Link UP May 27 03:02:43.071800 systemd-networkd[1440]: lxc813542bfa73f: Gained carrier May 27 03:02:43.071895 kernel: eth0: renamed from tmp45b77 May 27 03:02:43.708035 systemd-networkd[1440]: cilium_vxlan: Gained IPv6LL May 27 03:02:44.603971 systemd-networkd[1440]: lxc_health: Gained IPv6LL May 27 03:02:44.731987 systemd-networkd[1440]: lxcf0624657c2ea: Gained IPv6LL May 27 03:02:44.732265 systemd-networkd[1440]: lxc813542bfa73f: Gained IPv6LL May 27 03:02:46.539316 containerd[1530]: time="2025-05-27T03:02:46.538959915Z" level=info msg="connecting to shim 
812614520cf20381e6cf31161af6591037a978eaf010f41de4bf37c2babbd679" address="unix:///run/containerd/s/97e4a0fd4e2c67b76a8665b5596e7d42801069e3b11696790b24d252c76c08a3" namespace=k8s.io protocol=ttrpc version=3 May 27 03:02:46.539753 containerd[1530]: time="2025-05-27T03:02:46.539725732Z" level=info msg="connecting to shim 45b77fe0aed6b53639156708aa42dfff2fa795014f46bedef4af40d521f93d87" address="unix:///run/containerd/s/13156eb8b8cb66f0056ded0b965726f5ed293e7971672aa4f0e76b95d92745f6" namespace=k8s.io protocol=ttrpc version=3 May 27 03:02:46.564981 systemd[1]: Started cri-containerd-812614520cf20381e6cf31161af6591037a978eaf010f41de4bf37c2babbd679.scope - libcontainer container 812614520cf20381e6cf31161af6591037a978eaf010f41de4bf37c2babbd679. May 27 03:02:46.568648 systemd[1]: Started cri-containerd-45b77fe0aed6b53639156708aa42dfff2fa795014f46bedef4af40d521f93d87.scope - libcontainer container 45b77fe0aed6b53639156708aa42dfff2fa795014f46bedef4af40d521f93d87. May 27 03:02:46.578697 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:02:46.580651 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:02:46.598487 containerd[1530]: time="2025-05-27T03:02:46.598441681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gpdbx,Uid:9f6d2871-6ac6-4a43-8d97-a5965b6da10c,Namespace:kube-system,Attempt:0,} returns sandbox id \"812614520cf20381e6cf31161af6591037a978eaf010f41de4bf37c2babbd679\"" May 27 03:02:46.609270 containerd[1530]: time="2025-05-27T03:02:46.609231450Z" level=info msg="CreateContainer within sandbox \"812614520cf20381e6cf31161af6591037a978eaf010f41de4bf37c2babbd679\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:02:46.620373 containerd[1530]: time="2025-05-27T03:02:46.620338868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2bfs2,Uid:3e06d9fa-124f-4001-be8b-030194d27142,Namespace:kube-system,Attempt:0,} returns sandbox id \"45b77fe0aed6b53639156708aa42dfff2fa795014f46bedef4af40d521f93d87\"" May 27 03:02:46.621764 containerd[1530]: time="2025-05-27T03:02:46.621732062Z" level=info msg="Container 4a4ae05987574b60edf467f2fe87b1593bf84d9892af99ab213fe3551eb3c88d: CDI devices from CRI Config.CDIDevices: []" May 27 03:02:46.623653 containerd[1530]: time="2025-05-27T03:02:46.623284860Z" level=info msg="CreateContainer within sandbox \"45b77fe0aed6b53639156708aa42dfff2fa795014f46bedef4af40d521f93d87\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:02:46.627237 containerd[1530]: time="2025-05-27T03:02:46.627199286Z" level=info msg="CreateContainer within sandbox \"812614520cf20381e6cf31161af6591037a978eaf010f41de4bf37c2babbd679\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4a4ae05987574b60edf467f2fe87b1593bf84d9892af99ab213fe3551eb3c88d\"" May 27 03:02:46.627905 containerd[1530]: time="2025-05-27T03:02:46.627879318Z" level=info msg="StartContainer for \"4a4ae05987574b60edf467f2fe87b1593bf84d9892af99ab213fe3551eb3c88d\"" May 27 03:02:46.628858 containerd[1530]: time="2025-05-27T03:02:46.628818024Z" level=info msg="connecting to shim 4a4ae05987574b60edf467f2fe87b1593bf84d9892af99ab213fe3551eb3c88d" address="unix:///run/containerd/s/97e4a0fd4e2c67b76a8665b5596e7d42801069e3b11696790b24d252c76c08a3" protocol=ttrpc version=3 May 27 03:02:46.630974 containerd[1530]: time="2025-05-27T03:02:46.630455926Z" level=info 
msg="Container 692ec90c4fe236eaf879af54c4caf379ecade172fdd07a046fd73704c6967bd0: CDI devices from CRI Config.CDIDevices: []" May 27 03:02:46.635941 containerd[1530]: time="2025-05-27T03:02:46.635909627Z" level=info msg="CreateContainer within sandbox \"45b77fe0aed6b53639156708aa42dfff2fa795014f46bedef4af40d521f93d87\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"692ec90c4fe236eaf879af54c4caf379ecade172fdd07a046fd73704c6967bd0\"" May 27 03:02:46.636523 containerd[1530]: time="2025-05-27T03:02:46.636498034Z" level=info msg="StartContainer for \"692ec90c4fe236eaf879af54c4caf379ecade172fdd07a046fd73704c6967bd0\"" May 27 03:02:46.637475 containerd[1530]: time="2025-05-27T03:02:46.637420814Z" level=info msg="connecting to shim 692ec90c4fe236eaf879af54c4caf379ecade172fdd07a046fd73704c6967bd0" address="unix:///run/containerd/s/13156eb8b8cb66f0056ded0b965726f5ed293e7971672aa4f0e76b95d92745f6" protocol=ttrpc version=3 May 27 03:02:46.654974 systemd[1]: Started cri-containerd-4a4ae05987574b60edf467f2fe87b1593bf84d9892af99ab213fe3551eb3c88d.scope - libcontainer container 4a4ae05987574b60edf467f2fe87b1593bf84d9892af99ab213fe3551eb3c88d. May 27 03:02:46.658584 systemd[1]: Started cri-containerd-692ec90c4fe236eaf879af54c4caf379ecade172fdd07a046fd73704c6967bd0.scope - libcontainer container 692ec90c4fe236eaf879af54c4caf379ecade172fdd07a046fd73704c6967bd0. May 27 03:02:46.693900 containerd[1530]: time="2025-05-27T03:02:46.693857280Z" level=info msg="StartContainer for \"692ec90c4fe236eaf879af54c4caf379ecade172fdd07a046fd73704c6967bd0\" returns successfully" May 27 03:02:46.701707 containerd[1530]: time="2025-05-27T03:02:46.698315059Z" level=info msg="StartContainer for \"4a4ae05987574b60edf467f2fe87b1593bf84d9892af99ab213fe3551eb3c88d\" returns successfully" May 27 03:02:47.322649 kubelet[2639]: I0527 03:02:47.322591 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gpdbx" podStartSLOduration=19.322575277 podStartE2EDuration="19.322575277s" podCreationTimestamp="2025-05-27 03:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:02:47.322195856 +0000 UTC m=+25.208002006" watchObservedRunningTime="2025-05-27 03:02:47.322575277 +0000 UTC m=+25.208381427" May 27 03:02:47.337165 kubelet[2639]: I0527 03:02:47.337091 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2bfs2" podStartSLOduration=19.337071957 podStartE2EDuration="19.337071957s" podCreationTimestamp="2025-05-27 03:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:02:47.337041589 +0000 UTC m=+25.222847779" watchObservedRunningTime="2025-05-27 03:02:47.337071957 +0000 UTC m=+25.222878067" May 27 03:02:47.524631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2754125227.mount: Deactivated successfully. May 27 03:02:52.675331 systemd[1]: Started sshd@7-10.0.0.118:22-10.0.0.1:52756.service - OpenSSH per-connection server daemon (10.0.0.1:52756). May 27 03:02:52.729787 sshd[3976]: Accepted publickey for core from 10.0.0.1 port 52756 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 03:02:52.730951 sshd-session[3976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:02:52.734650 systemd-logind[1516]: New session 8 of user core. 
May 27 03:02:52.743990 systemd[1]: Started session-8.scope - Session 8 of User core. May 27 03:02:52.875367 sshd[3978]: Connection closed by 10.0.0.1 port 52756 May 27 03:02:52.875952 sshd-session[3976]: pam_unix(sshd:session): session closed for user core May 27 03:02:52.880111 systemd[1]: sshd@7-10.0.0.118:22-10.0.0.1:52756.service: Deactivated successfully. May 27 03:02:52.881760 systemd[1]: session-8.scope: Deactivated successfully. May 27 03:02:52.882910 systemd-logind[1516]: Session 8 logged out. Waiting for processes to exit. May 27 03:02:52.884011 systemd-logind[1516]: Removed session 8. May 27 03:02:57.805259 kubelet[2639]: I0527 03:02:57.805206 2639 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:02:57.889295 systemd[1]: Started sshd@8-10.0.0.118:22-10.0.0.1:52760.service - OpenSSH per-connection server daemon (10.0.0.1:52760). May 27 03:02:57.947195 sshd[3992]: Accepted publickey for core from 10.0.0.1 port 52760 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 03:02:57.948520 sshd-session[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:02:57.955934 systemd-logind[1516]: New session 9 of user core. May 27 03:02:57.969997 systemd[1]: Started session-9.scope - Session 9 of User core. May 27 03:02:58.094276 sshd[3994]: Connection closed by 10.0.0.1 port 52760 May 27 03:02:58.094955 sshd-session[3992]: pam_unix(sshd:session): session closed for user core May 27 03:02:58.098665 systemd[1]: sshd@8-10.0.0.118:22-10.0.0.1:52760.service: Deactivated successfully. May 27 03:02:58.100383 systemd[1]: session-9.scope: Deactivated successfully. May 27 03:02:58.101677 systemd-logind[1516]: Session 9 logged out. Waiting for processes to exit. May 27 03:02:58.103078 systemd-logind[1516]: Removed session 9. May 27 03:03:03.111038 systemd[1]: Started sshd@9-10.0.0.118:22-10.0.0.1:55256.service - OpenSSH per-connection server daemon (10.0.0.1:55256). May 27 03:03:03.173467 sshd[4012]: Accepted publickey for core from 10.0.0.1 port 55256 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 03:03:03.175028 sshd-session[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:03:03.180770 systemd-logind[1516]: New session 10 of user core. May 27 03:03:03.186972 systemd[1]: Started session-10.scope - Session 10 of User core. May 27 03:03:03.301874 sshd[4014]: Connection closed by 10.0.0.1 port 55256 May 27 03:03:03.302556 sshd-session[4012]: pam_unix(sshd:session): session closed for user core May 27 03:03:03.311985 systemd[1]: sshd@9-10.0.0.118:22-10.0.0.1:55256.service: Deactivated successfully. May 27 03:03:03.314287 systemd[1]: session-10.scope: Deactivated successfully. May 27 03:03:03.315765 systemd-logind[1516]: Session 10 logged out. Waiting for processes to exit. May 27 03:03:03.318545 systemd[1]: Started sshd@10-10.0.0.118:22-10.0.0.1:55258.service - OpenSSH per-connection server daemon (10.0.0.1:55258). May 27 03:03:03.319296 systemd-logind[1516]: Removed session 10. May 27 03:03:03.383285 sshd[4028]: Accepted publickey for core from 10.0.0.1 port 55258 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 03:03:03.384371 sshd-session[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:03:03.388205 systemd-logind[1516]: New session 11 of user core. May 27 03:03:03.394988 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 27 03:03:03.551853 sshd[4030]: Connection closed by 10.0.0.1 port 55258 May 27 03:03:03.552753 sshd-session[4028]: pam_unix(sshd:session): session closed for user core May 27 03:03:03.569981 systemd[1]: sshd@10-10.0.0.118:22-10.0.0.1:55258.service: Deactivated successfully. May 27 03:03:03.575666 systemd[1]: session-11.scope: Deactivated successfully. May 27 03:03:03.577401 systemd-logind[1516]: Session 11 logged out. Waiting for processes to exit. May 27 03:03:03.581323 systemd[1]: Started sshd@11-10.0.0.118:22-10.0.0.1:55264.service - OpenSSH per-connection server daemon (10.0.0.1:55264). May 27 03:03:03.581795 systemd-logind[1516]: Removed session 11. May 27 03:03:03.646617 sshd[4041]: Accepted publickey for core from 10.0.0.1 port 55264 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 03:03:03.647612 sshd-session[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:03:03.651862 systemd-logind[1516]: New session 12 of user core. May 27 03:03:03.665980 systemd[1]: Started session-12.scope - Session 12 of User core. May 27 03:03:03.782917 sshd[4043]: Connection closed by 10.0.0.1 port 55264 May 27 03:03:03.783233 sshd-session[4041]: pam_unix(sshd:session): session closed for user core May 27 03:03:03.786793 systemd[1]: sshd@11-10.0.0.118:22-10.0.0.1:55264.service: Deactivated successfully. May 27 03:03:03.788893 systemd[1]: session-12.scope: Deactivated successfully. May 27 03:03:03.789742 systemd-logind[1516]: Session 12 logged out. Waiting for processes to exit. May 27 03:03:03.791141 systemd-logind[1516]: Removed session 12. May 27 03:03:08.813486 systemd[1]: Started sshd@12-10.0.0.118:22-10.0.0.1:55266.service - OpenSSH per-connection server daemon (10.0.0.1:55266). May 27 03:03:08.879654 sshd[4056]: Accepted publickey for core from 10.0.0.1 port 55266 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 03:03:08.880969 sshd-session[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:03:08.885381 systemd-logind[1516]: New session 13 of user core. May 27 03:03:08.895999 systemd[1]: Started session-13.scope - Session 13 of User core. May 27 03:03:09.008758 sshd[4058]: Connection closed by 10.0.0.1 port 55266 May 27 03:03:09.009118 sshd-session[4056]: pam_unix(sshd:session): session closed for user core May 27 03:03:09.012619 systemd[1]: sshd@12-10.0.0.118:22-10.0.0.1:55266.service: Deactivated successfully. May 27 03:03:09.017177 systemd[1]: session-13.scope: Deactivated successfully. May 27 03:03:09.020850 systemd-logind[1516]: Session 13 logged out. Waiting for processes to exit. May 27 03:03:09.021896 systemd-logind[1516]: Removed session 13. May 27 03:03:14.021425 systemd[1]: Started sshd@13-10.0.0.118:22-10.0.0.1:55274.service - OpenSSH per-connection server daemon (10.0.0.1:55274). May 27 03:03:14.073912 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 55274 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 03:03:14.078530 sshd-session[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:03:14.082538 systemd-logind[1516]: New session 14 of user core. May 27 03:03:14.088989 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 27 03:03:14.209434 sshd[4073]: Connection closed by 10.0.0.1 port 55274 May 27 03:03:14.210152 sshd-session[4071]: pam_unix(sshd:session): session closed for user core May 27 03:03:14.219892 systemd[1]: sshd@13-10.0.0.118:22-10.0.0.1:55274.service: Deactivated successfully. May 27 03:03:14.221222 systemd[1]: session-14.scope: Deactivated successfully. May 27 03:03:14.221912 systemd-logind[1516]: Session 14 logged out. Waiting for processes to exit. May 27 03:03:14.223978 systemd[1]: Started sshd@14-10.0.0.118:22-10.0.0.1:55290.service - OpenSSH per-connection server daemon (10.0.0.1:55290). May 27 03:03:14.226481 systemd-logind[1516]: Removed session 14. May 27 03:03:14.282999 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 55290 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 03:03:14.284689 sshd-session[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:03:14.289604 systemd-logind[1516]: New session 15 of user core. May 27 03:03:14.299968 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 03:03:14.556954 sshd[4089]: Connection closed by 10.0.0.1 port 55290 May 27 03:03:14.557592 sshd-session[4087]: pam_unix(sshd:session): session closed for user core May 27 03:03:14.566909 systemd[1]: sshd@14-10.0.0.118:22-10.0.0.1:55290.service: Deactivated successfully. May 27 03:03:14.568475 systemd[1]: session-15.scope: Deactivated successfully. May 27 03:03:14.572207 systemd-logind[1516]: Session 15 logged out. Waiting for processes to exit. May 27 03:03:14.573921 systemd[1]: Started sshd@15-10.0.0.118:22-10.0.0.1:55304.service - OpenSSH per-connection server daemon (10.0.0.1:55304). May 27 03:03:14.575447 systemd-logind[1516]: Removed session 15. May 27 03:03:14.630407 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 55304 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 03:03:14.631889 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:03:14.635894 systemd-logind[1516]: New session 16 of user core. May 27 03:03:14.647990 systemd[1]: Started session-16.scope - Session 16 of User core. May 27 03:03:15.423254 sshd[4103]: Connection closed by 10.0.0.1 port 55304 May 27 03:03:15.423771 sshd-session[4101]: pam_unix(sshd:session): session closed for user core May 27 03:03:15.431975 systemd[1]: sshd@15-10.0.0.118:22-10.0.0.1:55304.service: Deactivated successfully. May 27 03:03:15.435114 systemd[1]: session-16.scope: Deactivated successfully. May 27 03:03:15.436651 systemd-logind[1516]: Session 16 logged out. Waiting for processes to exit. May 27 03:03:15.441622 systemd[1]: Started sshd@16-10.0.0.118:22-10.0.0.1:55314.service - OpenSSH per-connection server daemon (10.0.0.1:55314). May 27 03:03:15.443669 systemd-logind[1516]: Removed session 16. May 27 03:03:15.503089 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 55314 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 03:03:15.504345 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:03:15.508807 systemd-logind[1516]: New session 17 of user core. May 27 03:03:15.515995 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 27 03:03:15.751263 sshd[4124]: Connection closed by 10.0.0.1 port 55314 May 27 03:03:15.751665 sshd-session[4122]: pam_unix(sshd:session): session closed for user core May 27 03:03:15.769479 systemd[1]: sshd@16-10.0.0.118:22-10.0.0.1:55314.service: Deactivated successfully. May 27 03:03:15.772334 systemd[1]: session-17.scope: Deactivated successfully. May 27 03:03:15.773289 systemd-logind[1516]: Session 17 logged out. Waiting for processes to exit. May 27 03:03:15.775964 systemd[1]: Started sshd@17-10.0.0.118:22-10.0.0.1:55320.service - OpenSSH per-connection server daemon (10.0.0.1:55320). May 27 03:03:15.778106 systemd-logind[1516]: Removed session 17. May 27 03:03:15.847346 sshd[4136]: Accepted publickey for core from 10.0.0.1 port 55320 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 03:03:15.848662 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:03:15.853708 systemd-logind[1516]: New session 18 of user core. May 27 03:03:15.865017 systemd[1]: Started session-18.scope - Session 18 of User core. May 27 03:03:15.978865 sshd[4138]: Connection closed by 10.0.0.1 port 55320 May 27 03:03:15.979053 sshd-session[4136]: pam_unix(sshd:session): session closed for user core May 27 03:03:15.982635 systemd[1]: sshd@17-10.0.0.118:22-10.0.0.1:55320.service: Deactivated successfully. May 27 03:03:15.984572 systemd[1]: session-18.scope: Deactivated successfully. May 27 03:03:15.987688 systemd-logind[1516]: Session 18 logged out. Waiting for processes to exit. May 27 03:03:15.989286 systemd-logind[1516]: Removed session 18. May 27 03:03:21.000370 systemd[1]: Started sshd@18-10.0.0.118:22-10.0.0.1:55326.service - OpenSSH per-connection server daemon (10.0.0.1:55326). May 27 03:03:21.043504 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 55326 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 03:03:21.044800 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:03:21.049370 systemd-logind[1516]: New session 19 of user core. May 27 03:03:21.060016 systemd[1]: Started session-19.scope - Session 19 of User core. May 27 03:03:21.186041 sshd[4158]: Connection closed by 10.0.0.1 port 55326 May 27 03:03:21.186440 sshd-session[4156]: pam_unix(sshd:session): session closed for user core May 27 03:03:21.190055 systemd-logind[1516]: Session 19 logged out. Waiting for processes to exit. May 27 03:03:21.190401 systemd[1]: sshd@18-10.0.0.118:22-10.0.0.1:55326.service: Deactivated successfully. May 27 03:03:21.192131 systemd[1]: session-19.scope: Deactivated successfully. May 27 03:03:21.193974 systemd-logind[1516]: Removed session 19. May 27 03:03:26.201172 systemd[1]: Started sshd@19-10.0.0.118:22-10.0.0.1:47600.service - OpenSSH per-connection server daemon (10.0.0.1:47600). May 27 03:03:26.261835 sshd[4173]: Accepted publickey for core from 10.0.0.1 port 47600 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 03:03:26.263065 sshd-session[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:03:26.267604 systemd-logind[1516]: New session 20 of user core. May 27 03:03:26.275016 systemd[1]: Started session-20.scope - Session 20 of User core. 
May 27 03:03:26.386412 sshd[4175]: Connection closed by 10.0.0.1 port 47600 May 27 03:03:26.386733 sshd-session[4173]: pam_unix(sshd:session): session closed for user core May 27 03:03:26.390072 systemd[1]: sshd@19-10.0.0.118:22-10.0.0.1:47600.service: Deactivated successfully. May 27 03:03:26.392329 systemd[1]: session-20.scope: Deactivated successfully. May 27 03:03:26.393065 systemd-logind[1516]: Session 20 logged out. Waiting for processes to exit. May 27 03:03:26.394435 systemd-logind[1516]: Removed session 20. May 27 03:03:31.400791 systemd[1]: Started sshd@20-10.0.0.118:22-10.0.0.1:47606.service - OpenSSH per-connection server daemon (10.0.0.1:47606). May 27 03:03:31.456471 sshd[4192]: Accepted publickey for core from 10.0.0.1 port 47606 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 03:03:31.458010 sshd-session[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:03:31.461767 systemd-logind[1516]: New session 21 of user core. May 27 03:03:31.474309 systemd[1]: Started session-21.scope - Session 21 of User core. May 27 03:03:31.595045 sshd[4194]: Connection closed by 10.0.0.1 port 47606 May 27 03:03:31.595385 sshd-session[4192]: pam_unix(sshd:session): session closed for user core May 27 03:03:31.608179 systemd[1]: sshd@20-10.0.0.118:22-10.0.0.1:47606.service: Deactivated successfully. May 27 03:03:31.610851 systemd[1]: session-21.scope: Deactivated successfully. May 27 03:03:31.611912 systemd-logind[1516]: Session 21 logged out. Waiting for processes to exit. May 27 03:03:31.615613 systemd[1]: Started sshd@21-10.0.0.118:22-10.0.0.1:47622.service - OpenSSH per-connection server daemon (10.0.0.1:47622). May 27 03:03:31.617000 systemd-logind[1516]: Removed session 21. May 27 03:03:31.678939 sshd[4207]: Accepted publickey for core from 10.0.0.1 port 47622 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 03:03:31.679718 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:03:31.684216 systemd-logind[1516]: New session 22 of user core. May 27 03:03:31.711035 systemd[1]: Started session-22.scope - Session 22 of User core. May 27 03:03:33.904648 containerd[1530]: time="2025-05-27T03:03:33.904545767Z" level=info msg="StopContainer for \"b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e\" with timeout 30 (s)" May 27 03:03:33.906062 containerd[1530]: time="2025-05-27T03:03:33.905461050Z" level=info msg="Stop container \"b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e\" with signal terminated" May 27 03:03:33.917718 systemd[1]: cri-containerd-b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e.scope: Deactivated successfully. 
May 27 03:03:33.919209 containerd[1530]: time="2025-05-27T03:03:33.919167584Z" level=info msg="received exit event container_id:\"b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e\" id:\"b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e\" pid:3163 exited_at:{seconds:1748315013 nanos:918933213}" May 27 03:03:33.919361 containerd[1530]: time="2025-05-27T03:03:33.919250828Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e\" id:\"b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e\" pid:3163 exited_at:{seconds:1748315013 nanos:918933213}" May 27 03:03:33.939199 containerd[1530]: time="2025-05-27T03:03:33.939145056Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 03:03:33.940302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e-rootfs.mount: Deactivated successfully. May 27 03:03:33.945498 containerd[1530]: time="2025-05-27T03:03:33.945445157Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c\" id:\"818a3bf0d11a98de19d429a3a2684d506a5544334ead9741eac2b443d1c627ee\" pid:4238 exited_at:{seconds:1748315013 nanos:945186344}" May 27 03:03:33.947319 containerd[1530]: time="2025-05-27T03:03:33.947268164Z" level=info msg="StopContainer for \"8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c\" with timeout 2 (s)" May 27 03:03:33.947603 containerd[1530]: time="2025-05-27T03:03:33.947582299Z" level=info msg="Stop container \"8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c\" with signal terminated" May 27 03:03:33.952141 containerd[1530]: time="2025-05-27T03:03:33.952101314Z" level=info msg="StopContainer for \"b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e\" returns successfully" May 27 03:03:33.954352 systemd-networkd[1440]: lxc_health: Link DOWN May 27 03:03:33.954691 systemd-networkd[1440]: lxc_health: Lost carrier May 27 03:03:33.959657 containerd[1530]: time="2025-05-27T03:03:33.959610632Z" level=info msg="StopPodSandbox for \"17d15a453f4eeda53ebf6742fac748d3220335a7fbc9a7c0c89c46d53e864771\"" May 27 03:03:33.968214 containerd[1530]: time="2025-05-27T03:03:33.967949430Z" level=info msg="Container to stop \"b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 03:03:33.970961 systemd[1]: cri-containerd-8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c.scope: Deactivated successfully. May 27 03:03:33.971343 systemd[1]: cri-containerd-8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c.scope: Consumed 6.344s CPU time, 121.9M memory peak, 156K read from disk, 12.9M written to disk. 
May 27 03:03:33.972716 containerd[1530]: time="2025-05-27T03:03:33.972572490Z" level=info msg="received exit event container_id:\"8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c\" id:\"8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c\" pid:3294 exited_at:{seconds:1748315013 nanos:972335959}" May 27 03:03:33.972716 containerd[1530]: time="2025-05-27T03:03:33.972654974Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c\" id:\"8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c\" pid:3294 exited_at:{seconds:1748315013 nanos:972335959}" May 27 03:03:33.975268 systemd[1]: cri-containerd-17d15a453f4eeda53ebf6742fac748d3220335a7fbc9a7c0c89c46d53e864771.scope: Deactivated successfully. May 27 03:03:33.977380 containerd[1530]: time="2025-05-27T03:03:33.977325677Z" level=info msg="TaskExit event in podsandbox handler container_id:\"17d15a453f4eeda53ebf6742fac748d3220335a7fbc9a7c0c89c46d53e864771\" id:\"17d15a453f4eeda53ebf6742fac748d3220335a7fbc9a7c0c89c46d53e864771\" pid:2878 exit_status:137 exited_at:{seconds:1748315013 nanos:977067944}" May 27 03:03:33.995108 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c-rootfs.mount: Deactivated successfully. May 27 03:03:34.002058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17d15a453f4eeda53ebf6742fac748d3220335a7fbc9a7c0c89c46d53e864771-rootfs.mount: Deactivated successfully. May 27 03:03:34.015876 containerd[1530]: time="2025-05-27T03:03:34.015820256Z" level=info msg="shim disconnected" id=17d15a453f4eeda53ebf6742fac748d3220335a7fbc9a7c0c89c46d53e864771 namespace=k8s.io May 27 03:03:34.016177 containerd[1530]: time="2025-05-27T03:03:34.015886579Z" level=error msg="failed sending message on channel" error="write unix /run/containerd/s/5f3d517c9d374fc51926c9d084db59d274419b8ce56b5f3f27c58a67a3f310d5->@: write: broken pipe" runtime=io.containerd.runc.v2 May 27 03:03:34.022456 containerd[1530]: time="2025-05-27T03:03:34.015866978Z" level=warning msg="cleaning up after shim disconnected" id=17d15a453f4eeda53ebf6742fac748d3220335a7fbc9a7c0c89c46d53e864771 namespace=k8s.io May 27 03:03:34.022456 containerd[1530]: time="2025-05-27T03:03:34.022450885Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 03:03:34.022573 containerd[1530]: time="2025-05-27T03:03:34.022404483Z" level=info msg="StopContainer for \"8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c\" returns successfully" May 27 03:03:34.023190 containerd[1530]: time="2025-05-27T03:03:34.023158478Z" level=info msg="StopPodSandbox for \"ecc2483fc9233a35cb79dbf4503ca7a55f9202e33b271cca36e63600d40976ad\"" May 27 03:03:34.023267 containerd[1530]: time="2025-05-27T03:03:34.023246122Z" level=info msg="Container to stop \"8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 03:03:34.023267 containerd[1530]: time="2025-05-27T03:03:34.023263563Z" level=info msg="Container to stop \"076c231b40e0a7d2f816828ee8a0b360bb29b87c618d60ce449c501ae31dcbf3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 03:03:34.023326 containerd[1530]: time="2025-05-27T03:03:34.023273923Z" level=info msg="Container to stop \"0cc3e3ef15dbec425c333ad5c83e65e1c266ed480391a0316a6a0b7e53acb727\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" 
May 27 03:03:34.023326 containerd[1530]: time="2025-05-27T03:03:34.023290364Z" level=info msg="Container to stop \"973c23206af9dffb5f7350b110dee60145957f83dc9d176d96aa119c8d43a825\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 03:03:34.023326 containerd[1530]: time="2025-05-27T03:03:34.023298884Z" level=info msg="Container to stop \"674e2d6390e6ed3de89bea5fa3dabc7986bbbc81c04cc21931bfc8314adb74b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 03:03:34.031283 systemd[1]: cri-containerd-ecc2483fc9233a35cb79dbf4503ca7a55f9202e33b271cca36e63600d40976ad.scope: Deactivated successfully. May 27 03:03:34.046645 containerd[1530]: time="2025-05-27T03:03:34.046568049Z" level=error msg="Failed to handle event container_id:\"17d15a453f4eeda53ebf6742fac748d3220335a7fbc9a7c0c89c46d53e864771\" id:\"17d15a453f4eeda53ebf6742fac748d3220335a7fbc9a7c0c89c46d53e864771\" pid:2878 exit_status:137 exited_at:{seconds:1748315013 nanos:977067944} for 17d15a453f4eeda53ebf6742fac748d3220335a7fbc9a7c0c89c46d53e864771" error="failed to handle container TaskExit event: failed to stop sandbox: failed to delete task: ttrpc: closed" May 27 03:03:34.046645 containerd[1530]: time="2025-05-27T03:03:34.046628491Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ecc2483fc9233a35cb79dbf4503ca7a55f9202e33b271cca36e63600d40976ad\" id:\"ecc2483fc9233a35cb79dbf4503ca7a55f9202e33b271cca36e63600d40976ad\" pid:2786 exit_status:137 exited_at:{seconds:1748315014 nanos:32956854}" May 27 03:03:34.046893 containerd[1530]: time="2025-05-27T03:03:34.046851582Z" level=info msg="received exit event sandbox_id:\"17d15a453f4eeda53ebf6742fac748d3220335a7fbc9a7c0c89c46d53e864771\" exit_status:137 exited_at:{seconds:1748315013 nanos:977067944}" May 27 03:03:34.052060 containerd[1530]: time="2025-05-27T03:03:34.052001942Z" level=info msg="TearDown network for sandbox \"17d15a453f4eeda53ebf6742fac748d3220335a7fbc9a7c0c89c46d53e864771\" successfully" May 27 03:03:34.052060 containerd[1530]: time="2025-05-27T03:03:34.052037143Z" level=info msg="StopPodSandbox for \"17d15a453f4eeda53ebf6742fac748d3220335a7fbc9a7c0c89c46d53e864771\" returns successfully" May 27 03:03:34.053328 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-17d15a453f4eeda53ebf6742fac748d3220335a7fbc9a7c0c89c46d53e864771-shm.mount: Deactivated successfully. May 27 03:03:34.059576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ecc2483fc9233a35cb79dbf4503ca7a55f9202e33b271cca36e63600d40976ad-rootfs.mount: Deactivated successfully. 
May 27 03:03:34.088796 containerd[1530]: time="2025-05-27T03:03:34.087628442Z" level=info msg="received exit event sandbox_id:\"ecc2483fc9233a35cb79dbf4503ca7a55f9202e33b271cca36e63600d40976ad\" exit_status:137 exited_at:{seconds:1748315014 nanos:32956854}" May 27 03:03:34.088796 containerd[1530]: time="2025-05-27T03:03:34.088468081Z" level=info msg="TearDown network for sandbox \"ecc2483fc9233a35cb79dbf4503ca7a55f9202e33b271cca36e63600d40976ad\" successfully" May 27 03:03:34.088796 containerd[1530]: time="2025-05-27T03:03:34.088487962Z" level=info msg="StopPodSandbox for \"ecc2483fc9233a35cb79dbf4503ca7a55f9202e33b271cca36e63600d40976ad\" returns successfully" May 27 03:03:34.088796 containerd[1530]: time="2025-05-27T03:03:34.088192308Z" level=info msg="shim disconnected" id=ecc2483fc9233a35cb79dbf4503ca7a55f9202e33b271cca36e63600d40976ad namespace=k8s.io May 27 03:03:34.088796 containerd[1530]: time="2025-05-27T03:03:34.088545205Z" level=warning msg="cleaning up after shim disconnected" id=ecc2483fc9233a35cb79dbf4503ca7a55f9202e33b271cca36e63600d40976ad namespace=k8s.io May 27 03:03:34.088796 containerd[1530]: time="2025-05-27T03:03:34.088575486Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 03:03:34.254262 kubelet[2639]: I0527 03:03:34.253382 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk54c\" (UniqueName: \"kubernetes.io/projected/f40b02fd-5261-4846-8b54-804951593ddf-kube-api-access-mk54c\") pod \"f40b02fd-5261-4846-8b54-804951593ddf\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " May 27 03:03:34.254262 kubelet[2639]: I0527 03:03:34.253428 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-cilium-run\") pod \"f40b02fd-5261-4846-8b54-804951593ddf\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " May 27 03:03:34.254262 kubelet[2639]: I0527 03:03:34.253450 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f40b02fd-5261-4846-8b54-804951593ddf-cilium-config-path\") pod \"f40b02fd-5261-4846-8b54-804951593ddf\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " May 27 03:03:34.254262 kubelet[2639]: I0527 03:03:34.253465 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-host-proc-sys-kernel\") pod \"f40b02fd-5261-4846-8b54-804951593ddf\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " May 27 03:03:34.254262 kubelet[2639]: I0527 03:03:34.253491 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-host-proc-sys-net\") pod \"f40b02fd-5261-4846-8b54-804951593ddf\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " May 27 03:03:34.254262 kubelet[2639]: I0527 03:03:34.253507 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-etc-cni-netd\") pod \"f40b02fd-5261-4846-8b54-804951593ddf\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " May 27 03:03:34.254657 kubelet[2639]: I0527 03:03:34.253526 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/f40b02fd-5261-4846-8b54-804951593ddf-clustermesh-secrets\") pod \"f40b02fd-5261-4846-8b54-804951593ddf\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " May 27 03:03:34.254657 kubelet[2639]: I0527 03:03:34.253541 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-lib-modules\") pod \"f40b02fd-5261-4846-8b54-804951593ddf\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " May 27 03:03:34.254657 kubelet[2639]: I0527 03:03:34.253561 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-xtables-lock\") pod \"f40b02fd-5261-4846-8b54-804951593ddf\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " May 27 03:03:34.254657 kubelet[2639]: I0527 03:03:34.253576 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-hostproc\") pod \"f40b02fd-5261-4846-8b54-804951593ddf\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " May 27 03:03:34.254657 kubelet[2639]: I0527 03:03:34.253591 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-cilium-cgroup\") pod \"f40b02fd-5261-4846-8b54-804951593ddf\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " May 27 03:03:34.254657 kubelet[2639]: I0527 03:03:34.253610 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c8eba849-6659-4982-bd1f-1c8e39974902-cilium-config-path\") pod \"c8eba849-6659-4982-bd1f-1c8e39974902\" (UID: \"c8eba849-6659-4982-bd1f-1c8e39974902\") " May 27 03:03:34.254778 kubelet[2639]: I0527 03:03:34.253636 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f40b02fd-5261-4846-8b54-804951593ddf-hubble-tls\") pod \"f40b02fd-5261-4846-8b54-804951593ddf\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " May 27 03:03:34.254778 kubelet[2639]: I0527 03:03:34.253651 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-bpf-maps\") pod \"f40b02fd-5261-4846-8b54-804951593ddf\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " May 27 03:03:34.254778 kubelet[2639]: I0527 03:03:34.253665 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-cni-path\") pod \"f40b02fd-5261-4846-8b54-804951593ddf\" (UID: \"f40b02fd-5261-4846-8b54-804951593ddf\") " May 27 03:03:34.254778 kubelet[2639]: I0527 03:03:34.253682 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-686v2\" (UniqueName: \"kubernetes.io/projected/c8eba849-6659-4982-bd1f-1c8e39974902-kube-api-access-686v2\") pod \"c8eba849-6659-4982-bd1f-1c8e39974902\" (UID: \"c8eba849-6659-4982-bd1f-1c8e39974902\") " May 27 03:03:34.259854 kubelet[2639]: I0527 03:03:34.259402 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-cilium-run" (OuterVolumeSpecName: "cilium-run") pod 
"f40b02fd-5261-4846-8b54-804951593ddf" (UID: "f40b02fd-5261-4846-8b54-804951593ddf"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:03:34.259854 kubelet[2639]: I0527 03:03:34.259702 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f40b02fd-5261-4846-8b54-804951593ddf" (UID: "f40b02fd-5261-4846-8b54-804951593ddf"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:03:34.259854 kubelet[2639]: I0527 03:03:34.259748 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-hostproc" (OuterVolumeSpecName: "hostproc") pod "f40b02fd-5261-4846-8b54-804951593ddf" (UID: "f40b02fd-5261-4846-8b54-804951593ddf"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:03:34.259854 kubelet[2639]: I0527 03:03:34.259764 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f40b02fd-5261-4846-8b54-804951593ddf" (UID: "f40b02fd-5261-4846-8b54-804951593ddf"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:03:34.260088 kubelet[2639]: I0527 03:03:34.260058 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f40b02fd-5261-4846-8b54-804951593ddf" (UID: "f40b02fd-5261-4846-8b54-804951593ddf"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:03:34.260470 kubelet[2639]: I0527 03:03:34.260420 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f40b02fd-5261-4846-8b54-804951593ddf" (UID: "f40b02fd-5261-4846-8b54-804951593ddf"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:03:34.260530 kubelet[2639]: I0527 03:03:34.260475 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f40b02fd-5261-4846-8b54-804951593ddf" (UID: "f40b02fd-5261-4846-8b54-804951593ddf"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:03:34.260530 kubelet[2639]: I0527 03:03:34.260491 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f40b02fd-5261-4846-8b54-804951593ddf" (UID: "f40b02fd-5261-4846-8b54-804951593ddf"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:03:34.261405 kubelet[2639]: I0527 03:03:34.261360 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8eba849-6659-4982-bd1f-1c8e39974902-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c8eba849-6659-4982-bd1f-1c8e39974902" (UID: "c8eba849-6659-4982-bd1f-1c8e39974902"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 03:03:34.261476 kubelet[2639]: I0527 03:03:34.261413 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-cni-path" (OuterVolumeSpecName: "cni-path") pod "f40b02fd-5261-4846-8b54-804951593ddf" (UID: "f40b02fd-5261-4846-8b54-804951593ddf"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:03:34.262229 kubelet[2639]: I0527 03:03:34.262140 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f40b02fd-5261-4846-8b54-804951593ddf" (UID: "f40b02fd-5261-4846-8b54-804951593ddf"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:03:34.262531 kubelet[2639]: I0527 03:03:34.262487 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8eba849-6659-4982-bd1f-1c8e39974902-kube-api-access-686v2" (OuterVolumeSpecName: "kube-api-access-686v2") pod "c8eba849-6659-4982-bd1f-1c8e39974902" (UID: "c8eba849-6659-4982-bd1f-1c8e39974902"). InnerVolumeSpecName "kube-api-access-686v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 03:03:34.263168 kubelet[2639]: I0527 03:03:34.263139 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f40b02fd-5261-4846-8b54-804951593ddf-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f40b02fd-5261-4846-8b54-804951593ddf" (UID: "f40b02fd-5261-4846-8b54-804951593ddf"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 03:03:34.263394 kubelet[2639]: I0527 03:03:34.263371 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f40b02fd-5261-4846-8b54-804951593ddf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f40b02fd-5261-4846-8b54-804951593ddf" (UID: "f40b02fd-5261-4846-8b54-804951593ddf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 03:03:34.263485 kubelet[2639]: I0527 03:03:34.263465 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f40b02fd-5261-4846-8b54-804951593ddf-kube-api-access-mk54c" (OuterVolumeSpecName: "kube-api-access-mk54c") pod "f40b02fd-5261-4846-8b54-804951593ddf" (UID: "f40b02fd-5261-4846-8b54-804951593ddf"). InnerVolumeSpecName "kube-api-access-mk54c". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 03:03:34.263549 kubelet[2639]: I0527 03:03:34.263536 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f40b02fd-5261-4846-8b54-804951593ddf-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f40b02fd-5261-4846-8b54-804951593ddf" (UID: "f40b02fd-5261-4846-8b54-804951593ddf"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 03:03:34.354836 kubelet[2639]: I0527 03:03:34.354793 2639 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 27 03:03:34.354836 kubelet[2639]: I0527 03:03:34.354841 2639 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-cni-path\") on node \"localhost\" DevicePath \"\"" May 27 03:03:34.354967 kubelet[2639]: I0527 03:03:34.354851 2639 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-686v2\" (UniqueName: \"kubernetes.io/projected/c8eba849-6659-4982-bd1f-1c8e39974902-kube-api-access-686v2\") on node \"localhost\" DevicePath \"\"" May 27 03:03:34.354967 kubelet[2639]: I0527 03:03:34.354862 2639 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mk54c\" (UniqueName: \"kubernetes.io/projected/f40b02fd-5261-4846-8b54-804951593ddf-kube-api-access-mk54c\") on node \"localhost\" DevicePath \"\"" May 27 03:03:34.354967 kubelet[2639]: I0527 03:03:34.354870 2639 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-cilium-run\") on node \"localhost\" DevicePath \"\"" May 27 03:03:34.354967 kubelet[2639]: I0527 03:03:34.354877 2639 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f40b02fd-5261-4846-8b54-804951593ddf-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 27 03:03:34.354967 kubelet[2639]: I0527 03:03:34.354886 2639 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 27 03:03:34.354967 kubelet[2639]: I0527 03:03:34.354893 2639 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 27 03:03:34.354967 kubelet[2639]: I0527 03:03:34.354901 2639 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 27 03:03:34.354967 kubelet[2639]: I0527 03:03:34.354908 2639 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f40b02fd-5261-4846-8b54-804951593ddf-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 27 03:03:34.355131 kubelet[2639]: I0527 03:03:34.354917 2639 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-lib-modules\") on node \"localhost\" DevicePath \"\"" May 27 03:03:34.355131 kubelet[2639]: I0527 03:03:34.354927 2639 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 27 03:03:34.355131 kubelet[2639]: I0527 03:03:34.354943 2639 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-hostproc\") on node 
\"localhost\" DevicePath \"\"" May 27 03:03:34.355131 kubelet[2639]: I0527 03:03:34.354952 2639 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f40b02fd-5261-4846-8b54-804951593ddf-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 27 03:03:34.355131 kubelet[2639]: I0527 03:03:34.354960 2639 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c8eba849-6659-4982-bd1f-1c8e39974902-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 27 03:03:34.355131 kubelet[2639]: I0527 03:03:34.354967 2639 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f40b02fd-5261-4846-8b54-804951593ddf-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 27 03:03:34.412789 kubelet[2639]: I0527 03:03:34.412761 2639 scope.go:117] "RemoveContainer" containerID="b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e" May 27 03:03:34.415318 containerd[1530]: time="2025-05-27T03:03:34.415274230Z" level=info msg="RemoveContainer for \"b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e\"" May 27 03:03:34.420335 systemd[1]: Removed slice kubepods-besteffort-podc8eba849_6659_4982_bd1f_1c8e39974902.slice - libcontainer container kubepods-besteffort-podc8eba849_6659_4982_bd1f_1c8e39974902.slice. May 27 03:03:34.425912 systemd[1]: Removed slice kubepods-burstable-podf40b02fd_5261_4846_8b54_804951593ddf.slice - libcontainer container kubepods-burstable-podf40b02fd_5261_4846_8b54_804951593ddf.slice. May 27 03:03:34.426016 systemd[1]: kubepods-burstable-podf40b02fd_5261_4846_8b54_804951593ddf.slice: Consumed 6.494s CPU time, 122.2M memory peak, 160K read from disk, 12.9M written to disk. 
May 27 03:03:34.428876 containerd[1530]: time="2025-05-27T03:03:34.428841782Z" level=info msg="RemoveContainer for \"b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e\" returns successfully" May 27 03:03:34.429178 kubelet[2639]: I0527 03:03:34.429100 2639 scope.go:117] "RemoveContainer" containerID="b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e" May 27 03:03:34.429370 containerd[1530]: time="2025-05-27T03:03:34.429331485Z" level=error msg="ContainerStatus for \"b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e\": not found" May 27 03:03:34.433810 kubelet[2639]: E0527 03:03:34.433506 2639 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e\": not found" containerID="b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e" May 27 03:03:34.439991 kubelet[2639]: I0527 03:03:34.439864 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e"} err="failed to get container status \"b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e\": rpc error: code = NotFound desc = an error occurred when try to find container \"b106e22a53e9a8d5822c52e74859511577c4eec37a0d65e3fe23720101748a6e\": not found" May 27 03:03:34.439991 kubelet[2639]: I0527 03:03:34.439983 2639 scope.go:117] "RemoveContainer" containerID="8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c" May 27 03:03:34.442590 containerd[1530]: time="2025-05-27T03:03:34.442385693Z" level=info msg="RemoveContainer for \"8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c\"" May 27 03:03:34.446053 containerd[1530]: time="2025-05-27T03:03:34.446015702Z" level=info msg="RemoveContainer for \"8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c\" returns successfully" May 27 03:03:34.446238 kubelet[2639]: I0527 03:03:34.446200 2639 scope.go:117] "RemoveContainer" containerID="0cc3e3ef15dbec425c333ad5c83e65e1c266ed480391a0316a6a0b7e53acb727" May 27 03:03:34.447581 containerd[1530]: time="2025-05-27T03:03:34.447556934Z" level=info msg="RemoveContainer for \"0cc3e3ef15dbec425c333ad5c83e65e1c266ed480391a0316a6a0b7e53acb727\"" May 27 03:03:34.450521 containerd[1530]: time="2025-05-27T03:03:34.450489311Z" level=info msg="RemoveContainer for \"0cc3e3ef15dbec425c333ad5c83e65e1c266ed480391a0316a6a0b7e53acb727\" returns successfully" May 27 03:03:34.450700 kubelet[2639]: I0527 03:03:34.450660 2639 scope.go:117] "RemoveContainer" containerID="076c231b40e0a7d2f816828ee8a0b360bb29b87c618d60ce449c501ae31dcbf3" May 27 03:03:34.452844 containerd[1530]: time="2025-05-27T03:03:34.452629371Z" level=info msg="RemoveContainer for \"076c231b40e0a7d2f816828ee8a0b360bb29b87c618d60ce449c501ae31dcbf3\"" May 27 03:03:34.455626 containerd[1530]: time="2025-05-27T03:03:34.455595269Z" level=info msg="RemoveContainer for \"076c231b40e0a7d2f816828ee8a0b360bb29b87c618d60ce449c501ae31dcbf3\" returns successfully" May 27 03:03:34.455818 kubelet[2639]: I0527 03:03:34.455767 2639 scope.go:117] "RemoveContainer" containerID="674e2d6390e6ed3de89bea5fa3dabc7986bbbc81c04cc21931bfc8314adb74b0" May 27 03:03:34.457088 containerd[1530]: 
time="2025-05-27T03:03:34.457063097Z" level=info msg="RemoveContainer for \"674e2d6390e6ed3de89bea5fa3dabc7986bbbc81c04cc21931bfc8314adb74b0\"" May 27 03:03:34.459537 containerd[1530]: time="2025-05-27T03:03:34.459511971Z" level=info msg="RemoveContainer for \"674e2d6390e6ed3de89bea5fa3dabc7986bbbc81c04cc21931bfc8314adb74b0\" returns successfully" May 27 03:03:34.459690 kubelet[2639]: I0527 03:03:34.459654 2639 scope.go:117] "RemoveContainer" containerID="973c23206af9dffb5f7350b110dee60145957f83dc9d176d96aa119c8d43a825" May 27 03:03:34.460919 containerd[1530]: time="2025-05-27T03:03:34.460894316Z" level=info msg="RemoveContainer for \"973c23206af9dffb5f7350b110dee60145957f83dc9d176d96aa119c8d43a825\"" May 27 03:03:34.463190 containerd[1530]: time="2025-05-27T03:03:34.463161621Z" level=info msg="RemoveContainer for \"973c23206af9dffb5f7350b110dee60145957f83dc9d176d96aa119c8d43a825\" returns successfully" May 27 03:03:34.463319 kubelet[2639]: I0527 03:03:34.463292 2639 scope.go:117] "RemoveContainer" containerID="8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c" May 27 03:03:34.463500 containerd[1530]: time="2025-05-27T03:03:34.463468916Z" level=error msg="ContainerStatus for \"8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c\": not found" May 27 03:03:34.463682 kubelet[2639]: E0527 03:03:34.463643 2639 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c\": not found" containerID="8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c" May 27 03:03:34.463682 kubelet[2639]: I0527 03:03:34.463670 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c"} err="failed to get container status \"8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c\": rpc error: code = NotFound desc = an error occurred when try to find container \"8cea0c6a2fecb0a599bb21f1f10505e17469cc42e758a9044da6b5ec58a0c49c\": not found" May 27 03:03:34.463749 kubelet[2639]: I0527 03:03:34.463689 2639 scope.go:117] "RemoveContainer" containerID="0cc3e3ef15dbec425c333ad5c83e65e1c266ed480391a0316a6a0b7e53acb727" May 27 03:03:34.463874 containerd[1530]: time="2025-05-27T03:03:34.463840613Z" level=error msg="ContainerStatus for \"0cc3e3ef15dbec425c333ad5c83e65e1c266ed480391a0316a6a0b7e53acb727\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0cc3e3ef15dbec425c333ad5c83e65e1c266ed480391a0316a6a0b7e53acb727\": not found" May 27 03:03:34.463968 kubelet[2639]: E0527 03:03:34.463948 2639 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0cc3e3ef15dbec425c333ad5c83e65e1c266ed480391a0316a6a0b7e53acb727\": not found" containerID="0cc3e3ef15dbec425c333ad5c83e65e1c266ed480391a0316a6a0b7e53acb727" May 27 03:03:34.464000 kubelet[2639]: I0527 03:03:34.463969 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0cc3e3ef15dbec425c333ad5c83e65e1c266ed480391a0316a6a0b7e53acb727"} err="failed to get container status 
\"0cc3e3ef15dbec425c333ad5c83e65e1c266ed480391a0316a6a0b7e53acb727\": rpc error: code = NotFound desc = an error occurred when try to find container \"0cc3e3ef15dbec425c333ad5c83e65e1c266ed480391a0316a6a0b7e53acb727\": not found" May 27 03:03:34.464023 kubelet[2639]: I0527 03:03:34.464001 2639 scope.go:117] "RemoveContainer" containerID="076c231b40e0a7d2f816828ee8a0b360bb29b87c618d60ce449c501ae31dcbf3" May 27 03:03:34.464170 containerd[1530]: time="2025-05-27T03:03:34.464141347Z" level=error msg="ContainerStatus for \"076c231b40e0a7d2f816828ee8a0b360bb29b87c618d60ce449c501ae31dcbf3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"076c231b40e0a7d2f816828ee8a0b360bb29b87c618d60ce449c501ae31dcbf3\": not found" May 27 03:03:34.464247 kubelet[2639]: E0527 03:03:34.464228 2639 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"076c231b40e0a7d2f816828ee8a0b360bb29b87c618d60ce449c501ae31dcbf3\": not found" containerID="076c231b40e0a7d2f816828ee8a0b360bb29b87c618d60ce449c501ae31dcbf3" May 27 03:03:34.464284 kubelet[2639]: I0527 03:03:34.464250 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"076c231b40e0a7d2f816828ee8a0b360bb29b87c618d60ce449c501ae31dcbf3"} err="failed to get container status \"076c231b40e0a7d2f816828ee8a0b360bb29b87c618d60ce449c501ae31dcbf3\": rpc error: code = NotFound desc = an error occurred when try to find container \"076c231b40e0a7d2f816828ee8a0b360bb29b87c618d60ce449c501ae31dcbf3\": not found" May 27 03:03:34.464284 kubelet[2639]: I0527 03:03:34.464263 2639 scope.go:117] "RemoveContainer" containerID="674e2d6390e6ed3de89bea5fa3dabc7986bbbc81c04cc21931bfc8314adb74b0" May 27 03:03:34.464398 containerd[1530]: time="2025-05-27T03:03:34.464374038Z" level=error msg="ContainerStatus for \"674e2d6390e6ed3de89bea5fa3dabc7986bbbc81c04cc21931bfc8314adb74b0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"674e2d6390e6ed3de89bea5fa3dabc7986bbbc81c04cc21931bfc8314adb74b0\": not found" May 27 03:03:34.464496 kubelet[2639]: E0527 03:03:34.464477 2639 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"674e2d6390e6ed3de89bea5fa3dabc7986bbbc81c04cc21931bfc8314adb74b0\": not found" containerID="674e2d6390e6ed3de89bea5fa3dabc7986bbbc81c04cc21931bfc8314adb74b0" May 27 03:03:34.464532 kubelet[2639]: I0527 03:03:34.464499 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"674e2d6390e6ed3de89bea5fa3dabc7986bbbc81c04cc21931bfc8314adb74b0"} err="failed to get container status \"674e2d6390e6ed3de89bea5fa3dabc7986bbbc81c04cc21931bfc8314adb74b0\": rpc error: code = NotFound desc = an error occurred when try to find container \"674e2d6390e6ed3de89bea5fa3dabc7986bbbc81c04cc21931bfc8314adb74b0\": not found" May 27 03:03:34.464532 kubelet[2639]: I0527 03:03:34.464511 2639 scope.go:117] "RemoveContainer" containerID="973c23206af9dffb5f7350b110dee60145957f83dc9d176d96aa119c8d43a825" May 27 03:03:34.464636 containerd[1530]: time="2025-05-27T03:03:34.464613129Z" level=error msg="ContainerStatus for \"973c23206af9dffb5f7350b110dee60145957f83dc9d176d96aa119c8d43a825\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"973c23206af9dffb5f7350b110dee60145957f83dc9d176d96aa119c8d43a825\": not found" May 27 03:03:34.464721 kubelet[2639]: E0527 03:03:34.464703 2639 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"973c23206af9dffb5f7350b110dee60145957f83dc9d176d96aa119c8d43a825\": not found" containerID="973c23206af9dffb5f7350b110dee60145957f83dc9d176d96aa119c8d43a825" May 27 03:03:34.464771 kubelet[2639]: I0527 03:03:34.464723 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"973c23206af9dffb5f7350b110dee60145957f83dc9d176d96aa119c8d43a825"} err="failed to get container status \"973c23206af9dffb5f7350b110dee60145957f83dc9d176d96aa119c8d43a825\": rpc error: code = NotFound desc = an error occurred when try to find container \"973c23206af9dffb5f7350b110dee60145957f83dc9d176d96aa119c8d43a825\": not found" May 27 03:03:34.940124 systemd[1]: var-lib-kubelet-pods-c8eba849\x2d6659\x2d4982\x2dbd1f\x2d1c8e39974902-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d686v2.mount: Deactivated successfully. May 27 03:03:34.940220 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ecc2483fc9233a35cb79dbf4503ca7a55f9202e33b271cca36e63600d40976ad-shm.mount: Deactivated successfully. May 27 03:03:34.940285 systemd[1]: var-lib-kubelet-pods-f40b02fd\x2d5261\x2d4846\x2d8b54\x2d804951593ddf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmk54c.mount: Deactivated successfully. May 27 03:03:34.940334 systemd[1]: var-lib-kubelet-pods-f40b02fd\x2d5261\x2d4846\x2d8b54\x2d804951593ddf-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 27 03:03:34.940394 systemd[1]: var-lib-kubelet-pods-f40b02fd\x2d5261\x2d4846\x2d8b54\x2d804951593ddf-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 27 03:03:35.306455 containerd[1530]: time="2025-05-27T03:03:35.306345353Z" level=info msg="TaskExit event in podsandbox handler container_id:\"17d15a453f4eeda53ebf6742fac748d3220335a7fbc9a7c0c89c46d53e864771\" id:\"17d15a453f4eeda53ebf6742fac748d3220335a7fbc9a7c0c89c46d53e864771\" pid:2878 exit_status:137 exited_at:{seconds:1748315013 nanos:977067944}" May 27 03:03:35.871561 sshd[4209]: Connection closed by 10.0.0.1 port 47622 May 27 03:03:35.871928 sshd-session[4207]: pam_unix(sshd:session): session closed for user core May 27 03:03:35.891052 systemd[1]: sshd@21-10.0.0.118:22-10.0.0.1:47622.service: Deactivated successfully. May 27 03:03:35.892549 systemd[1]: session-22.scope: Deactivated successfully. May 27 03:03:35.892725 systemd[1]: session-22.scope: Consumed 1.533s CPU time, 25.5M memory peak. May 27 03:03:35.893339 systemd-logind[1516]: Session 22 logged out. Waiting for processes to exit. May 27 03:03:35.896503 systemd[1]: Started sshd@22-10.0.0.118:22-10.0.0.1:45554.service - OpenSSH per-connection server daemon (10.0.0.1:45554). May 27 03:03:35.897122 systemd-logind[1516]: Removed session 22. May 27 03:03:35.949047 sshd[4361]: Accepted publickey for core from 10.0.0.1 port 45554 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 03:03:35.950134 sshd-session[4361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:03:35.953795 systemd-logind[1516]: New session 23 of user core. May 27 03:03:35.963021 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 27 03:03:36.205912 kubelet[2639]: I0527 03:03:36.205352 2639 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8eba849-6659-4982-bd1f-1c8e39974902" path="/var/lib/kubelet/pods/c8eba849-6659-4982-bd1f-1c8e39974902/volumes" May 27 03:03:36.205912 kubelet[2639]: I0527 03:03:36.205690 2639 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f40b02fd-5261-4846-8b54-804951593ddf" path="/var/lib/kubelet/pods/f40b02fd-5261-4846-8b54-804951593ddf/volumes" May 27 03:03:36.887741 sshd[4363]: Connection closed by 10.0.0.1 port 45554 May 27 03:03:36.888276 sshd-session[4361]: pam_unix(sshd:session): session closed for user core May 27 03:03:36.898329 systemd[1]: sshd@22-10.0.0.118:22-10.0.0.1:45554.service: Deactivated successfully. May 27 03:03:36.900607 systemd[1]: session-23.scope: Deactivated successfully. May 27 03:03:36.901918 systemd-logind[1516]: Session 23 logged out. Waiting for processes to exit. May 27 03:03:36.904647 systemd-logind[1516]: Removed session 23. May 27 03:03:36.909166 systemd[1]: Started sshd@23-10.0.0.118:22-10.0.0.1:45564.service - OpenSSH per-connection server daemon (10.0.0.1:45564). May 27 03:03:36.935857 kubelet[2639]: I0527 03:03:36.933693 2639 memory_manager.go:355] "RemoveStaleState removing state" podUID="c8eba849-6659-4982-bd1f-1c8e39974902" containerName="cilium-operator" May 27 03:03:36.935857 kubelet[2639]: I0527 03:03:36.933934 2639 memory_manager.go:355] "RemoveStaleState removing state" podUID="f40b02fd-5261-4846-8b54-804951593ddf" containerName="cilium-agent" May 27 03:03:36.947369 systemd[1]: Created slice kubepods-burstable-podc36b51b0_e1ee_4fc5_8e37_f95548a68373.slice - libcontainer container kubepods-burstable-podc36b51b0_e1ee_4fc5_8e37_f95548a68373.slice. May 27 03:03:36.975758 sshd[4375]: Accepted publickey for core from 10.0.0.1 port 45564 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 03:03:36.977130 sshd-session[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:03:36.981878 systemd-logind[1516]: New session 24 of user core. May 27 03:03:36.996969 systemd[1]: Started session-24.scope - Session 24 of User core. May 27 03:03:37.047721 sshd[4377]: Connection closed by 10.0.0.1 port 45564 May 27 03:03:37.048026 sshd-session[4375]: pam_unix(sshd:session): session closed for user core May 27 03:03:37.063755 systemd[1]: sshd@23-10.0.0.118:22-10.0.0.1:45564.service: Deactivated successfully. May 27 03:03:37.065401 systemd[1]: session-24.scope: Deactivated successfully. May 27 03:03:37.066168 systemd-logind[1516]: Session 24 logged out. Waiting for processes to exit. May 27 03:03:37.068668 systemd[1]: Started sshd@24-10.0.0.118:22-10.0.0.1:45572.service - OpenSSH per-connection server daemon (10.0.0.1:45572). 
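"Cleaned up orphaned pod volumes dir" marks the point where the kubelet can drop /var/lib/kubelet/pods/<uid>/volumes because no plugin directories remain under it. A rough stdlib-only sketch of that check, assuming the directory layout implied by the mount units above; this is illustrative, not the kubelet's actual code:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cleanupOrphanedVolumesDir removes the pod's volumes directory only once no
    // plugin subdirectories (kubernetes.io~projected, kubernetes.io~secret, ...)
    // are left underneath it.
    func cleanupOrphanedVolumesDir(podUID string) error {
        dir := filepath.Join("/var/lib/kubelet/pods", podUID, "volumes")
        entries, err := os.ReadDir(dir)
        if err != nil {
            return err
        }
        if len(entries) > 0 {
            return fmt.Errorf("%s still holds %d volume plugin dir(s)", dir, len(entries))
        }
        return os.Remove(dir)
    }

    func main() {
        if err := cleanupOrphanedVolumesDir("f40b02fd-5261-4846-8b54-804951593ddf"); err != nil {
            fmt.Println("skipping cleanup:", err)
        }
    }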
May 27 03:03:37.069995 kubelet[2639]: I0527 03:03:37.069205 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c36b51b0-e1ee-4fc5-8e37-f95548a68373-clustermesh-secrets\") pod \"cilium-q7zb4\" (UID: \"c36b51b0-e1ee-4fc5-8e37-f95548a68373\") " pod="kube-system/cilium-q7zb4" May 27 03:03:37.069995 kubelet[2639]: I0527 03:03:37.069243 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c36b51b0-e1ee-4fc5-8e37-f95548a68373-cilium-config-path\") pod \"cilium-q7zb4\" (UID: \"c36b51b0-e1ee-4fc5-8e37-f95548a68373\") " pod="kube-system/cilium-q7zb4" May 27 03:03:37.069995 kubelet[2639]: I0527 03:03:37.069260 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9x7n\" (UniqueName: \"kubernetes.io/projected/c36b51b0-e1ee-4fc5-8e37-f95548a68373-kube-api-access-s9x7n\") pod \"cilium-q7zb4\" (UID: \"c36b51b0-e1ee-4fc5-8e37-f95548a68373\") " pod="kube-system/cilium-q7zb4" May 27 03:03:37.069995 kubelet[2639]: I0527 03:03:37.069279 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c36b51b0-e1ee-4fc5-8e37-f95548a68373-bpf-maps\") pod \"cilium-q7zb4\" (UID: \"c36b51b0-e1ee-4fc5-8e37-f95548a68373\") " pod="kube-system/cilium-q7zb4" May 27 03:03:37.069995 kubelet[2639]: I0527 03:03:37.069295 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c36b51b0-e1ee-4fc5-8e37-f95548a68373-cilium-run\") pod \"cilium-q7zb4\" (UID: \"c36b51b0-e1ee-4fc5-8e37-f95548a68373\") " pod="kube-system/cilium-q7zb4" May 27 03:03:37.069995 kubelet[2639]: I0527 03:03:37.069310 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c36b51b0-e1ee-4fc5-8e37-f95548a68373-hostproc\") pod \"cilium-q7zb4\" (UID: \"c36b51b0-e1ee-4fc5-8e37-f95548a68373\") " pod="kube-system/cilium-q7zb4" May 27 03:03:37.070245 kubelet[2639]: I0527 03:03:37.069325 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c36b51b0-e1ee-4fc5-8e37-f95548a68373-cilium-ipsec-secrets\") pod \"cilium-q7zb4\" (UID: \"c36b51b0-e1ee-4fc5-8e37-f95548a68373\") " pod="kube-system/cilium-q7zb4" May 27 03:03:37.070245 kubelet[2639]: I0527 03:03:37.069342 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c36b51b0-e1ee-4fc5-8e37-f95548a68373-cni-path\") pod \"cilium-q7zb4\" (UID: \"c36b51b0-e1ee-4fc5-8e37-f95548a68373\") " pod="kube-system/cilium-q7zb4" May 27 03:03:37.070245 kubelet[2639]: I0527 03:03:37.069358 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c36b51b0-e1ee-4fc5-8e37-f95548a68373-etc-cni-netd\") pod \"cilium-q7zb4\" (UID: \"c36b51b0-e1ee-4fc5-8e37-f95548a68373\") " pod="kube-system/cilium-q7zb4" May 27 03:03:37.070245 kubelet[2639]: I0527 03:03:37.069374 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/c36b51b0-e1ee-4fc5-8e37-f95548a68373-lib-modules\") pod \"cilium-q7zb4\" (UID: \"c36b51b0-e1ee-4fc5-8e37-f95548a68373\") " pod="kube-system/cilium-q7zb4" May 27 03:03:37.070245 kubelet[2639]: I0527 03:03:37.069391 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c36b51b0-e1ee-4fc5-8e37-f95548a68373-host-proc-sys-kernel\") pod \"cilium-q7zb4\" (UID: \"c36b51b0-e1ee-4fc5-8e37-f95548a68373\") " pod="kube-system/cilium-q7zb4" May 27 03:03:37.070245 kubelet[2639]: I0527 03:03:37.069408 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c36b51b0-e1ee-4fc5-8e37-f95548a68373-xtables-lock\") pod \"cilium-q7zb4\" (UID: \"c36b51b0-e1ee-4fc5-8e37-f95548a68373\") " pod="kube-system/cilium-q7zb4" May 27 03:03:37.070066 systemd-logind[1516]: Removed session 24. May 27 03:03:37.070395 kubelet[2639]: I0527 03:03:37.069422 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c36b51b0-e1ee-4fc5-8e37-f95548a68373-hubble-tls\") pod \"cilium-q7zb4\" (UID: \"c36b51b0-e1ee-4fc5-8e37-f95548a68373\") " pod="kube-system/cilium-q7zb4" May 27 03:03:37.070395 kubelet[2639]: I0527 03:03:37.069437 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c36b51b0-e1ee-4fc5-8e37-f95548a68373-cilium-cgroup\") pod \"cilium-q7zb4\" (UID: \"c36b51b0-e1ee-4fc5-8e37-f95548a68373\") " pod="kube-system/cilium-q7zb4" May 27 03:03:37.070395 kubelet[2639]: I0527 03:03:37.069452 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c36b51b0-e1ee-4fc5-8e37-f95548a68373-host-proc-sys-net\") pod \"cilium-q7zb4\" (UID: \"c36b51b0-e1ee-4fc5-8e37-f95548a68373\") " pod="kube-system/cilium-q7zb4" May 27 03:03:37.119936 sshd[4384]: Accepted publickey for core from 10.0.0.1 port 45572 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 03:03:37.122416 sshd-session[4384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:03:37.126309 systemd-logind[1516]: New session 25 of user core. May 27 03:03:37.137984 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 27 03:03:37.253610 containerd[1530]: time="2025-05-27T03:03:37.253537695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q7zb4,Uid:c36b51b0-e1ee-4fc5-8e37-f95548a68373,Namespace:kube-system,Attempt:0,}" May 27 03:03:37.266046 containerd[1530]: time="2025-05-27T03:03:37.265998638Z" level=info msg="connecting to shim 31314463ccf8db54a30bccdb0f3642178ec4ae27827b7b914cdacfdbc7800d4a" address="unix:///run/containerd/s/bc6e1170170a4a23df75f55ef1b86a142fd6e0298e389e9089c6bfb4b8bcba3e" namespace=k8s.io protocol=ttrpc version=3 May 27 03:03:37.279719 kubelet[2639]: E0527 03:03:37.279457 2639 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 27 03:03:37.289995 systemd[1]: Started cri-containerd-31314463ccf8db54a30bccdb0f3642178ec4ae27827b7b914cdacfdbc7800d4a.scope - libcontainer container 31314463ccf8db54a30bccdb0f3642178ec4ae27827b7b914cdacfdbc7800d4a. May 27 03:03:37.312086 containerd[1530]: time="2025-05-27T03:03:37.312040843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q7zb4,Uid:c36b51b0-e1ee-4fc5-8e37-f95548a68373,Namespace:kube-system,Attempt:0,} returns sandbox id \"31314463ccf8db54a30bccdb0f3642178ec4ae27827b7b914cdacfdbc7800d4a\"" May 27 03:03:37.315290 containerd[1530]: time="2025-05-27T03:03:37.315257263Z" level=info msg="CreateContainer within sandbox \"31314463ccf8db54a30bccdb0f3642178ec4ae27827b7b914cdacfdbc7800d4a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 27 03:03:37.321344 containerd[1530]: time="2025-05-27T03:03:37.321301647Z" level=info msg="Container 99be1d6bebd424390931cfe39567499cceed57d1a80f87b23a5e7e712e5cff40: CDI devices from CRI Config.CDIDevices: []" May 27 03:03:37.331916 containerd[1530]: time="2025-05-27T03:03:37.331880427Z" level=info msg="CreateContainer within sandbox \"31314463ccf8db54a30bccdb0f3642178ec4ae27827b7b914cdacfdbc7800d4a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"99be1d6bebd424390931cfe39567499cceed57d1a80f87b23a5e7e712e5cff40\"" May 27 03:03:37.333172 containerd[1530]: time="2025-05-27T03:03:37.333150683Z" level=info msg="StartContainer for \"99be1d6bebd424390931cfe39567499cceed57d1a80f87b23a5e7e712e5cff40\"" May 27 03:03:37.338745 containerd[1530]: time="2025-05-27T03:03:37.338709885Z" level=info msg="connecting to shim 99be1d6bebd424390931cfe39567499cceed57d1a80f87b23a5e7e712e5cff40" address="unix:///run/containerd/s/bc6e1170170a4a23df75f55ef1b86a142fd6e0298e389e9089c6bfb4b8bcba3e" protocol=ttrpc version=3 May 27 03:03:37.362977 systemd[1]: Started cri-containerd-99be1d6bebd424390931cfe39567499cceed57d1a80f87b23a5e7e712e5cff40.scope - libcontainer container 99be1d6bebd424390931cfe39567499cceed57d1a80f87b23a5e7e712e5cff40. May 27 03:03:37.385347 containerd[1530]: time="2025-05-27T03:03:37.385317955Z" level=info msg="StartContainer for \"99be1d6bebd424390931cfe39567499cceed57d1a80f87b23a5e7e712e5cff40\" returns successfully" May 27 03:03:37.399903 systemd[1]: cri-containerd-99be1d6bebd424390931cfe39567499cceed57d1a80f87b23a5e7e712e5cff40.scope: Deactivated successfully. 
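RunPodSandbox returns the sandbox ID, and the first init container, mount-cgroup, is created, started, and scoped down as soon as it exits. The same objects are visible through the containerd Go client in the k8s.io namespace; a short sketch assuming the default socket path, with error handling reduced to the essentials:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed containers live in the "k8s.io" containerd namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // ID of the mount-cgroup init container from the log above; LoadContainer
        // returns NotFound once the CRI has garbage-collected it.
        id := "99be1d6bebd424390931cfe39567499cceed57d1a80f87b23a5e7e712e5cff40"
        ctr, err := client.LoadContainer(ctx, id)
        if err != nil {
            log.Fatal(err)
        }
        task, err := ctr.Task(ctx, nil)
        if err != nil {
            log.Fatal(err)
        }
        st, err := task.Status(ctx)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("task status:", st.Status)
    }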
May 27 03:03:37.407717 containerd[1530]: time="2025-05-27T03:03:37.407659288Z" level=info msg="received exit event container_id:\"99be1d6bebd424390931cfe39567499cceed57d1a80f87b23a5e7e712e5cff40\" id:\"99be1d6bebd424390931cfe39567499cceed57d1a80f87b23a5e7e712e5cff40\" pid:4456 exited_at:{seconds:1748315017 nanos:407437438}" May 27 03:03:37.408098 containerd[1530]: time="2025-05-27T03:03:37.407891538Z" level=info msg="TaskExit event in podsandbox handler container_id:\"99be1d6bebd424390931cfe39567499cceed57d1a80f87b23a5e7e712e5cff40\" id:\"99be1d6bebd424390931cfe39567499cceed57d1a80f87b23a5e7e712e5cff40\" pid:4456 exited_at:{seconds:1748315017 nanos:407437438}" May 27 03:03:38.438076 containerd[1530]: time="2025-05-27T03:03:38.437628256Z" level=info msg="CreateContainer within sandbox \"31314463ccf8db54a30bccdb0f3642178ec4ae27827b7b914cdacfdbc7800d4a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 27 03:03:38.449917 containerd[1530]: time="2025-05-27T03:03:38.449880738Z" level=info msg="Container 1216019cbed8ee2462e8f20c83cf79e94cf22d08c14ab7e03575485d02343f99: CDI devices from CRI Config.CDIDevices: []" May 27 03:03:38.455977 containerd[1530]: time="2025-05-27T03:03:38.455884633Z" level=info msg="CreateContainer within sandbox \"31314463ccf8db54a30bccdb0f3642178ec4ae27827b7b914cdacfdbc7800d4a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1216019cbed8ee2462e8f20c83cf79e94cf22d08c14ab7e03575485d02343f99\"" May 27 03:03:38.456401 containerd[1530]: time="2025-05-27T03:03:38.456381055Z" level=info msg="StartContainer for \"1216019cbed8ee2462e8f20c83cf79e94cf22d08c14ab7e03575485d02343f99\"" May 27 03:03:38.457364 containerd[1530]: time="2025-05-27T03:03:38.457296254Z" level=info msg="connecting to shim 1216019cbed8ee2462e8f20c83cf79e94cf22d08c14ab7e03575485d02343f99" address="unix:///run/containerd/s/bc6e1170170a4a23df75f55ef1b86a142fd6e0298e389e9089c6bfb4b8bcba3e" protocol=ttrpc version=3 May 27 03:03:38.479973 systemd[1]: Started cri-containerd-1216019cbed8ee2462e8f20c83cf79e94cf22d08c14ab7e03575485d02343f99.scope - libcontainer container 1216019cbed8ee2462e8f20c83cf79e94cf22d08c14ab7e03575485d02343f99. May 27 03:03:38.504125 containerd[1530]: time="2025-05-27T03:03:38.504024404Z" level=info msg="StartContainer for \"1216019cbed8ee2462e8f20c83cf79e94cf22d08c14ab7e03575485d02343f99\" returns successfully" May 27 03:03:38.511546 systemd[1]: cri-containerd-1216019cbed8ee2462e8f20c83cf79e94cf22d08c14ab7e03575485d02343f99.scope: Deactivated successfully. May 27 03:03:38.512706 containerd[1530]: time="2025-05-27T03:03:38.512678693Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1216019cbed8ee2462e8f20c83cf79e94cf22d08c14ab7e03575485d02343f99\" id:\"1216019cbed8ee2462e8f20c83cf79e94cf22d08c14ab7e03575485d02343f99\" pid:4503 exited_at:{seconds:1748315018 nanos:512451404}" May 27 03:03:38.512760 containerd[1530]: time="2025-05-27T03:03:38.512747976Z" level=info msg="received exit event container_id:\"1216019cbed8ee2462e8f20c83cf79e94cf22d08c14ab7e03575485d02343f99\" id:\"1216019cbed8ee2462e8f20c83cf79e94cf22d08c14ab7e03575485d02343f99\" pid:4503 exited_at:{seconds:1748315018 nanos:512451404}" May 27 03:03:39.173635 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1216019cbed8ee2462e8f20c83cf79e94cf22d08c14ab7e03575485d02343f99-rootfs.mount: Deactivated successfully. 
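The apply-sysctl-overwrites init container exists to re-apply the sysctls the agent depends on; the log only records its start and exit. A minimal stdlib sketch of the kind of write such a container performs; the specific key and value are an example, not taken from this log:

    package main

    import (
        "log"
        "os"
    )

    func main() {
        // Writing /proc/sys directly is equivalent to
        // `sysctl -w net.ipv4.conf.all.rp_filter=0`, a setting Cilium commonly
        // relaxes so redirected traffic is not dropped by the kernel.
        if err := os.WriteFile("/proc/sys/net/ipv4/conf/all/rp_filter", []byte("0\n"), 0o644); err != nil {
            log.Fatal(err)
        }
    }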
May 27 03:03:39.440252 containerd[1530]: time="2025-05-27T03:03:39.439950796Z" level=info msg="CreateContainer within sandbox \"31314463ccf8db54a30bccdb0f3642178ec4ae27827b7b914cdacfdbc7800d4a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 27 03:03:39.449281 containerd[1530]: time="2025-05-27T03:03:39.449248944Z" level=info msg="Container 23c7154942e9e8ba474ba119e9ba1eef3885b2941722c9bb8cc5d5b1dce498b6: CDI devices from CRI Config.CDIDevices: []" May 27 03:03:39.456673 containerd[1530]: time="2025-05-27T03:03:39.456572969Z" level=info msg="CreateContainer within sandbox \"31314463ccf8db54a30bccdb0f3642178ec4ae27827b7b914cdacfdbc7800d4a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"23c7154942e9e8ba474ba119e9ba1eef3885b2941722c9bb8cc5d5b1dce498b6\"" May 27 03:03:39.457126 containerd[1530]: time="2025-05-27T03:03:39.457098631Z" level=info msg="StartContainer for \"23c7154942e9e8ba474ba119e9ba1eef3885b2941722c9bb8cc5d5b1dce498b6\"" May 27 03:03:39.458459 containerd[1530]: time="2025-05-27T03:03:39.458383485Z" level=info msg="connecting to shim 23c7154942e9e8ba474ba119e9ba1eef3885b2941722c9bb8cc5d5b1dce498b6" address="unix:///run/containerd/s/bc6e1170170a4a23df75f55ef1b86a142fd6e0298e389e9089c6bfb4b8bcba3e" protocol=ttrpc version=3 May 27 03:03:39.481003 systemd[1]: Started cri-containerd-23c7154942e9e8ba474ba119e9ba1eef3885b2941722c9bb8cc5d5b1dce498b6.scope - libcontainer container 23c7154942e9e8ba474ba119e9ba1eef3885b2941722c9bb8cc5d5b1dce498b6. May 27 03:03:39.511022 systemd[1]: cri-containerd-23c7154942e9e8ba474ba119e9ba1eef3885b2941722c9bb8cc5d5b1dce498b6.scope: Deactivated successfully. May 27 03:03:39.511716 containerd[1530]: time="2025-05-27T03:03:39.511391094Z" level=info msg="StartContainer for \"23c7154942e9e8ba474ba119e9ba1eef3885b2941722c9bb8cc5d5b1dce498b6\" returns successfully" May 27 03:03:39.511887 containerd[1530]: time="2025-05-27T03:03:39.511862194Z" level=info msg="TaskExit event in podsandbox handler container_id:\"23c7154942e9e8ba474ba119e9ba1eef3885b2941722c9bb8cc5d5b1dce498b6\" id:\"23c7154942e9e8ba474ba119e9ba1eef3885b2941722c9bb8cc5d5b1dce498b6\" pid:4547 exited_at:{seconds:1748315019 nanos:511558061}" May 27 03:03:39.512593 containerd[1530]: time="2025-05-27T03:03:39.512062482Z" level=info msg="received exit event container_id:\"23c7154942e9e8ba474ba119e9ba1eef3885b2941722c9bb8cc5d5b1dce498b6\" id:\"23c7154942e9e8ba474ba119e9ba1eef3885b2941722c9bb8cc5d5b1dce498b6\" pid:4547 exited_at:{seconds:1748315019 nanos:511558061}" May 27 03:03:40.173818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23c7154942e9e8ba474ba119e9ba1eef3885b2941722c9bb8cc5d5b1dce498b6-rootfs.mount: Deactivated successfully. May 27 03:03:40.444753 containerd[1530]: time="2025-05-27T03:03:40.444552198Z" level=info msg="CreateContainer within sandbox \"31314463ccf8db54a30bccdb0f3642178ec4ae27827b7b914cdacfdbc7800d4a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 27 03:03:40.454963 containerd[1530]: time="2025-05-27T03:03:40.454920821Z" level=info msg="Container f73458e3bdb329dbe4fd89f52e18241ed955790ac1336cd21c6df6387c3cf630: CDI devices from CRI Config.CDIDevices: []" May 27 03:03:40.455309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2475739391.mount: Deactivated successfully. 
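mount-bpf-fs makes sure a BPF filesystem is mounted at /sys/fs/bpf before the agent starts. A sketch of that mount with golang.org/x/sys/unix, assuming the standard mountpoint; a real init container would first check whether it is already mounted:

    package main

    import (
        "log"

        "golang.org/x/sys/unix"
    )

    func main() {
        // Equivalent to: mount -t bpf bpffs /sys/fs/bpf
        if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
            log.Fatalf("mounting bpffs: %v", err)
        }
    }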
May 27 03:03:40.462021 containerd[1530]: time="2025-05-27T03:03:40.461980389Z" level=info msg="CreateContainer within sandbox \"31314463ccf8db54a30bccdb0f3642178ec4ae27827b7b914cdacfdbc7800d4a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f73458e3bdb329dbe4fd89f52e18241ed955790ac1336cd21c6df6387c3cf630\"" May 27 03:03:40.462758 containerd[1530]: time="2025-05-27T03:03:40.462704618Z" level=info msg="StartContainer for \"f73458e3bdb329dbe4fd89f52e18241ed955790ac1336cd21c6df6387c3cf630\"" May 27 03:03:40.463783 containerd[1530]: time="2025-05-27T03:03:40.463719060Z" level=info msg="connecting to shim f73458e3bdb329dbe4fd89f52e18241ed955790ac1336cd21c6df6387c3cf630" address="unix:///run/containerd/s/bc6e1170170a4a23df75f55ef1b86a142fd6e0298e389e9089c6bfb4b8bcba3e" protocol=ttrpc version=3 May 27 03:03:40.483975 systemd[1]: Started cri-containerd-f73458e3bdb329dbe4fd89f52e18241ed955790ac1336cd21c6df6387c3cf630.scope - libcontainer container f73458e3bdb329dbe4fd89f52e18241ed955790ac1336cd21c6df6387c3cf630. May 27 03:03:40.507876 systemd[1]: cri-containerd-f73458e3bdb329dbe4fd89f52e18241ed955790ac1336cd21c6df6387c3cf630.scope: Deactivated successfully. May 27 03:03:40.508769 containerd[1530]: time="2025-05-27T03:03:40.508736896Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f73458e3bdb329dbe4fd89f52e18241ed955790ac1336cd21c6df6387c3cf630\" id:\"f73458e3bdb329dbe4fd89f52e18241ed955790ac1336cd21c6df6387c3cf630\" pid:4585 exited_at:{seconds:1748315020 nanos:508292518}" May 27 03:03:40.511298 containerd[1530]: time="2025-05-27T03:03:40.511261479Z" level=info msg="received exit event container_id:\"f73458e3bdb329dbe4fd89f52e18241ed955790ac1336cd21c6df6387c3cf630\" id:\"f73458e3bdb329dbe4fd89f52e18241ed955790ac1336cd21c6df6387c3cf630\" pid:4585 exited_at:{seconds:1748315020 nanos:508292518}" May 27 03:03:40.511363 containerd[1530]: time="2025-05-27T03:03:40.511260279Z" level=info msg="StartContainer for \"f73458e3bdb329dbe4fd89f52e18241ed955790ac1336cd21c6df6387c3cf630\" returns successfully" May 27 03:03:40.528023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f73458e3bdb329dbe4fd89f52e18241ed955790ac1336cd21c6df6387c3cf630-rootfs.mount: Deactivated successfully. 
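clean-cilium-state clears state left behind by the agent torn down at 03:03:34 so the new one starts fresh. A stdlib sketch of the core of that step; the state path is the usual location under the cilium-run host path and is an assumption here, and the real container can also remove pinned BPF maps:

    package main

    import (
        "log"
        "os"
    )

    func main() {
        // /var/run/cilium is backed by the cilium-run host-path volume; the
        // state subdirectory holds the previous agent's endpoint state.
        if err := os.RemoveAll("/var/run/cilium/state"); err != nil {
            log.Fatal(err)
        }
    }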
May 27 03:03:41.448784 containerd[1530]: time="2025-05-27T03:03:41.448742974Z" level=info msg="CreateContainer within sandbox \"31314463ccf8db54a30bccdb0f3642178ec4ae27827b7b914cdacfdbc7800d4a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 27 03:03:41.486913 containerd[1530]: time="2025-05-27T03:03:41.486852096Z" level=info msg="Container a003c1ae09aefbcc2b40abd0d1f5820de23408930fc9ab73bc4d106700c2187b: CDI devices from CRI Config.CDIDevices: []" May 27 03:03:41.493597 containerd[1530]: time="2025-05-27T03:03:41.493437639Z" level=info msg="CreateContainer within sandbox \"31314463ccf8db54a30bccdb0f3642178ec4ae27827b7b914cdacfdbc7800d4a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a003c1ae09aefbcc2b40abd0d1f5820de23408930fc9ab73bc4d106700c2187b\"" May 27 03:03:41.494352 containerd[1530]: time="2025-05-27T03:03:41.494172788Z" level=info msg="StartContainer for \"a003c1ae09aefbcc2b40abd0d1f5820de23408930fc9ab73bc4d106700c2187b\"" May 27 03:03:41.495033 containerd[1530]: time="2025-05-27T03:03:41.495003741Z" level=info msg="connecting to shim a003c1ae09aefbcc2b40abd0d1f5820de23408930fc9ab73bc4d106700c2187b" address="unix:///run/containerd/s/bc6e1170170a4a23df75f55ef1b86a142fd6e0298e389e9089c6bfb4b8bcba3e" protocol=ttrpc version=3 May 27 03:03:41.524025 systemd[1]: Started cri-containerd-a003c1ae09aefbcc2b40abd0d1f5820de23408930fc9ab73bc4d106700c2187b.scope - libcontainer container a003c1ae09aefbcc2b40abd0d1f5820de23408930fc9ab73bc4d106700c2187b. May 27 03:03:41.554146 containerd[1530]: time="2025-05-27T03:03:41.554099621Z" level=info msg="StartContainer for \"a003c1ae09aefbcc2b40abd0d1f5820de23408930fc9ab73bc4d106700c2187b\" returns successfully" May 27 03:03:41.604299 containerd[1530]: time="2025-05-27T03:03:41.604255343Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a003c1ae09aefbcc2b40abd0d1f5820de23408930fc9ab73bc4d106700c2187b\" id:\"b8505e1f45e9e35002d74e24dda8c252ec88ba02d621feb839deeb2046f67aa8\" pid:4652 exited_at:{seconds:1748315021 nanos:603921450}" May 27 03:03:41.811845 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 27 03:03:43.567886 containerd[1530]: time="2025-05-27T03:03:43.567813692Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a003c1ae09aefbcc2b40abd0d1f5820de23408930fc9ab73bc4d106700c2187b\" id:\"8dd496707f15bd0e073167194e1989625ca351fa7db4d0f80146fad8b52386c9\" pid:4816 exit_status:1 exited_at:{seconds:1748315023 nanos:567504400}" May 27 03:03:44.750083 systemd-networkd[1440]: lxc_health: Link UP May 27 03:03:44.761257 systemd-networkd[1440]: lxc_health: Gained carrier May 27 03:03:45.293527 kubelet[2639]: I0527 03:03:45.293456 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q7zb4" podStartSLOduration=9.29344151 podStartE2EDuration="9.29344151s" podCreationTimestamp="2025-05-27 03:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:03:42.468064202 +0000 UTC m=+80.353870392" watchObservedRunningTime="2025-05-27 03:03:45.29344151 +0000 UTC m=+83.179247660" May 27 03:03:45.738742 containerd[1530]: time="2025-05-27T03:03:45.738697784Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a003c1ae09aefbcc2b40abd0d1f5820de23408930fc9ab73bc4d106700c2187b\" id:\"d6a17f49557cf2150aa4281a45e746d5be914cf32070f8d101f9dde2df0393ca\" pid:5183 exited_at:{seconds:1748315025 
nanos:737981158}" May 27 03:03:46.684008 systemd-networkd[1440]: lxc_health: Gained IPv6LL May 27 03:03:47.851227 containerd[1530]: time="2025-05-27T03:03:47.851182807Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a003c1ae09aefbcc2b40abd0d1f5820de23408930fc9ab73bc4d106700c2187b\" id:\"8917aa0309a7823294c519fccb2b1f90ed8d440779d9026ffa02bf9de78a8880\" pid:5215 exited_at:{seconds:1748315027 nanos:850741071}" May 27 03:03:49.987161 containerd[1530]: time="2025-05-27T03:03:49.987121204Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a003c1ae09aefbcc2b40abd0d1f5820de23408930fc9ab73bc4d106700c2187b\" id:\"a02f21d67be2b98a473c86cd742902490c5036628f1fafb01368b6278b96a71f\" pid:5246 exited_at:{seconds:1748315029 nanos:986792393}" May 27 03:03:50.002784 sshd[4386]: Connection closed by 10.0.0.1 port 45572 May 27 03:03:50.003233 sshd-session[4384]: pam_unix(sshd:session): session closed for user core May 27 03:03:50.007042 systemd-logind[1516]: Session 25 logged out. Waiting for processes to exit. May 27 03:03:50.007185 systemd[1]: sshd@24-10.0.0.118:22-10.0.0.1:45572.service: Deactivated successfully. May 27 03:03:50.008817 systemd[1]: session-25.scope: Deactivated successfully. May 27 03:03:50.011325 systemd-logind[1516]: Removed session 25.