May 27 02:45:34.805062 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 27 02:45:34.805083 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 27 01:20:04 -00 2025 May 27 02:45:34.805092 kernel: KASLR enabled May 27 02:45:34.805098 kernel: efi: EFI v2.7 by EDK II May 27 02:45:34.805103 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 May 27 02:45:34.805108 kernel: random: crng init done May 27 02:45:34.805115 kernel: secureboot: Secure boot disabled May 27 02:45:34.805121 kernel: ACPI: Early table checksum verification disabled May 27 02:45:34.805127 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) May 27 02:45:34.805133 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) May 27 02:45:34.805139 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 27 02:45:34.805145 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 27 02:45:34.805150 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 27 02:45:34.805156 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 27 02:45:34.805163 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 27 02:45:34.805171 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 02:45:34.805177 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 27 02:45:34.805183 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 27 02:45:34.805189 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 27 02:45:34.805195 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 27 02:45:34.805201 kernel: ACPI: Use ACPI SPCR as default console: Yes May 27 02:45:34.805207 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 27 02:45:34.805213 kernel: NODE_DATA(0) allocated [mem 0xdc965dc0-0xdc96cfff] May 27 02:45:34.805219 kernel: Zone ranges: May 27 02:45:34.805225 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 27 02:45:34.805232 kernel: DMA32 empty May 27 02:45:34.805238 kernel: Normal empty May 27 02:45:34.805244 kernel: Device empty May 27 02:45:34.805249 kernel: Movable zone start for each node May 27 02:45:34.805256 kernel: Early memory node ranges May 27 02:45:34.805262 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] May 27 02:45:34.805268 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] May 27 02:45:34.805273 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] May 27 02:45:34.805279 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] May 27 02:45:34.805285 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] May 27 02:45:34.805291 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] May 27 02:45:34.805297 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] May 27 02:45:34.805304 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] May 27 02:45:34.805310 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] May 27 02:45:34.805316 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] May 27 02:45:34.805325 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] May 
27 02:45:34.805331 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] May 27 02:45:34.805338 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 27 02:45:34.805345 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 27 02:45:34.805352 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 27 02:45:34.805358 kernel: psci: probing for conduit method from ACPI. May 27 02:45:34.805364 kernel: psci: PSCIv1.1 detected in firmware. May 27 02:45:34.805371 kernel: psci: Using standard PSCI v0.2 function IDs May 27 02:45:34.805377 kernel: psci: Trusted OS migration not required May 27 02:45:34.805383 kernel: psci: SMC Calling Convention v1.1 May 27 02:45:34.805390 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 27 02:45:34.805396 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168 May 27 02:45:34.805403 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096 May 27 02:45:34.805411 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 27 02:45:34.805417 kernel: Detected PIPT I-cache on CPU0 May 27 02:45:34.805423 kernel: CPU features: detected: GIC system register CPU interface May 27 02:45:34.805430 kernel: CPU features: detected: Spectre-v4 May 27 02:45:34.805436 kernel: CPU features: detected: Spectre-BHB May 27 02:45:34.805442 kernel: CPU features: kernel page table isolation forced ON by KASLR May 27 02:45:34.805448 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 27 02:45:34.805455 kernel: CPU features: detected: ARM erratum 1418040 May 27 02:45:34.805461 kernel: CPU features: detected: SSBS not fully self-synchronizing May 27 02:45:34.805467 kernel: alternatives: applying boot alternatives May 27 02:45:34.805475 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=4c3f98aae7a61b3dcbab6391ba922461adab29dbcb79fd6e18169f93c5a4ab5a May 27 02:45:34.805483 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 27 02:45:34.805489 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 27 02:45:34.805496 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 27 02:45:34.805502 kernel: Fallback order for Node 0: 0 May 27 02:45:34.805509 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 May 27 02:45:34.805515 kernel: Policy zone: DMA May 27 02:45:34.805521 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 27 02:45:34.805528 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB May 27 02:45:34.805534 kernel: software IO TLB: area num 4. May 27 02:45:34.805540 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB May 27 02:45:34.805547 kernel: software IO TLB: mapped [mem 0x00000000d8c00000-0x00000000d9000000] (4MB) May 27 02:45:34.805553 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 27 02:45:34.805561 kernel: rcu: Preemptible hierarchical RCU implementation. May 27 02:45:34.805567 kernel: rcu: RCU event tracing is enabled. May 27 02:45:34.805574 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 27 02:45:34.805580 kernel: Trampoline variant of Tasks RCU enabled. 
May 27 02:45:34.805587 kernel: Tracing variant of Tasks RCU enabled. May 27 02:45:34.805594 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 27 02:45:34.805600 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 27 02:45:34.805607 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 27 02:45:34.805613 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 27 02:45:34.805620 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 27 02:45:34.805626 kernel: GICv3: 256 SPIs implemented May 27 02:45:34.805634 kernel: GICv3: 0 Extended SPIs implemented May 27 02:45:34.805641 kernel: Root IRQ handler: gic_handle_irq May 27 02:45:34.805647 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 27 02:45:34.805654 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 May 27 02:45:34.805668 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 27 02:45:34.805674 kernel: ITS [mem 0x08080000-0x0809ffff] May 27 02:45:34.805681 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1) May 27 02:45:34.805687 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1) May 27 02:45:34.805694 kernel: GICv3: using LPI property table @0x00000000400f0000 May 27 02:45:34.805700 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000 May 27 02:45:34.805707 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 27 02:45:34.805713 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 27 02:45:34.805721 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 27 02:45:34.805728 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 27 02:45:34.805734 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 27 02:45:34.805741 kernel: arm-pv: using stolen time PV May 27 02:45:34.805748 kernel: Console: colour dummy device 80x25 May 27 02:45:34.805755 kernel: ACPI: Core revision 20240827 May 27 02:45:34.805762 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 27 02:45:34.805768 kernel: pid_max: default: 32768 minimum: 301 May 27 02:45:34.805774 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 27 02:45:34.805783 kernel: landlock: Up and running. May 27 02:45:34.805789 kernel: SELinux: Initializing. May 27 02:45:34.805796 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 27 02:45:34.805802 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 27 02:45:34.805810 kernel: rcu: Hierarchical SRCU implementation. May 27 02:45:34.805817 kernel: rcu: Max phase no-delay instances is 400. May 27 02:45:34.805823 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level May 27 02:45:34.805830 kernel: Remapping and enabling EFI services. May 27 02:45:34.805836 kernel: smp: Bringing up secondary CPUs ... 
May 27 02:45:34.805843 kernel: Detected PIPT I-cache on CPU1 May 27 02:45:34.805856 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 27 02:45:34.805863 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000 May 27 02:45:34.805871 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 27 02:45:34.805878 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 27 02:45:34.805885 kernel: Detected PIPT I-cache on CPU2 May 27 02:45:34.805892 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 27 02:45:34.805899 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000 May 27 02:45:34.805907 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 27 02:45:34.805914 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 27 02:45:34.805921 kernel: Detected PIPT I-cache on CPU3 May 27 02:45:34.805928 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 27 02:45:34.805944 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000 May 27 02:45:34.805952 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 27 02:45:34.805958 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 27 02:45:34.805965 kernel: smp: Brought up 1 node, 4 CPUs May 27 02:45:34.805972 kernel: SMP: Total of 4 processors activated. May 27 02:45:34.805979 kernel: CPU: All CPU(s) started at EL1 May 27 02:45:34.805987 kernel: CPU features: detected: 32-bit EL0 Support May 27 02:45:34.805994 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 27 02:45:34.806002 kernel: CPU features: detected: Common not Private translations May 27 02:45:34.806009 kernel: CPU features: detected: CRC32 instructions May 27 02:45:34.806016 kernel: CPU features: detected: Enhanced Virtualization Traps May 27 02:45:34.806023 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 27 02:45:34.806030 kernel: CPU features: detected: LSE atomic instructions May 27 02:45:34.806036 kernel: CPU features: detected: Privileged Access Never May 27 02:45:34.806043 kernel: CPU features: detected: RAS Extension Support May 27 02:45:34.806052 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 27 02:45:34.806059 kernel: alternatives: applying system-wide alternatives May 27 02:45:34.806066 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 May 27 02:45:34.806073 kernel: Memory: 2440984K/2572288K available (11072K kernel code, 2276K rwdata, 8936K rodata, 39424K init, 1034K bss, 125536K reserved, 0K cma-reserved) May 27 02:45:34.806080 kernel: devtmpfs: initialized May 27 02:45:34.806087 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 27 02:45:34.806094 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 27 02:45:34.806101 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 27 02:45:34.806107 kernel: 0 pages in range for non-PLT usage May 27 02:45:34.806115 kernel: 508544 pages in range for PLT usage May 27 02:45:34.806122 kernel: pinctrl core: initialized pinctrl subsystem May 27 02:45:34.806129 kernel: SMBIOS 3.0.0 present. 
May 27 02:45:34.806136 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 May 27 02:45:34.806143 kernel: DMI: Memory slots populated: 1/1 May 27 02:45:34.806150 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 27 02:45:34.806157 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 27 02:45:34.806164 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 27 02:45:34.806171 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 27 02:45:34.806179 kernel: audit: initializing netlink subsys (disabled) May 27 02:45:34.806186 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1 May 27 02:45:34.806193 kernel: thermal_sys: Registered thermal governor 'step_wise' May 27 02:45:34.806200 kernel: cpuidle: using governor menu May 27 02:45:34.806207 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 27 02:45:34.806214 kernel: ASID allocator initialised with 32768 entries May 27 02:45:34.806220 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 27 02:45:34.806227 kernel: Serial: AMBA PL011 UART driver May 27 02:45:34.806234 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 27 02:45:34.806243 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 27 02:45:34.806249 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 27 02:45:34.806256 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 27 02:45:34.806263 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 27 02:45:34.806270 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 27 02:45:34.806277 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 27 02:45:34.806284 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 27 02:45:34.806291 kernel: ACPI: Added _OSI(Module Device) May 27 02:45:34.806298 kernel: ACPI: Added _OSI(Processor Device) May 27 02:45:34.806306 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 27 02:45:34.806312 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 27 02:45:34.806319 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 27 02:45:34.806326 kernel: ACPI: Interpreter enabled May 27 02:45:34.806333 kernel: ACPI: Using GIC for interrupt routing May 27 02:45:34.806340 kernel: ACPI: MCFG table detected, 1 entries May 27 02:45:34.806347 kernel: ACPI: CPU0 has been hot-added May 27 02:45:34.806353 kernel: ACPI: CPU1 has been hot-added May 27 02:45:34.806360 kernel: ACPI: CPU2 has been hot-added May 27 02:45:34.806367 kernel: ACPI: CPU3 has been hot-added May 27 02:45:34.806376 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 27 02:45:34.806383 kernel: printk: legacy console [ttyAMA0] enabled May 27 02:45:34.806390 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 27 02:45:34.806527 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 27 02:45:34.806597 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 27 02:45:34.806667 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 27 02:45:34.806735 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 27 02:45:34.806800 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 27 02:45:34.806809 kernel: ACPI: Remapped I/O 
0x000000003eff0000 to [io 0x0000-0xffff window] May 27 02:45:34.806816 kernel: PCI host bridge to bus 0000:00 May 27 02:45:34.806883 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 27 02:45:34.806981 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 27 02:45:34.807043 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 27 02:45:34.807098 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 27 02:45:34.807177 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint May 27 02:45:34.807249 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint May 27 02:45:34.807314 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] May 27 02:45:34.807376 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] May 27 02:45:34.807438 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] May 27 02:45:34.807499 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned May 27 02:45:34.807561 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned May 27 02:45:34.807625 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned May 27 02:45:34.807689 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 27 02:45:34.807746 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 27 02:45:34.807801 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 27 02:45:34.807810 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 27 02:45:34.807817 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 27 02:45:34.807824 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 27 02:45:34.807833 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 27 02:45:34.807840 kernel: iommu: Default domain type: Translated May 27 02:45:34.807847 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 27 02:45:34.807855 kernel: efivars: Registered efivars operations May 27 02:45:34.807861 kernel: vgaarb: loaded May 27 02:45:34.807869 kernel: clocksource: Switched to clocksource arch_sys_counter May 27 02:45:34.807876 kernel: VFS: Disk quotas dquot_6.6.0 May 27 02:45:34.807883 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 27 02:45:34.807890 kernel: pnp: PnP ACPI init May 27 02:45:34.807976 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 27 02:45:34.807988 kernel: pnp: PnP ACPI: found 1 devices May 27 02:45:34.807995 kernel: NET: Registered PF_INET protocol family May 27 02:45:34.808002 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 27 02:45:34.808009 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 27 02:45:34.808016 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 27 02:45:34.808025 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 27 02:45:34.808031 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 27 02:45:34.808041 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 27 02:45:34.808048 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 27 02:45:34.808055 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 27 02:45:34.808062 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 27 
02:45:34.808069 kernel: PCI: CLS 0 bytes, default 64 May 27 02:45:34.808075 kernel: kvm [1]: HYP mode not available May 27 02:45:34.808083 kernel: Initialise system trusted keyrings May 27 02:45:34.808090 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 27 02:45:34.808096 kernel: Key type asymmetric registered May 27 02:45:34.808104 kernel: Asymmetric key parser 'x509' registered May 27 02:45:34.808111 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 27 02:45:34.808118 kernel: io scheduler mq-deadline registered May 27 02:45:34.808125 kernel: io scheduler kyber registered May 27 02:45:34.808132 kernel: io scheduler bfq registered May 27 02:45:34.808139 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 27 02:45:34.808146 kernel: ACPI: button: Power Button [PWRB] May 27 02:45:34.808153 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 27 02:45:34.808218 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 27 02:45:34.808229 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 27 02:45:34.808235 kernel: thunder_xcv, ver 1.0 May 27 02:45:34.808242 kernel: thunder_bgx, ver 1.0 May 27 02:45:34.808249 kernel: nicpf, ver 1.0 May 27 02:45:34.808256 kernel: nicvf, ver 1.0 May 27 02:45:34.808327 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 27 02:45:34.808386 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-27T02:45:34 UTC (1748313934) May 27 02:45:34.808395 kernel: hid: raw HID events driver (C) Jiri Kosina May 27 02:45:34.808404 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available May 27 02:45:34.808412 kernel: watchdog: NMI not fully supported May 27 02:45:34.808419 kernel: watchdog: Hard watchdog permanently disabled May 27 02:45:34.808426 kernel: NET: Registered PF_INET6 protocol family May 27 02:45:34.808432 kernel: Segment Routing with IPv6 May 27 02:45:34.808439 kernel: In-situ OAM (IOAM) with IPv6 May 27 02:45:34.808447 kernel: NET: Registered PF_PACKET protocol family May 27 02:45:34.808454 kernel: Key type dns_resolver registered May 27 02:45:34.808461 kernel: registered taskstats version 1 May 27 02:45:34.808468 kernel: Loading compiled-in X.509 certificates May 27 02:45:34.808476 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: 6bbf5412ef1f8a32378a640b6d048f74e6d74df0' May 27 02:45:34.808483 kernel: Demotion targets for Node 0: null May 27 02:45:34.808490 kernel: Key type .fscrypt registered May 27 02:45:34.808497 kernel: Key type fscrypt-provisioning registered May 27 02:45:34.808504 kernel: ima: No TPM chip found, activating TPM-bypass! May 27 02:45:34.808511 kernel: ima: Allocated hash algorithm: sha1 May 27 02:45:34.808517 kernel: ima: No architecture policies found May 27 02:45:34.808524 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 27 02:45:34.808533 kernel: clk: Disabling unused clocks May 27 02:45:34.808540 kernel: PM: genpd: Disabling unused power domains May 27 02:45:34.808546 kernel: Warning: unable to open an initial console. 
May 27 02:45:34.808554 kernel: Freeing unused kernel memory: 39424K May 27 02:45:34.808561 kernel: Run /init as init process May 27 02:45:34.808567 kernel: with arguments: May 27 02:45:34.808574 kernel: /init May 27 02:45:34.808581 kernel: with environment: May 27 02:45:34.808588 kernel: HOME=/ May 27 02:45:34.808596 kernel: TERM=linux May 27 02:45:34.808603 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 27 02:45:34.808612 systemd[1]: Successfully made /usr/ read-only. May 27 02:45:34.808621 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 02:45:34.808629 systemd[1]: Detected virtualization kvm. May 27 02:45:34.808636 systemd[1]: Detected architecture arm64. May 27 02:45:34.808643 systemd[1]: Running in initrd. May 27 02:45:34.808650 systemd[1]: No hostname configured, using default hostname. May 27 02:45:34.808666 systemd[1]: Hostname set to . May 27 02:45:34.808673 systemd[1]: Initializing machine ID from VM UUID. May 27 02:45:34.808681 systemd[1]: Queued start job for default target initrd.target. May 27 02:45:34.808688 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 02:45:34.808696 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 02:45:34.808704 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 27 02:45:34.808712 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 02:45:34.808719 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 27 02:45:34.808729 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 27 02:45:34.808738 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 27 02:45:34.808745 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 27 02:45:34.808753 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 02:45:34.808761 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 02:45:34.808768 systemd[1]: Reached target paths.target - Path Units. May 27 02:45:34.808777 systemd[1]: Reached target slices.target - Slice Units. May 27 02:45:34.808785 systemd[1]: Reached target swap.target - Swaps. May 27 02:45:34.808792 systemd[1]: Reached target timers.target - Timer Units. May 27 02:45:34.808799 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 27 02:45:34.808807 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 02:45:34.808814 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 27 02:45:34.808822 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 27 02:45:34.808829 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 02:45:34.808837 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 02:45:34.808845 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 27 02:45:34.808853 systemd[1]: Reached target sockets.target - Socket Units. May 27 02:45:34.808860 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 27 02:45:34.808868 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 02:45:34.808875 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 27 02:45:34.808883 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 27 02:45:34.808891 systemd[1]: Starting systemd-fsck-usr.service... May 27 02:45:34.808898 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 02:45:34.808907 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 02:45:34.808915 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 02:45:34.808922 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 02:45:34.808930 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 27 02:45:34.808952 systemd[1]: Finished systemd-fsck-usr.service. May 27 02:45:34.808962 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 02:45:34.808986 systemd-journald[244]: Collecting audit messages is disabled. May 27 02:45:34.809005 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 02:45:34.809014 systemd-journald[244]: Journal started May 27 02:45:34.809034 systemd-journald[244]: Runtime Journal (/run/log/journal/c2c48e6077244b36b612ebfcff59e149) is 6M, max 48.5M, 42.4M free. May 27 02:45:34.813987 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 27 02:45:34.800603 systemd-modules-load[245]: Inserted module 'overlay' May 27 02:45:34.817702 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 27 02:45:34.818531 systemd-modules-load[245]: Inserted module 'br_netfilter' May 27 02:45:34.819345 kernel: Bridge firewalling registered May 27 02:45:34.822219 systemd[1]: Started systemd-journald.service - Journal Service. May 27 02:45:34.826083 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 02:45:34.827256 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 02:45:34.831466 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 02:45:34.832823 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 02:45:34.838762 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 02:45:34.846374 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 02:45:34.847132 systemd-tmpfiles[275]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 27 02:45:34.848185 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 02:45:34.849115 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 02:45:34.850984 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 02:45:34.854506 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
May 27 02:45:34.856742 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 02:45:34.875435 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=4c3f98aae7a61b3dcbab6391ba922461adab29dbcb79fd6e18169f93c5a4ab5a May 27 02:45:34.891758 systemd-resolved[291]: Positive Trust Anchors: May 27 02:45:34.891775 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 02:45:34.891807 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 02:45:34.899140 systemd-resolved[291]: Defaulting to hostname 'linux'. May 27 02:45:34.900281 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 02:45:34.901626 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 02:45:34.959961 kernel: SCSI subsystem initialized May 27 02:45:34.966953 kernel: Loading iSCSI transport class v2.0-870. May 27 02:45:34.974960 kernel: iscsi: registered transport (tcp) May 27 02:45:34.987960 kernel: iscsi: registered transport (qla4xxx) May 27 02:45:34.987979 kernel: QLogic iSCSI HBA Driver May 27 02:45:35.005528 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 02:45:35.019394 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 02:45:35.021970 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 02:45:35.064888 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 27 02:45:35.067069 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 27 02:45:35.132979 kernel: raid6: neonx8 gen() 15776 MB/s May 27 02:45:35.149961 kernel: raid6: neonx4 gen() 15799 MB/s May 27 02:45:35.166954 kernel: raid6: neonx2 gen() 13170 MB/s May 27 02:45:35.183958 kernel: raid6: neonx1 gen() 10513 MB/s May 27 02:45:35.200961 kernel: raid6: int64x8 gen() 6890 MB/s May 27 02:45:35.217956 kernel: raid6: int64x4 gen() 7344 MB/s May 27 02:45:35.234957 kernel: raid6: int64x2 gen() 6101 MB/s May 27 02:45:35.252121 kernel: raid6: int64x1 gen() 5052 MB/s May 27 02:45:35.252141 kernel: raid6: using algorithm neonx4 gen() 15799 MB/s
May 27 02:45:35.270168 kernel: raid6: .... xor() 12358 MB/s, rmw enabled May 27 02:45:35.270185 kernel: raid6: using neon recovery algorithm May 27 02:45:35.275961 kernel: xor: measuring software checksum speed May 27 02:45:35.275978 kernel: 8regs : 21641 MB/sec May 27 02:45:35.277209 kernel: 32regs : 18345 MB/sec May 27 02:45:35.277222 kernel: arm64_neon : 27927 MB/sec May 27 02:45:35.277231 kernel: xor: using function: arm64_neon (27927 MB/sec) May 27 02:45:35.334965 kernel: Btrfs loaded, zoned=no, fsverity=no May 27 02:45:35.341335 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 27 02:45:35.343764 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 02:45:35.370909 systemd-udevd[502]: Using default interface naming scheme 'v255'. May 27 02:45:35.375155 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 02:45:35.377550 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 27 02:45:35.409020 dracut-pre-trigger[511]: rd.md=0: removing MD RAID activation May 27 02:45:35.432982 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 27 02:45:35.435280 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 02:45:35.493113 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 02:45:35.498239 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 27 02:45:35.539922 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 27 02:45:35.544016 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 27 02:45:35.550423 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 27 02:45:35.550479 kernel: GPT:9289727 != 19775487 May 27 02:45:35.550490 kernel: GPT:Alternate GPT header not at the end of the disk. May 27 02:45:35.550499 kernel: GPT:9289727 != 19775487 May 27 02:45:35.551586 kernel: GPT: Use GNU Parted to correct GPT errors. May 27 02:45:35.551625 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 02:45:35.556378 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 02:45:35.556530 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 02:45:35.559089 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 27 02:45:35.561196 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 02:45:35.586150 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 27 02:45:35.588232 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 02:45:35.596976 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 27 02:45:35.611870 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 27 02:45:35.619034 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 27 02:45:35.619926 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 27 02:45:35.630008 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 27 02:45:35.630904 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 27 02:45:35.632407 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 02:45:35.633880 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 02:45:35.636078 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 27 02:45:35.637550 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 27 02:45:35.654089 disk-uuid[595]: Primary Header is updated. May 27 02:45:35.654089 disk-uuid[595]: Secondary Entries is updated. May 27 02:45:35.654089 disk-uuid[595]: Secondary Header is updated. May 27 02:45:35.657951 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 02:45:35.660123 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 27 02:45:36.673968 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 02:45:36.674258 disk-uuid[600]: The operation has completed successfully. May 27 02:45:36.695857 systemd[1]: disk-uuid.service: Deactivated successfully. May 27 02:45:36.695979 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 27 02:45:36.733755 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 27 02:45:36.766266 sh[614]: Success May 27 02:45:36.783675 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 27 02:45:36.785571 kernel: device-mapper: uevent: version 1.0.3 May 27 02:45:36.785621 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 27 02:45:36.794224 kernel: device-mapper: verity: sha256 using shash "sha256-ce" May 27 02:45:36.820314 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 27 02:45:36.822901 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 27 02:45:36.839994 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 27 02:45:36.848872 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 27 02:45:36.848909 kernel: BTRFS: device fsid 5c6341ea-4eb5-44b6-ac57-c4d29847e384 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (626) May 27 02:45:36.851968 kernel: BTRFS info (device dm-0): first mount of filesystem 5c6341ea-4eb5-44b6-ac57-c4d29847e384 May 27 02:45:36.852009 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 27 02:45:36.852020 kernel: BTRFS info (device dm-0): using free-space-tree May 27 02:45:36.862258 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 27 02:45:36.863077 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 27 02:45:36.864403 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 27 02:45:36.865111 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 27 02:45:36.867600 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 27 02:45:36.893414 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (657) May 27 02:45:36.893467 kernel: BTRFS info (device vda6): first mount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9 May 27 02:45:36.894533 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 27 02:45:36.894561 kernel: BTRFS info (device vda6): using free-space-tree May 27 02:45:36.901950 kernel: BTRFS info (device vda6): last unmount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9 May 27 02:45:36.903882 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 27 02:45:36.906551 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 27 02:45:36.967797 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 02:45:36.972769 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 02:45:37.013365 systemd-networkd[801]: lo: Link UP May 27 02:45:37.013380 systemd-networkd[801]: lo: Gained carrier May 27 02:45:37.014126 systemd-networkd[801]: Enumeration completed May 27 02:45:37.014423 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 02:45:37.014597 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 02:45:37.014600 systemd-networkd[801]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 02:45:37.015120 systemd-networkd[801]: eth0: Link UP May 27 02:45:37.015123 systemd-networkd[801]: eth0: Gained carrier May 27 02:45:37.015130 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 02:45:37.015840 systemd[1]: Reached target network.target - Network. May 27 02:45:37.033019 ignition[712]: Ignition 2.21.0 May 27 02:45:37.033033 ignition[712]: Stage: fetch-offline May 27 02:45:37.033068 ignition[712]: no configs at "/usr/lib/ignition/base.d" May 27 02:45:37.033075 ignition[712]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 02:45:37.033261 ignition[712]: parsed url from cmdline: "" May 27 02:45:37.033264 ignition[712]: no config URL provided May 27 02:45:37.033268 ignition[712]: reading system config file "/usr/lib/ignition/user.ign" May 27 02:45:37.033275 ignition[712]: no config at "/usr/lib/ignition/user.ign" May 27 02:45:37.033296 ignition[712]: op(1): [started] loading QEMU firmware config module May 27 02:45:37.033300 ignition[712]: op(1): executing: "modprobe" "qemu_fw_cfg" May 27 02:45:37.039367 systemd-networkd[801]: eth0: DHCPv4 address 10.0.0.44/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 27 02:45:37.044306 ignition[712]: op(1): [finished] loading QEMU firmware config module May 27 02:45:37.081273 ignition[712]: parsing config with SHA512: 0c2b1280e734ea0d397aac2b40131d5b755c6dbdb78b38152b00c7aced5ff1634b5c17a3d66f763ce6ade391d98239bacaafe2d66b7c59e8db07e689b85e1344 May 27 02:45:37.086776 unknown[712]: fetched base config from "system" May 27 02:45:37.086788 unknown[712]: fetched user config from "qemu" May 27 02:45:37.087209 ignition[712]: fetch-offline: fetch-offline passed May 27 02:45:37.087353 systemd-resolved[291]: Detected conflict on linux IN A 10.0.0.44 May 27 02:45:37.087265 ignition[712]: Ignition finished successfully May 27 02:45:37.087360 systemd-resolved[291]: Hostname conflict, changing published hostname from 'linux' to 'linux2'. 
May 27 02:45:37.089196 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 27 02:45:37.091203 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 27 02:45:37.093108 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 27 02:45:37.115603 ignition[813]: Ignition 2.21.0 May 27 02:45:37.115620 ignition[813]: Stage: kargs May 27 02:45:37.115775 ignition[813]: no configs at "/usr/lib/ignition/base.d" May 27 02:45:37.115784 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 02:45:37.117640 ignition[813]: kargs: kargs passed May 27 02:45:37.117705 ignition[813]: Ignition finished successfully May 27 02:45:37.120866 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 27 02:45:37.122782 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 27 02:45:37.147802 ignition[821]: Ignition 2.21.0 May 27 02:45:37.147817 ignition[821]: Stage: disks May 27 02:45:37.147959 ignition[821]: no configs at "/usr/lib/ignition/base.d" May 27 02:45:37.147967 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 02:45:37.150155 ignition[821]: disks: disks passed May 27 02:45:37.150204 ignition[821]: Ignition finished successfully May 27 02:45:37.151683 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 27 02:45:37.153037 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 27 02:45:37.154159 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 27 02:45:37.155928 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 02:45:37.157777 systemd[1]: Reached target sysinit.target - System Initialization. May 27 02:45:37.159434 systemd[1]: Reached target basic.target - Basic System. May 27 02:45:37.161807 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 27 02:45:37.187276 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 27 02:45:37.195829 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 27 02:45:37.198042 systemd[1]: Mounting sysroot.mount - /sysroot... May 27 02:45:37.277953 kernel: EXT4-fs (vda9): mounted filesystem 5656cec4-efbd-4a2d-be98-2263e6ae16bd r/w with ordered data mode. Quota mode: none. May 27 02:45:37.278116 systemd[1]: Mounted sysroot.mount - /sysroot. May 27 02:45:37.279230 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 27 02:45:37.281532 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 02:45:37.283178 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 27 02:45:37.284079 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 27 02:45:37.284124 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 27 02:45:37.284162 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 27 02:45:37.301515 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 27 02:45:37.304077 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
May 27 02:45:37.308696 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (839) May 27 02:45:37.308724 kernel: BTRFS info (device vda6): first mount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9 May 27 02:45:37.308734 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 27 02:45:37.310558 kernel: BTRFS info (device vda6): using free-space-tree May 27 02:45:37.314406 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 02:45:37.349796 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory May 27 02:45:37.353717 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory May 27 02:45:37.358307 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory May 27 02:45:37.361596 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory May 27 02:45:37.520305 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 27 02:45:37.522434 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 27 02:45:37.523878 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 27 02:45:37.543965 kernel: BTRFS info (device vda6): last unmount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9 May 27 02:45:37.567586 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 27 02:45:37.575953 ignition[952]: INFO : Ignition 2.21.0 May 27 02:45:37.575953 ignition[952]: INFO : Stage: mount May 27 02:45:37.575953 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 02:45:37.575953 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 02:45:37.579105 ignition[952]: INFO : mount: mount passed May 27 02:45:37.579105 ignition[952]: INFO : Ignition finished successfully May 27 02:45:37.578958 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 27 02:45:37.580881 systemd[1]: Starting ignition-files.service - Ignition (files)... May 27 02:45:37.848837 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 27 02:45:37.850429 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 02:45:37.880304 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (965) May 27 02:45:37.880357 kernel: BTRFS info (device vda6): first mount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9 May 27 02:45:37.880369 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 27 02:45:37.881968 kernel: BTRFS info (device vda6): using free-space-tree May 27 02:45:37.884977 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 27 02:45:37.914258 ignition[982]: INFO : Ignition 2.21.0 May 27 02:45:37.914258 ignition[982]: INFO : Stage: files May 27 02:45:37.916579 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 02:45:37.916579 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 02:45:37.916579 ignition[982]: DEBUG : files: compiled without relabeling support, skipping May 27 02:45:37.919701 ignition[982]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 27 02:45:37.919701 ignition[982]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 27 02:45:37.919701 ignition[982]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 27 02:45:37.919701 ignition[982]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 27 02:45:37.924677 ignition[982]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 27 02:45:37.924677 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" May 27 02:45:37.924677 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 May 27 02:45:37.919788 unknown[982]: wrote ssh authorized keys file for user: core May 27 02:45:37.979948 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 27 02:45:38.225990 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" May 27 02:45:38.225990 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 27 02:45:38.228569 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 27 02:45:38.442076 systemd-networkd[801]: eth0: Gained IPv6LL May 27 02:45:38.508735 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 27 02:45:38.682502 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 27 02:45:38.682502 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 27 02:45:38.686106 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 27 02:45:38.686106 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 27 02:45:38.686106 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 27 02:45:38.686106 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 02:45:38.686106 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 02:45:38.686106 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 02:45:38.686106 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" May 27 02:45:38.686106 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 27 02:45:38.686106 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 27 02:45:38.686106 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 27 02:45:38.701795 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 27 02:45:38.701795 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 27 02:45:38.701795 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 May 27 02:45:39.107519 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 27 02:45:39.619198 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 27 02:45:39.619198 ignition[982]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 27 02:45:39.622654 ignition[982]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 02:45:39.622654 ignition[982]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 02:45:39.622654 ignition[982]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 27 02:45:39.622654 ignition[982]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 27 02:45:39.622654 ignition[982]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 27 02:45:39.622654 ignition[982]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 27 02:45:39.622654 ignition[982]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 27 02:45:39.622654 ignition[982]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 27 02:45:39.636829 ignition[982]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 27 02:45:39.640444 ignition[982]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 27 02:45:39.641746 ignition[982]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 27 02:45:39.641746 ignition[982]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 27 02:45:39.641746 ignition[982]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 27 02:45:39.641746 ignition[982]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 27 02:45:39.641746 ignition[982]: INFO : files: 
createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 27 02:45:39.641746 ignition[982]: INFO : files: files passed May 27 02:45:39.641746 ignition[982]: INFO : Ignition finished successfully May 27 02:45:39.642914 systemd[1]: Finished ignition-files.service - Ignition (files). May 27 02:45:39.645248 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 27 02:45:39.646912 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 27 02:45:39.657096 systemd[1]: ignition-quench.service: Deactivated successfully. May 27 02:45:39.657372 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 27 02:45:39.660415 initrd-setup-root-after-ignition[1012]: grep: /sysroot/oem/oem-release: No such file or directory May 27 02:45:39.661722 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 02:45:39.661722 initrd-setup-root-after-ignition[1014]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 27 02:45:39.664628 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 02:45:39.665327 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 02:45:39.667169 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 27 02:45:39.670849 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 27 02:45:39.726467 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 27 02:45:39.726605 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 27 02:45:39.728614 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 27 02:45:39.730190 systemd[1]: Reached target initrd.target - Initrd Default Target. May 27 02:45:39.731877 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 27 02:45:39.732795 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 27 02:45:39.768713 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 02:45:39.771081 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 27 02:45:39.799832 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 27 02:45:39.800894 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 02:45:39.802706 systemd[1]: Stopped target timers.target - Timer Units. May 27 02:45:39.804281 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 27 02:45:39.804421 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 02:45:39.806722 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 27 02:45:39.808367 systemd[1]: Stopped target basic.target - Basic System. May 27 02:45:39.809662 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 27 02:45:39.810991 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 27 02:45:39.812522 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 27 02:45:39.814066 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
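
The op(…) entries in the Ignition "files" stage above map directly onto sections of the Ignition config the VM was booted with: storage.files for the helm, cilium and kubernetes sysext downloads, storage.links for the /etc/extensions symlink, passwd.users for the "core" SSH keys, and systemd.units for the prepare-helm/coreos-metadata presets. The real config is not reproduced in this log; purely as an illustration, with field names assumed from the Ignition v3 spec rather than taken from this machine, a comparable fragment could be assembled like this:

import json

# Illustrative sketch only: an Ignition-v3-style fragment whose storage and
# systemd sections would yield file/link/preset operations like the ones
# logged above.  Paths and URLs are copied from the log; everything else is
# an assumption, not the node's actual configuration.
config = {
    "ignition": {"version": "3.4.0"},
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.17.3-linux-arm64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz"}},
            {"path": "/opt/bin/cilium.tar.gz",
             "contents": {"source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz"}},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw",
             "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"},
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True},      # preset enabled in op(12)
            {"name": "coreos-metadata.service", "enabled": False},  # preset disabled in op(10)
        ],
    },
}

print(json.dumps(config, indent=2))

Ignition runs from the initramfs against the not-yet-switched-to root, which is why every destination in the log is prefixed with /sysroot.
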
May 27 02:45:39.815631 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 27 02:45:39.817163 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 27 02:45:39.818740 systemd[1]: Stopped target sysinit.target - System Initialization. May 27 02:45:39.820374 systemd[1]: Stopped target local-fs.target - Local File Systems. May 27 02:45:39.821734 systemd[1]: Stopped target swap.target - Swaps. May 27 02:45:39.823036 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 27 02:45:39.823175 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 27 02:45:39.825241 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 27 02:45:39.826774 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 02:45:39.828502 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 27 02:45:39.833009 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 02:45:39.833973 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 27 02:45:39.834107 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 27 02:45:39.836589 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 27 02:45:39.836730 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 27 02:45:39.838503 systemd[1]: Stopped target paths.target - Path Units. May 27 02:45:39.839820 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 27 02:45:39.843007 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 02:45:39.843992 systemd[1]: Stopped target slices.target - Slice Units. May 27 02:45:39.845760 systemd[1]: Stopped target sockets.target - Socket Units. May 27 02:45:39.847028 systemd[1]: iscsid.socket: Deactivated successfully. May 27 02:45:39.847119 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 27 02:45:39.848399 systemd[1]: iscsiuio.socket: Deactivated successfully. May 27 02:45:39.848478 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 02:45:39.849762 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 27 02:45:39.849888 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 02:45:39.851298 systemd[1]: ignition-files.service: Deactivated successfully. May 27 02:45:39.851401 systemd[1]: Stopped ignition-files.service - Ignition (files). May 27 02:45:39.853489 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 27 02:45:39.855684 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 27 02:45:39.856738 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 27 02:45:39.856864 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 27 02:45:39.858563 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 27 02:45:39.858675 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 27 02:45:39.863725 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 27 02:45:39.868097 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 27 02:45:39.877540 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 27 02:45:39.884374 systemd[1]: sysroot-boot.service: Deactivated successfully. 
May 27 02:45:39.885337 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 27 02:45:39.890026 ignition[1038]: INFO : Ignition 2.21.0 May 27 02:45:39.890026 ignition[1038]: INFO : Stage: umount May 27 02:45:39.891585 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 02:45:39.891585 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 02:45:39.893358 ignition[1038]: INFO : umount: umount passed May 27 02:45:39.893358 ignition[1038]: INFO : Ignition finished successfully May 27 02:45:39.893917 systemd[1]: ignition-mount.service: Deactivated successfully. May 27 02:45:39.895815 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 27 02:45:39.897833 systemd[1]: Stopped target network.target - Network. May 27 02:45:39.898587 systemd[1]: ignition-disks.service: Deactivated successfully. May 27 02:45:39.898659 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 27 02:45:39.899873 systemd[1]: ignition-kargs.service: Deactivated successfully. May 27 02:45:39.899916 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 27 02:45:39.901227 systemd[1]: ignition-setup.service: Deactivated successfully. May 27 02:45:39.901270 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 27 02:45:39.902538 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 27 02:45:39.902576 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 27 02:45:39.904027 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 27 02:45:39.904084 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 27 02:45:39.905573 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 27 02:45:39.906894 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 27 02:45:39.914536 systemd[1]: systemd-resolved.service: Deactivated successfully. May 27 02:45:39.914657 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 27 02:45:39.918342 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 27 02:45:39.919053 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 27 02:45:39.919146 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 02:45:39.922585 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 27 02:45:39.922805 systemd[1]: systemd-networkd.service: Deactivated successfully. May 27 02:45:39.923669 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 27 02:45:39.926126 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 27 02:45:39.926564 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 27 02:45:39.928374 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 27 02:45:39.928416 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 27 02:45:39.930903 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 27 02:45:39.932442 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 27 02:45:39.932499 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 02:45:39.934328 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 27 02:45:39.934382 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
May 27 02:45:39.936983 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 27 02:45:39.937032 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 27 02:45:39.938563 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 02:45:39.942205 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 27 02:45:39.951587 systemd[1]: systemd-udevd.service: Deactivated successfully. May 27 02:45:39.960118 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 02:45:39.962367 systemd[1]: network-cleanup.service: Deactivated successfully. May 27 02:45:39.962473 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 27 02:45:39.964417 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 27 02:45:39.964492 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 27 02:45:39.965334 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 27 02:45:39.965364 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 27 02:45:39.966740 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 27 02:45:39.966789 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 27 02:45:39.969539 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 27 02:45:39.969595 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 27 02:45:39.971914 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 27 02:45:39.971978 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 02:45:39.975513 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 27 02:45:39.976527 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 27 02:45:39.976585 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 27 02:45:39.980670 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 27 02:45:39.980720 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 02:45:39.983827 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 27 02:45:39.983869 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 02:45:39.986855 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 27 02:45:39.986898 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 27 02:45:39.988830 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 02:45:39.988878 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 02:45:39.992373 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 27 02:45:39.992471 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 27 02:45:39.994127 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 27 02:45:39.996236 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 27 02:45:40.019227 systemd[1]: Switching root. May 27 02:45:40.051782 systemd-journald[244]: Journal stopped May 27 02:45:40.891172 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). 
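
From the first "files" stage entry at 02:45:37.914258 to the journal stopping for the root switch at 02:45:40.051782 is a little over two seconds. A throwaway sketch of that arithmetic, with both timestamps copied from the log above:

from datetime import datetime

# Timestamps taken from the log lines above (the date only matters for parsing).
files_stage_start = datetime.strptime("2025-05-27 02:45:37.914258", "%Y-%m-%d %H:%M:%S.%f")
journal_stopped   = datetime.strptime("2025-05-27 02:45:40.051782", "%Y-%m-%d %H:%M:%S.%f")

elapsed = journal_stopped - files_stage_start
print(f"Ignition files stage through switch-root handoff: {elapsed.total_seconds():.3f} s")  # ~2.138 s
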
May 27 02:45:40.891232 kernel: SELinux: policy capability network_peer_controls=1 May 27 02:45:40.891244 kernel: SELinux: policy capability open_perms=1 May 27 02:45:40.891254 kernel: SELinux: policy capability extended_socket_class=1 May 27 02:45:40.891263 kernel: SELinux: policy capability always_check_network=0 May 27 02:45:40.891278 kernel: SELinux: policy capability cgroup_seclabel=1 May 27 02:45:40.891288 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 27 02:45:40.891297 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 27 02:45:40.891307 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 27 02:45:40.891316 kernel: SELinux: policy capability userspace_initial_context=0 May 27 02:45:40.891327 kernel: audit: type=1403 audit(1748313940.230:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 27 02:45:40.891340 systemd[1]: Successfully loaded SELinux policy in 31.978ms. May 27 02:45:40.891356 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.369ms. May 27 02:45:40.891367 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 02:45:40.891377 systemd[1]: Detected virtualization kvm. May 27 02:45:40.891387 systemd[1]: Detected architecture arm64. May 27 02:45:40.891397 systemd[1]: Detected first boot. May 27 02:45:40.891407 systemd[1]: Initializing machine ID from VM UUID. May 27 02:45:40.891417 zram_generator::config[1085]: No configuration found. May 27 02:45:40.891429 kernel: NET: Registered PF_VSOCK protocol family May 27 02:45:40.891438 systemd[1]: Populated /etc with preset unit settings. May 27 02:45:40.891448 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 27 02:45:40.891458 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 27 02:45:40.891475 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 27 02:45:40.891484 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 27 02:45:40.891495 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 27 02:45:40.891505 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 27 02:45:40.891516 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 27 02:45:40.891525 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 27 02:45:40.891535 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 27 02:45:40.891545 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 27 02:45:40.891556 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 27 02:45:40.891566 systemd[1]: Created slice user.slice - User and Session Slice. May 27 02:45:40.891576 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 02:45:40.891586 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 02:45:40.891596 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
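
The long (+PAM +AUDIT … +LIBARCHIVE) string in the systemd 256.8 banner above is a compile-time feature list: a leading "+" marks features built in, "-" marks features compiled out. A small sketch that splits it, using an abbreviated copy of the logged string:

# Split a systemd feature banner into enabled/disabled compile-time options.
# The string below is a shortened copy of the one logged above.
banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS "
          "+OPENSSL -ACL +TPM2 +ZSTD -BPF_FRAMEWORK -SYSVINIT +LIBARCHIVE")

enabled  = [flag[1:] for flag in banner.split() if flag.startswith("+")]
disabled = [flag[1:] for flag in banner.split() if flag.startswith("-")]

print("built in:    ", ", ".join(enabled))
print("compiled out:", ", ".join(disabled))
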
May 27 02:45:40.891608 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 27 02:45:40.891619 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 27 02:45:40.891629 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 02:45:40.891648 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 27 02:45:40.891660 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 02:45:40.891671 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 02:45:40.891681 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 27 02:45:40.891693 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 27 02:45:40.891704 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 27 02:45:40.891714 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 27 02:45:40.891724 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 02:45:40.891734 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 02:45:40.891744 systemd[1]: Reached target slices.target - Slice Units. May 27 02:45:40.891754 systemd[1]: Reached target swap.target - Swaps. May 27 02:45:40.891764 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 27 02:45:40.891775 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 27 02:45:40.891787 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 27 02:45:40.891797 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 02:45:40.891807 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 02:45:40.891817 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 02:45:40.891827 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 27 02:45:40.891839 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 27 02:45:40.891849 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 27 02:45:40.891862 systemd[1]: Mounting media.mount - External Media Directory... May 27 02:45:40.891874 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 27 02:45:40.891886 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 27 02:45:40.891896 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 27 02:45:40.891906 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 27 02:45:40.891916 systemd[1]: Reached target machines.target - Containers. May 27 02:45:40.891927 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 27 02:45:40.891982 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 02:45:40.891994 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 02:45:40.892005 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 27 02:45:40.892016 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
May 27 02:45:40.892028 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 02:45:40.892038 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 02:45:40.892049 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 27 02:45:40.892059 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 02:45:40.892069 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 27 02:45:40.892079 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 27 02:45:40.892090 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 27 02:45:40.892100 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 27 02:45:40.892116 systemd[1]: Stopped systemd-fsck-usr.service. May 27 02:45:40.892126 kernel: fuse: init (API version 7.41) May 27 02:45:40.892137 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 02:45:40.892147 kernel: ACPI: bus type drm_connector registered May 27 02:45:40.892157 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 02:45:40.892167 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 02:45:40.892176 kernel: loop: module loaded May 27 02:45:40.892186 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 02:45:40.892196 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 27 02:45:40.892208 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 27 02:45:40.892218 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 02:45:40.892228 systemd[1]: verity-setup.service: Deactivated successfully. May 27 02:45:40.892238 systemd[1]: Stopped verity-setup.service. May 27 02:45:40.892248 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 27 02:45:40.892259 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 27 02:45:40.892270 systemd[1]: Mounted media.mount - External Media Directory. May 27 02:45:40.892280 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 27 02:45:40.892290 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 27 02:45:40.892326 systemd-journald[1164]: Collecting audit messages is disabled. May 27 02:45:40.892357 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 27 02:45:40.892371 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 27 02:45:40.892388 systemd-journald[1164]: Journal started May 27 02:45:40.892409 systemd-journald[1164]: Runtime Journal (/run/log/journal/c2c48e6077244b36b612ebfcff59e149) is 6M, max 48.5M, 42.4M free. May 27 02:45:40.654209 systemd[1]: Queued start job for default target multi-user.target. May 27 02:45:40.676627 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 27 02:45:40.677038 systemd[1]: systemd-journald.service: Deactivated successfully. May 27 02:45:40.897512 systemd[1]: Started systemd-journald.service - Journal Service. 
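
The journald status line above reports the runtime journal under /run at 6M used against a 48.5M cap with 42.4M still free. If that line ever needs to be picked apart mechanically (for monitoring, say), a minimal sketch, assuming journald keeps this exact phrasing:

import re

# Status line copied from the log above; the regex is an assumption about
# journald's phrasing, not an API.
line = ("Runtime Journal (/run/log/journal/c2c48e6077244b36b612ebfcff59e149) "
        "is 6M, max 48.5M, 42.4M free.")

match = re.search(r"is (?P<used>[\d.]+M), max (?P<max>[\d.]+M), (?P<free>[\d.]+M) free", line)
if match:
    print(match.groupdict())  # {'used': '6M', 'max': '48.5M', 'free': '42.4M'}
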
May 27 02:45:40.898332 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 02:45:40.899512 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 27 02:45:40.900958 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 27 02:45:40.902136 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 02:45:40.902306 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 02:45:40.903436 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 02:45:40.904978 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 02:45:40.906050 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 02:45:40.906201 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 02:45:40.907401 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 27 02:45:40.907566 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 27 02:45:40.908767 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 02:45:40.908915 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 02:45:40.910042 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 02:45:40.911196 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 02:45:40.912624 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 27 02:45:40.914003 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 27 02:45:40.926368 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 02:45:40.928691 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 27 02:45:40.930704 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 27 02:45:40.931630 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 27 02:45:40.931668 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 02:45:40.933379 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 27 02:45:40.943146 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 27 02:45:40.944073 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 02:45:40.945374 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 27 02:45:40.947334 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 27 02:45:40.948265 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 02:45:40.950693 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 27 02:45:40.951649 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 02:45:40.952683 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 02:45:40.956777 systemd-journald[1164]: Time spent on flushing to /var/log/journal/c2c48e6077244b36b612ebfcff59e149 is 24.499ms for 888 entries. 
May 27 02:45:40.956777 systemd-journald[1164]: System Journal (/var/log/journal/c2c48e6077244b36b612ebfcff59e149) is 8M, max 195.6M, 187.6M free. May 27 02:45:40.993685 systemd-journald[1164]: Received client request to flush runtime journal. May 27 02:45:40.993724 kernel: loop0: detected capacity change from 0 to 211168 May 27 02:45:40.958089 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 27 02:45:40.960158 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 02:45:40.963447 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 02:45:40.964978 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 27 02:45:40.968129 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 27 02:45:40.976398 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 27 02:45:40.977647 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 27 02:45:40.979764 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 27 02:45:40.983249 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 02:45:40.996753 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 27 02:45:41.001964 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 27 02:45:41.006251 systemd-tmpfiles[1203]: ACLs are not supported, ignoring. May 27 02:45:41.006267 systemd-tmpfiles[1203]: ACLs are not supported, ignoring. May 27 02:45:41.011074 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 02:45:41.013889 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 27 02:45:41.022076 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 27 02:45:41.032956 kernel: loop1: detected capacity change from 0 to 107312 May 27 02:45:41.040367 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 27 02:45:41.042817 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 02:45:41.061964 kernel: loop2: detected capacity change from 0 to 138376 May 27 02:45:41.068242 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. May 27 02:45:41.068532 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. May 27 02:45:41.073321 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 02:45:41.088961 kernel: loop3: detected capacity change from 0 to 211168 May 27 02:45:41.095969 kernel: loop4: detected capacity change from 0 to 107312 May 27 02:45:41.101017 kernel: loop5: detected capacity change from 0 to 138376 May 27 02:45:41.107483 (sd-merge)[1226]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 27 02:45:41.107918 (sd-merge)[1226]: Merged extensions into '/usr'. May 27 02:45:41.111452 systemd[1]: Reload requested from client PID 1201 ('systemd-sysext') (unit systemd-sysext.service)... May 27 02:45:41.111469 systemd[1]: Reloading... May 27 02:45:41.177177 zram_generator::config[1252]: No configuration found. May 27 02:45:41.237044 ldconfig[1196]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
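
The (sd-merge) entries above close the loop on the Ignition stage: the kubernetes-v1.33.0-arm64.raw image written under /opt/extensions and symlinked into /etc/extensions is discovered by systemd-sysext and merged into /usr alongside the stock containerd-flatcar and docker-flatcar extensions, after which systemd reloads its units. A rough sketch of the discovery half of that, where the set of search directories is an assumption and nothing is actually merged:

from pathlib import Path

# Enumerate candidate sysext images.  Only /etc/extensions and the kubernetes
# image name come from the log; the other directories are assumed for
# illustration.
search_dirs = [Path("/etc/extensions"), Path("/run/extensions"), Path("/var/lib/extensions")]

for directory in search_dirs:
    if not directory.is_dir():
        continue
    for image in sorted(directory.glob("*.raw")):
        # e.g. /etc/extensions/kubernetes.raw resolving to the payload that
        # Ignition downloaded under /opt/extensions/kubernetes/.
        print(f"{image} -> {image.resolve()}")
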
May 27 02:45:41.258319 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 02:45:41.320072 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 27 02:45:41.320329 systemd[1]: Reloading finished in 208 ms. May 27 02:45:41.343520 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 27 02:45:41.344668 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 27 02:45:41.358249 systemd[1]: Starting ensure-sysext.service... May 27 02:45:41.359835 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 02:45:41.370969 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)... May 27 02:45:41.370983 systemd[1]: Reloading... May 27 02:45:41.380760 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 27 02:45:41.380797 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 27 02:45:41.381013 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 27 02:45:41.381190 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 27 02:45:41.381792 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 27 02:45:41.382009 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. May 27 02:45:41.382055 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. May 27 02:45:41.384458 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. May 27 02:45:41.384472 systemd-tmpfiles[1287]: Skipping /boot May 27 02:45:41.393552 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. May 27 02:45:41.393567 systemd-tmpfiles[1287]: Skipping /boot May 27 02:45:41.419993 zram_generator::config[1314]: No configuration found. May 27 02:45:41.505324 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 02:45:41.567226 systemd[1]: Reloading finished in 195 ms. May 27 02:45:41.590808 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 27 02:45:41.596225 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 02:45:41.607080 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 02:45:41.609509 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 27 02:45:41.611574 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 27 02:45:41.617144 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 02:45:41.620494 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 02:45:41.623092 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 27 02:45:41.630369 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
May 27 02:45:41.632733 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 02:45:41.638870 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 02:45:41.641272 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 02:45:41.645403 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 02:45:41.646828 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 02:45:41.646955 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 02:45:41.649962 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 27 02:45:41.651866 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 02:45:41.652094 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 02:45:41.653630 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 02:45:41.653812 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 02:45:41.655714 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 02:45:41.655852 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 02:45:41.665457 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 02:45:41.667282 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 02:45:41.671068 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 02:45:41.673024 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 02:45:41.673963 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 02:45:41.674113 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 02:45:41.681404 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 27 02:45:41.686968 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 27 02:45:41.687699 systemd-udevd[1355]: Using default interface naming scheme 'v255'. May 27 02:45:41.688712 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 02:45:41.688850 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 02:45:41.690581 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 02:45:41.690745 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 02:45:41.692477 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 02:45:41.695728 augenrules[1387]: No rules May 27 02:45:41.702227 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 02:45:41.704612 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 27 02:45:41.706170 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
May 27 02:45:41.709672 systemd[1]: audit-rules.service: Deactivated successfully. May 27 02:45:41.709862 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 02:45:41.714434 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 27 02:45:41.719594 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 27 02:45:41.734498 systemd[1]: Finished ensure-sysext.service. May 27 02:45:41.741597 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 02:45:41.744226 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 02:45:41.746068 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 02:45:41.747757 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 02:45:41.750980 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 02:45:41.758700 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 02:45:41.759686 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 02:45:41.759730 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 02:45:41.762166 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 02:45:41.765127 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 27 02:45:41.765969 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 27 02:45:41.766473 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 02:45:41.767222 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 02:45:41.768387 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 02:45:41.768555 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 02:45:41.771228 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 02:45:41.771389 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 02:45:41.772629 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 02:45:41.772786 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 02:45:41.777512 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 27 02:45:41.779105 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 02:45:41.779162 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 02:45:41.783775 augenrules[1432]: /sbin/augenrules: No change May 27 02:45:41.791871 augenrules[1464]: No rules May 27 02:45:41.794362 systemd[1]: audit-rules.service: Deactivated successfully. May 27 02:45:41.794575 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 02:45:41.848896 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
May 27 02:45:41.851391 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 27 02:45:41.900735 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 27 02:45:41.902316 systemd-resolved[1354]: Positive Trust Anchors: May 27 02:45:41.902331 systemd-resolved[1354]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 02:45:41.902363 systemd-resolved[1354]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 02:45:41.916599 systemd-resolved[1354]: Defaulting to hostname 'linux'. May 27 02:45:41.918166 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 02:45:41.921182 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 02:45:41.939121 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 27 02:45:41.940274 systemd[1]: Reached target sysinit.target - System Initialization. May 27 02:45:41.941184 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 27 02:45:41.942318 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 27 02:45:41.943521 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 27 02:45:41.944538 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 27 02:45:41.944579 systemd[1]: Reached target paths.target - Path Units. May 27 02:45:41.945300 systemd[1]: Reached target time-set.target - System Time Set. May 27 02:45:41.946245 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 27 02:45:41.947492 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 27 02:45:41.948704 systemd[1]: Reached target timers.target - Timer Units. May 27 02:45:41.950774 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 27 02:45:41.953276 systemd[1]: Starting docker.socket - Docker Socket for the API... May 27 02:45:41.957354 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 27 02:45:41.959463 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 27 02:45:41.960493 systemd-networkd[1437]: lo: Link UP May 27 02:45:41.960508 systemd-networkd[1437]: lo: Gained carrier May 27 02:45:41.961507 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 27 02:45:41.962382 systemd-networkd[1437]: Enumeration completed May 27 02:45:41.974718 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 27 02:45:41.975614 systemd-networkd[1437]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
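
The "Positive Trust Anchors" entry above is systemd-resolved loading the root zone's DS record (the 2017 root KSK), and the long "Negative trust anchors" list names the private and reserved zones it will never attempt to validate. Split into its DNSSEC fields purely as a reading aid (the field meanings are standard DNSSEC, not something the log states):

# The DS record logged above, broken into its fields.
root_trust_anchor = {
    "owner": ".",
    "key_tag": 20326,   # root KSK-2017
    "algorithm": 8,     # RSA/SHA-256
    "digest_type": 2,   # SHA-256
    "digest": "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d",
}

# A few of the negative trust anchors from the same entry: lookups under these
# zones are treated as insecure rather than DNSSEC-validated.
negative_anchors = ["home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa", "local", "internal", "lan"]

print(root_trust_anchor["key_tag"], negative_anchors)
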
May 27 02:45:41.975623 systemd-networkd[1437]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 02:45:41.976207 systemd-networkd[1437]: eth0: Link UP May 27 02:45:41.976344 systemd-networkd[1437]: eth0: Gained carrier May 27 02:45:41.976363 systemd-networkd[1437]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 02:45:41.977316 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 27 02:45:41.979982 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 02:45:41.981176 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 27 02:45:41.988499 systemd[1]: Reached target network.target - Network. May 27 02:45:41.989389 systemd[1]: Reached target sockets.target - Socket Units. May 27 02:45:41.990154 systemd[1]: Reached target basic.target - Basic System. May 27 02:45:41.990960 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 27 02:45:41.990996 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 27 02:45:41.992021 systemd-networkd[1437]: eth0: DHCPv4 address 10.0.0.44/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 27 02:45:41.992585 systemd-timesyncd[1439]: Network configuration changed, trying to establish connection. May 27 02:45:41.992780 systemd[1]: Starting containerd.service - containerd container runtime... May 27 02:45:41.994424 systemd-timesyncd[1439]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 27 02:45:41.994477 systemd-timesyncd[1439]: Initial clock synchronization to Tue 2025-05-27 02:45:41.795330 UTC. May 27 02:45:41.994926 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 27 02:45:41.996930 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 27 02:45:42.008080 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 27 02:45:42.011118 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 27 02:45:42.012120 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 27 02:45:42.013265 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 27 02:45:42.016991 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 27 02:45:42.021113 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 27 02:45:42.024157 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 27 02:45:42.024599 jq[1497]: false May 27 02:45:42.027794 systemd[1]: Starting systemd-logind.service - User Login Management... May 27 02:45:42.030235 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 27 02:45:42.036524 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 27 02:45:42.039160 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 02:45:42.041269 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 27 02:45:42.041812 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. May 27 02:45:42.043156 systemd[1]: Starting update-engine.service - Update Engine... May 27 02:45:42.046666 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 27 02:45:42.047404 extend-filesystems[1498]: Found loop3 May 27 02:45:42.049922 extend-filesystems[1498]: Found loop4 May 27 02:45:42.049922 extend-filesystems[1498]: Found loop5 May 27 02:45:42.049922 extend-filesystems[1498]: Found vda May 27 02:45:42.049922 extend-filesystems[1498]: Found vda1 May 27 02:45:42.049922 extend-filesystems[1498]: Found vda2 May 27 02:45:42.049922 extend-filesystems[1498]: Found vda3 May 27 02:45:42.049922 extend-filesystems[1498]: Found usr May 27 02:45:42.049922 extend-filesystems[1498]: Found vda4 May 27 02:45:42.049922 extend-filesystems[1498]: Found vda6 May 27 02:45:42.049922 extend-filesystems[1498]: Found vda7 May 27 02:45:42.049922 extend-filesystems[1498]: Found vda9 May 27 02:45:42.049922 extend-filesystems[1498]: Checking size of /dev/vda9 May 27 02:45:42.061014 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 27 02:45:42.065659 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 27 02:45:42.065842 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 27 02:45:42.066148 systemd[1]: motdgen.service: Deactivated successfully. May 27 02:45:42.066299 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 27 02:45:42.068335 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 27 02:45:42.068492 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 27 02:45:42.079193 jq[1517]: true May 27 02:45:42.097112 jq[1530]: true May 27 02:45:42.113815 (ntainerd)[1531]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 27 02:45:42.116634 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 27 02:45:42.120966 extend-filesystems[1498]: Resized partition /dev/vda9 May 27 02:45:42.122503 tar[1522]: linux-arm64/LICENSE May 27 02:45:42.122785 tar[1522]: linux-arm64/helm May 27 02:45:42.138253 extend-filesystems[1544]: resize2fs 1.47.2 (1-Jan-2025) May 27 02:45:42.169959 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 27 02:45:42.183153 dbus-daemon[1495]: [system] SELinux support is enabled May 27 02:45:42.183554 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 27 02:45:42.185154 update_engine[1515]: I20250527 02:45:42.184371 1515 main.cc:92] Flatcar Update Engine starting May 27 02:45:42.189636 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 27 02:45:42.189718 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 27 02:45:42.191846 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 27 02:45:42.191877 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 27 02:45:42.212947 systemd[1]: Started update-engine.service - Update Engine. 
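
The kernel message above ("resizing filesystem from 553472 to 1864699 blocks") is the ext4 filesystem on /dev/vda9 being grown online to fill its partition; with 4 KiB blocks that is roughly 2.1 GiB growing to about 7.1 GiB. The arithmetic, using only the block counts from the log:

# Block counts from the EXT4-fs message above; ext4 here uses 4 KiB blocks.
BLOCK_SIZE = 4096
old_blocks, new_blocks = 553_472, 1_864_699

gib = 1024 ** 3
print(f"before: {old_blocks * BLOCK_SIZE / gib:.2f} GiB")  # ~2.11 GiB
print(f"after:  {new_blocks * BLOCK_SIZE / gib:.2f} GiB")  # ~7.11 GiB
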
May 27 02:45:42.213121 update_engine[1515]: I20250527 02:45:42.212747 1515 update_check_scheduler.cc:74] Next update check in 7m53s May 27 02:45:42.215539 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 27 02:45:42.227047 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 02:45:42.231332 systemd-logind[1506]: Watching system buttons on /dev/input/event0 (Power Button) May 27 02:45:42.232846 systemd-logind[1506]: New seat seat0. May 27 02:45:42.239952 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 27 02:45:42.240229 systemd[1]: Started systemd-logind.service - User Login Management. May 27 02:45:42.254775 extend-filesystems[1544]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 27 02:45:42.254775 extend-filesystems[1544]: old_desc_blocks = 1, new_desc_blocks = 1 May 27 02:45:42.254775 extend-filesystems[1544]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 27 02:45:42.260303 bash[1556]: Updated "/home/core/.ssh/authorized_keys" May 27 02:45:42.258309 systemd[1]: extend-filesystems.service: Deactivated successfully. May 27 02:45:42.260538 extend-filesystems[1498]: Resized filesystem in /dev/vda9 May 27 02:45:42.260031 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 27 02:45:42.264767 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 27 02:45:42.268231 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 27 02:45:42.310058 locksmithd[1559]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 27 02:45:42.370544 containerd[1531]: time="2025-05-27T02:45:42Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 27 02:45:42.372942 containerd[1531]: time="2025-05-27T02:45:42.371609237Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 27 02:45:42.380843 containerd[1531]: time="2025-05-27T02:45:42.380788851Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.687µs" May 27 02:45:42.380843 containerd[1531]: time="2025-05-27T02:45:42.380833278Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 27 02:45:42.380843 containerd[1531]: time="2025-05-27T02:45:42.380853170Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 27 02:45:42.381078 containerd[1531]: time="2025-05-27T02:45:42.381055216Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 27 02:45:42.381104 containerd[1531]: time="2025-05-27T02:45:42.381081622Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 27 02:45:42.381146 containerd[1531]: time="2025-05-27T02:45:42.381107911Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 02:45:42.381176 containerd[1531]: time="2025-05-27T02:45:42.381158189Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 02:45:42.381176 containerd[1531]: time="2025-05-27T02:45:42.381172893Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 02:45:42.381427 containerd[1531]: time="2025-05-27T02:45:42.381403179Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 02:45:42.381427 containerd[1531]: time="2025-05-27T02:45:42.381423578Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 02:45:42.381469 containerd[1531]: time="2025-05-27T02:45:42.381435670Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 02:45:42.381469 containerd[1531]: time="2025-05-27T02:45:42.381443510Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 27 02:45:42.381531 containerd[1531]: time="2025-05-27T02:45:42.381515045Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 27 02:45:42.381723 containerd[1531]: time="2025-05-27T02:45:42.381702697Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 02:45:42.381751 containerd[1531]: time="2025-05-27T02:45:42.381735149Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 02:45:42.381751 containerd[1531]: time="2025-05-27T02:45:42.381744706Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 27 02:45:42.381796 containerd[1531]: time="2025-05-27T02:45:42.381783438Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 27 02:45:42.382181 containerd[1531]: time="2025-05-27T02:45:42.382147821Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 27 02:45:42.382280 containerd[1531]: time="2025-05-27T02:45:42.382262145Z" level=info msg="metadata content store policy set" policy=shared May 27 02:45:42.423466 containerd[1531]: time="2025-05-27T02:45:42.423412963Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 27 02:45:42.423564 containerd[1531]: time="2025-05-27T02:45:42.423538871Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 27 02:45:42.423613 containerd[1531]: time="2025-05-27T02:45:42.423596091Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 27 02:45:42.423660 containerd[1531]: time="2025-05-27T02:45:42.423613721Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 27 02:45:42.423660 containerd[1531]: time="2025-05-27T02:45:42.423628816Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 27 02:45:42.423660 containerd[1531]: time="2025-05-27T02:45:42.423638450Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 27 02:45:42.423713 containerd[1531]: time="2025-05-27T02:45:42.423660527Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 27 02:45:42.423713 containerd[1531]: time="2025-05-27T02:45:42.423673633Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 27 02:45:42.423713 containerd[1531]: time="2025-05-27T02:45:42.423684827Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 27 02:45:42.423713 containerd[1531]: time="2025-05-27T02:45:42.423695710Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 27 02:45:42.423713 containerd[1531]: time="2025-05-27T02:45:42.423706826Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 27 02:45:42.423793 containerd[1531]: time="2025-05-27T02:45:42.423730346Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 27 02:45:42.423945 containerd[1531]: time="2025-05-27T02:45:42.423911173Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 27 02:45:42.423984 containerd[1531]: time="2025-05-27T02:45:42.423961879Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 27 02:45:42.424003 containerd[1531]: time="2025-05-27T02:45:42.423985672Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 27 02:45:42.424003 containerd[1531]: time="2025-05-27T02:45:42.423998466Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 27 02:45:42.424035 containerd[1531]: time="2025-05-27T02:45:42.424008607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 27 02:45:42.424035 containerd[1531]: time="2025-05-27T02:45:42.424026471Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 27 02:45:42.424076 containerd[1531]: time="2025-05-27T02:45:42.424038719Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 27 02:45:42.424076 containerd[1531]: time="2025-05-27T02:45:42.424049133Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 27 02:45:42.424076 containerd[1531]: time="2025-05-27T02:45:42.424059860Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 27 02:45:42.424076 containerd[1531]: time="2025-05-27T02:45:42.424069923Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 27 02:45:42.424145 containerd[1531]: time="2025-05-27T02:45:42.424080883Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 27 02:45:42.424386 containerd[1531]: time="2025-05-27T02:45:42.424363435Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 27 02:45:42.424414 containerd[1531]: time="2025-05-27T02:45:42.424390231Z" level=info msg="Start snapshots syncer" May 27 02:45:42.424451 containerd[1531]: time="2025-05-27T02:45:42.424429665Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 27 02:45:42.424834 containerd[1531]: time="2025-05-27T02:45:42.424718029Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 27 02:45:42.425055 containerd[1531]: time="2025-05-27T02:45:42.424848578Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 02:45:42.425055 containerd[1531]: time="2025-05-27T02:45:42.425008186Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 27 02:45:42.425261 containerd[1531]: time="2025-05-27T02:45:42.425166702Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 02:45:42.425261 containerd[1531]: time="2025-05-27T02:45:42.425235624Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 02:45:42.425261 containerd[1531]: time="2025-05-27T02:45:42.425252045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 02:45:42.425326 containerd[1531]: time="2025-05-27T02:45:42.425264917Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 02:45:42.425326 containerd[1531]: time="2025-05-27T02:45:42.425278217Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 02:45:42.425367 containerd[1531]: time="2025-05-27T02:45:42.425289373Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 27 02:45:42.425367 containerd[1531]: time="2025-05-27T02:45:42.425345813Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 02:45:42.425403 containerd[1531]: time="2025-05-27T02:45:42.425373233Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 02:45:42.425403 containerd[1531]: 
time="2025-05-27T02:45:42.425385013Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 02:45:42.425460 containerd[1531]: time="2025-05-27T02:45:42.425396324Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 02:45:42.425502 containerd[1531]: time="2025-05-27T02:45:42.425487557Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 02:45:42.425533 containerd[1531]: time="2025-05-27T02:45:42.425519307Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 02:45:42.425561 containerd[1531]: time="2025-05-27T02:45:42.425533778Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 02:45:42.425611 containerd[1531]: time="2025-05-27T02:45:42.425543880Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 02:45:42.425611 containerd[1531]: time="2025-05-27T02:45:42.425608082Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 02:45:42.425657 containerd[1531]: time="2025-05-27T02:45:42.425619978Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 02:45:42.425657 containerd[1531]: time="2025-05-27T02:45:42.425630822Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 02:45:42.425729 containerd[1531]: time="2025-05-27T02:45:42.425714526Z" level=info msg="runtime interface created" May 27 02:45:42.425729 containerd[1531]: time="2025-05-27T02:45:42.425721547Z" level=info msg="created NRI interface" May 27 02:45:42.425763 containerd[1531]: time="2025-05-27T02:45:42.425730284Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 02:45:42.425763 containerd[1531]: time="2025-05-27T02:45:42.425746783Z" level=info msg="Connect containerd service" May 27 02:45:42.425858 containerd[1531]: time="2025-05-27T02:45:42.425836573Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 02:45:42.428977 containerd[1531]: time="2025-05-27T02:45:42.428926933Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 02:45:42.532307 containerd[1531]: time="2025-05-27T02:45:42.532204675Z" level=info msg="Start subscribing containerd event" May 27 02:45:42.532307 containerd[1531]: time="2025-05-27T02:45:42.532264782Z" level=info msg="Start recovering state" May 27 02:45:42.532420 containerd[1531]: time="2025-05-27T02:45:42.532352348Z" level=info msg="Start event monitor" May 27 02:45:42.532420 containerd[1531]: time="2025-05-27T02:45:42.532368106Z" level=info msg="Start cni network conf syncer for default" May 27 02:45:42.532420 containerd[1531]: time="2025-05-27T02:45:42.532376258Z" level=info msg="Start streaming server" May 27 02:45:42.532420 containerd[1531]: time="2025-05-27T02:45:42.532385463Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 27 02:45:42.532420 containerd[1531]: 
time="2025-05-27T02:45:42.532391977Z" level=info msg="runtime interface starting up..." May 27 02:45:42.532420 containerd[1531]: time="2025-05-27T02:45:42.532397867Z" level=info msg="starting plugins..." May 27 02:45:42.532420 containerd[1531]: time="2025-05-27T02:45:42.532412143Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 02:45:42.533033 containerd[1531]: time="2025-05-27T02:45:42.533008801Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 02:45:42.533080 containerd[1531]: time="2025-05-27T02:45:42.533067737Z" level=info msg=serving... address=/run/containerd/containerd.sock May 27 02:45:42.533215 systemd[1]: Started containerd.service - containerd container runtime. May 27 02:45:42.534239 containerd[1531]: time="2025-05-27T02:45:42.534216822Z" level=info msg="containerd successfully booted in 0.164992s" May 27 02:45:42.587093 tar[1522]: linux-arm64/README.md May 27 02:45:42.608996 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 27 02:45:42.681340 sshd_keygen[1511]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 27 02:45:42.700736 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 27 02:45:42.704301 systemd[1]: Starting issuegen.service - Generate /run/issue... May 27 02:45:42.723582 systemd[1]: issuegen.service: Deactivated successfully. May 27 02:45:42.723805 systemd[1]: Finished issuegen.service - Generate /run/issue. May 27 02:45:42.726648 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 27 02:45:42.759713 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 27 02:45:42.762435 systemd[1]: Started getty@tty1.service - Getty on tty1. May 27 02:45:42.764385 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 27 02:45:42.765555 systemd[1]: Reached target getty.target - Login Prompts. May 27 02:45:43.818069 systemd-networkd[1437]: eth0: Gained IPv6LL May 27 02:45:43.820365 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 27 02:45:43.821836 systemd[1]: Reached target network-online.target - Network is Online. May 27 02:45:43.825200 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 27 02:45:43.827086 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 02:45:43.837327 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 27 02:45:43.850151 systemd[1]: coreos-metadata.service: Deactivated successfully. May 27 02:45:43.851053 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 27 02:45:43.852336 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 27 02:45:43.858835 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 27 02:45:44.361638 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 02:45:44.363003 systemd[1]: Reached target multi-user.target - Multi-User System. May 27 02:45:44.364146 systemd[1]: Startup finished in 2.118s (kernel) + 5.615s (initrd) + 4.171s (userspace) = 11.905s. 
May 27 02:45:44.364821 (kubelet)[1634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 02:45:44.777846 kubelet[1634]: E0527 02:45:44.777729 1634 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 02:45:44.780078 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 02:45:44.780204 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 02:45:44.782005 systemd[1]: kubelet.service: Consumed 822ms CPU time, 257.4M memory peak. May 27 02:45:47.897191 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 02:45:47.898217 systemd[1]: Started sshd@0-10.0.0.44:22-10.0.0.1:59348.service - OpenSSH per-connection server daemon (10.0.0.1:59348). May 27 02:45:48.039667 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 59348 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:45:48.041426 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:45:48.056547 systemd-logind[1506]: New session 1 of user core. May 27 02:45:48.057278 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 27 02:45:48.058949 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 27 02:45:48.083945 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 27 02:45:48.086514 systemd[1]: Starting user@500.service - User Manager for UID 500... May 27 02:45:48.100161 (systemd)[1652]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 02:45:48.102470 systemd-logind[1506]: New session c1 of user core. May 27 02:45:48.212640 systemd[1652]: Queued start job for default target default.target. May 27 02:45:48.223950 systemd[1652]: Created slice app.slice - User Application Slice. May 27 02:45:48.223977 systemd[1652]: Reached target paths.target - Paths. May 27 02:45:48.224024 systemd[1652]: Reached target timers.target - Timers. May 27 02:45:48.225217 systemd[1652]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 02:45:48.234951 systemd[1652]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 02:45:48.235022 systemd[1652]: Reached target sockets.target - Sockets. May 27 02:45:48.235070 systemd[1652]: Reached target basic.target - Basic System. May 27 02:45:48.235098 systemd[1652]: Reached target default.target - Main User Target. May 27 02:45:48.235124 systemd[1652]: Startup finished in 126ms. May 27 02:45:48.235253 systemd[1]: Started user@500.service - User Manager for UID 500. May 27 02:45:48.236711 systemd[1]: Started session-1.scope - Session 1 of User core. May 27 02:45:48.305190 systemd[1]: Started sshd@1-10.0.0.44:22-10.0.0.1:59354.service - OpenSSH per-connection server daemon (10.0.0.1:59354). May 27 02:45:48.357655 sshd[1663]: Accepted publickey for core from 10.0.0.1 port 59354 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:45:48.359002 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:45:48.364179 systemd-logind[1506]: New session 2 of user core. 
May 27 02:45:48.371140 systemd[1]: Started session-2.scope - Session 2 of User core. May 27 02:45:48.421731 sshd[1665]: Connection closed by 10.0.0.1 port 59354 May 27 02:45:48.422240 sshd-session[1663]: pam_unix(sshd:session): session closed for user core May 27 02:45:48.435916 systemd[1]: sshd@1-10.0.0.44:22-10.0.0.1:59354.service: Deactivated successfully. May 27 02:45:48.437594 systemd[1]: session-2.scope: Deactivated successfully. May 27 02:45:48.438416 systemd-logind[1506]: Session 2 logged out. Waiting for processes to exit. May 27 02:45:48.440799 systemd[1]: Started sshd@2-10.0.0.44:22-10.0.0.1:59366.service - OpenSSH per-connection server daemon (10.0.0.1:59366). May 27 02:45:48.441483 systemd-logind[1506]: Removed session 2. May 27 02:45:48.494234 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 59366 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:45:48.495384 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:45:48.499777 systemd-logind[1506]: New session 3 of user core. May 27 02:45:48.512103 systemd[1]: Started session-3.scope - Session 3 of User core. May 27 02:45:48.562730 sshd[1673]: Connection closed by 10.0.0.1 port 59366 May 27 02:45:48.563178 sshd-session[1671]: pam_unix(sshd:session): session closed for user core May 27 02:45:48.573986 systemd[1]: sshd@2-10.0.0.44:22-10.0.0.1:59366.service: Deactivated successfully. May 27 02:45:48.575476 systemd[1]: session-3.scope: Deactivated successfully. May 27 02:45:48.576084 systemd-logind[1506]: Session 3 logged out. Waiting for processes to exit. May 27 02:45:48.578185 systemd[1]: Started sshd@3-10.0.0.44:22-10.0.0.1:59370.service - OpenSSH per-connection server daemon (10.0.0.1:59370). May 27 02:45:48.579034 systemd-logind[1506]: Removed session 3. May 27 02:45:48.635230 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 59370 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:45:48.636590 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:45:48.640158 systemd-logind[1506]: New session 4 of user core. May 27 02:45:48.650112 systemd[1]: Started session-4.scope - Session 4 of User core. May 27 02:45:48.701989 sshd[1681]: Connection closed by 10.0.0.1 port 59370 May 27 02:45:48.702329 sshd-session[1679]: pam_unix(sshd:session): session closed for user core May 27 02:45:48.726288 systemd[1]: sshd@3-10.0.0.44:22-10.0.0.1:59370.service: Deactivated successfully. May 27 02:45:48.727700 systemd[1]: session-4.scope: Deactivated successfully. May 27 02:45:48.728427 systemd-logind[1506]: Session 4 logged out. Waiting for processes to exit. May 27 02:45:48.731002 systemd[1]: Started sshd@4-10.0.0.44:22-10.0.0.1:59374.service - OpenSSH per-connection server daemon (10.0.0.1:59374). May 27 02:45:48.731632 systemd-logind[1506]: Removed session 4. May 27 02:45:48.780275 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 59374 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:45:48.781437 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:45:48.786103 systemd-logind[1506]: New session 5 of user core. May 27 02:45:48.798096 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 27 02:45:48.862921 sudo[1690]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 27 02:45:48.863197 sudo[1690]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 02:45:48.877471 sudo[1690]: pam_unix(sudo:session): session closed for user root May 27 02:45:48.884403 sshd[1689]: Connection closed by 10.0.0.1 port 59374 May 27 02:45:48.884796 sshd-session[1687]: pam_unix(sshd:session): session closed for user core May 27 02:45:48.898920 systemd[1]: sshd@4-10.0.0.44:22-10.0.0.1:59374.service: Deactivated successfully. May 27 02:45:48.900222 systemd[1]: session-5.scope: Deactivated successfully. May 27 02:45:48.900904 systemd-logind[1506]: Session 5 logged out. Waiting for processes to exit. May 27 02:45:48.903230 systemd[1]: Started sshd@5-10.0.0.44:22-10.0.0.1:59388.service - OpenSSH per-connection server daemon (10.0.0.1:59388). May 27 02:45:48.903856 systemd-logind[1506]: Removed session 5. May 27 02:45:48.954674 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 59388 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:45:48.955868 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:45:48.959983 systemd-logind[1506]: New session 6 of user core. May 27 02:45:48.972134 systemd[1]: Started session-6.scope - Session 6 of User core. May 27 02:45:49.021419 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 27 02:45:49.022004 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 02:45:49.026269 sudo[1700]: pam_unix(sudo:session): session closed for user root May 27 02:45:49.030581 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 27 02:45:49.031562 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 02:45:49.039178 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 02:45:49.077972 augenrules[1722]: No rules May 27 02:45:49.079310 systemd[1]: audit-rules.service: Deactivated successfully. May 27 02:45:49.079557 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 02:45:49.080405 sudo[1699]: pam_unix(sudo:session): session closed for user root May 27 02:45:49.087096 sshd[1698]: Connection closed by 10.0.0.1 port 59388 May 27 02:45:49.087512 sshd-session[1696]: pam_unix(sshd:session): session closed for user core May 27 02:45:49.100005 systemd[1]: sshd@5-10.0.0.44:22-10.0.0.1:59388.service: Deactivated successfully. May 27 02:45:49.101409 systemd[1]: session-6.scope: Deactivated successfully. May 27 02:45:49.103180 systemd-logind[1506]: Session 6 logged out. Waiting for processes to exit. May 27 02:45:49.105507 systemd[1]: Started sshd@6-10.0.0.44:22-10.0.0.1:59404.service - OpenSSH per-connection server daemon (10.0.0.1:59404). May 27 02:45:49.106141 systemd-logind[1506]: Removed session 6. May 27 02:45:49.158839 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 59404 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:45:49.160149 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:45:49.164653 systemd-logind[1506]: New session 7 of user core. May 27 02:45:49.171110 systemd[1]: Started session-7.scope - Session 7 of User core. 
May 27 02:45:49.221204 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 27 02:45:49.221455 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 02:45:49.728050 systemd[1]: Starting docker.service - Docker Application Container Engine... May 27 02:45:49.748266 (dockerd)[1755]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 27 02:45:50.005774 dockerd[1755]: time="2025-05-27T02:45:50.005655179Z" level=info msg="Starting up" May 27 02:45:50.006793 dockerd[1755]: time="2025-05-27T02:45:50.006772149Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 27 02:45:50.121791 dockerd[1755]: time="2025-05-27T02:45:50.121749655Z" level=info msg="Loading containers: start." May 27 02:45:50.132638 kernel: Initializing XFRM netlink socket May 27 02:45:50.327471 systemd-networkd[1437]: docker0: Link UP May 27 02:45:50.330576 dockerd[1755]: time="2025-05-27T02:45:50.330527094Z" level=info msg="Loading containers: done." May 27 02:45:50.345771 dockerd[1755]: time="2025-05-27T02:45:50.345718876Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 27 02:45:50.345912 dockerd[1755]: time="2025-05-27T02:45:50.345807868Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 27 02:45:50.345912 dockerd[1755]: time="2025-05-27T02:45:50.345909552Z" level=info msg="Initializing buildkit" May 27 02:45:50.369948 dockerd[1755]: time="2025-05-27T02:45:50.369890477Z" level=info msg="Completed buildkit initialization" May 27 02:45:50.376460 dockerd[1755]: time="2025-05-27T02:45:50.376419110Z" level=info msg="Daemon has completed initialization" May 27 02:45:50.376583 dockerd[1755]: time="2025-05-27T02:45:50.376490891Z" level=info msg="API listen on /run/docker.sock" May 27 02:45:50.376683 systemd[1]: Started docker.service - Docker Application Container Engine. May 27 02:45:50.887681 containerd[1531]: time="2025-05-27T02:45:50.887627540Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\"" May 27 02:45:51.104068 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4090009718-merged.mount: Deactivated successfully. May 27 02:45:51.514328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2983443700.mount: Deactivated successfully. 
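Annotation: the dockerd startup above ends with "API listen on /run/docker.sock". As a minimal sketch (standard library only, assuming the daemon is serving the usual Docker Engine API on that socket, as the message suggests), the snippet below asks the /version endpoint for the same daemon version that appears in the log; the class and variable names are illustrative.

#!/usr/bin/env python3
"""Query the local Docker daemon's version over /run/docker.sock."""
import http.client
import json
import socket

SOCKET_PATH = "/run/docker.sock"   # path taken from the "API listen on" entry above

class UnixSocketHTTPConnection(http.client.HTTPConnection):
    """http.client connection that speaks HTTP over an AF_UNIX socket."""
    def __init__(self, path):
        super().__init__("localhost")   # host header only; connection goes to the socket
        self._path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

conn = UnixSocketHTTPConnection(SOCKET_PATH)
conn.request("GET", "/version")
body = json.loads(conn.getresponse().read())
print("Version   :", body.get("Version"))      # e.g. 28.0.1 in the log above
print("ApiVersion:", body.get("ApiVersion"))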
May 27 02:45:52.649994 containerd[1531]: time="2025-05-27T02:45:52.649829473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:45:52.650511 containerd[1531]: time="2025-05-27T02:45:52.650476808Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=27349352" May 27 02:45:52.651007 containerd[1531]: time="2025-05-27T02:45:52.650982914Z" level=info msg="ImageCreate event name:\"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:45:52.653676 containerd[1531]: time="2025-05-27T02:45:52.653645375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:45:52.654575 containerd[1531]: time="2025-05-27T02:45:52.654538730Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"27346150\" in 1.766867075s" May 27 02:45:52.654616 containerd[1531]: time="2025-05-27T02:45:52.654577554Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\"" May 27 02:45:52.655715 containerd[1531]: time="2025-05-27T02:45:52.655683906Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\"" May 27 02:45:54.071178 containerd[1531]: time="2025-05-27T02:45:54.071127022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:45:54.071959 containerd[1531]: time="2025-05-27T02:45:54.071897980Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=23531737" May 27 02:45:54.072504 containerd[1531]: time="2025-05-27T02:45:54.072478019Z" level=info msg="ImageCreate event name:\"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:45:54.074701 containerd[1531]: time="2025-05-27T02:45:54.074672965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:45:54.075725 containerd[1531]: time="2025-05-27T02:45:54.075691158Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"25086427\" in 1.419975369s" May 27 02:45:54.075913 containerd[1531]: time="2025-05-27T02:45:54.075814417Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\"" May 27 02:45:54.076254 
containerd[1531]: time="2025-05-27T02:45:54.076223397Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\"" May 27 02:45:55.030562 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 27 02:45:55.032821 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 02:45:55.216631 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 02:45:55.220312 (kubelet)[2033]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 02:45:55.253626 kubelet[2033]: E0527 02:45:55.253566 2033 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 02:45:55.256812 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 02:45:55.256965 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 02:45:55.258053 systemd[1]: kubelet.service: Consumed 139ms CPU time, 105.5M memory peak. May 27 02:45:55.297435 containerd[1531]: time="2025-05-27T02:45:55.297083384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:45:55.298273 containerd[1531]: time="2025-05-27T02:45:55.298023962Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=18293733" May 27 02:45:55.298916 containerd[1531]: time="2025-05-27T02:45:55.298891143Z" level=info msg="ImageCreate event name:\"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:45:55.301372 containerd[1531]: time="2025-05-27T02:45:55.301328450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:45:55.302582 containerd[1531]: time="2025-05-27T02:45:55.302494396Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"19848441\" in 1.226218297s" May 27 02:45:55.302582 containerd[1531]: time="2025-05-27T02:45:55.302530955Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\"" May 27 02:45:55.303153 containerd[1531]: time="2025-05-27T02:45:55.302997779Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\"" May 27 02:45:56.287754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount435928120.mount: Deactivated successfully. 
May 27 02:45:56.670567 containerd[1531]: time="2025-05-27T02:45:56.669884944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:45:56.670957 containerd[1531]: time="2025-05-27T02:45:56.670841779Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=28196006" May 27 02:45:56.671631 containerd[1531]: time="2025-05-27T02:45:56.671603406Z" level=info msg="ImageCreate event name:\"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:45:56.673672 containerd[1531]: time="2025-05-27T02:45:56.673629961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:45:56.674070 containerd[1531]: time="2025-05-27T02:45:56.674033965Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"28195023\" in 1.371002648s" May 27 02:45:56.674070 containerd[1531]: time="2025-05-27T02:45:56.674068512Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\"" May 27 02:45:56.674691 containerd[1531]: time="2025-05-27T02:45:56.674478573Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" May 27 02:45:57.228997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3115925832.mount: Deactivated successfully. 
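Annotation: each "Pulled image" entry above reports an image size and a wall-clock duration (for example registry.k8s.io/kube-proxy:v1.33.1, size "28195023" bytes in 1.371002648s). A small sketch of the arithmetic, in case the effective pull throughput is of interest; the figures are copied from the entry above and the helper name is made up.

"""Rough pull-throughput arithmetic from a containerd "Pulled image" entry."""

def pull_rate_mib_s(size_bytes: int, seconds: float) -> float:
    """Bytes transferred over wall-clock time, expressed in MiB/s."""
    return size_bytes / seconds / (1024 * 1024)

# Values copied from the kube-proxy pull logged above.
size = 28195023            # size "28195023"
duration = 1.371002648     # "in 1.371002648s"
print(f"kube-proxy pull: {pull_rate_mib_s(size, duration):.1f} MiB/s")  # ~19.6 MiB/s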
May 27 02:45:58.601489 containerd[1531]: time="2025-05-27T02:45:58.601435667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:45:58.602083 containerd[1531]: time="2025-05-27T02:45:58.602042519Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" May 27 02:45:58.602611 containerd[1531]: time="2025-05-27T02:45:58.602582847Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:45:58.607971 containerd[1531]: time="2025-05-27T02:45:58.607908200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:45:58.609313 containerd[1531]: time="2025-05-27T02:45:58.609166174Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.934662809s" May 27 02:45:58.609313 containerd[1531]: time="2025-05-27T02:45:58.609232060Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" May 27 02:45:58.609728 containerd[1531]: time="2025-05-27T02:45:58.609700680Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 27 02:45:59.031630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2763099053.mount: Deactivated successfully. 
May 27 02:45:59.034852 containerd[1531]: time="2025-05-27T02:45:59.034811504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 02:45:59.035509 containerd[1531]: time="2025-05-27T02:45:59.035333160Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 27 02:45:59.036189 containerd[1531]: time="2025-05-27T02:45:59.036157316Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 02:45:59.038110 containerd[1531]: time="2025-05-27T02:45:59.038071863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 02:45:59.039035 containerd[1531]: time="2025-05-27T02:45:59.039006096Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 429.18784ms" May 27 02:45:59.039084 containerd[1531]: time="2025-05-27T02:45:59.039034862Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 27 02:45:59.039511 containerd[1531]: time="2025-05-27T02:45:59.039480793Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" May 27 02:46:01.835068 containerd[1531]: time="2025-05-27T02:46:01.835018326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:46:01.837495 containerd[1531]: time="2025-05-27T02:46:01.837464224Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69230165" May 27 02:46:01.844551 containerd[1531]: time="2025-05-27T02:46:01.844518636Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:46:01.852691 containerd[1531]: time="2025-05-27T02:46:01.852606371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:46:01.853784 containerd[1531]: time="2025-05-27T02:46:01.853748799Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.814228023s" May 27 02:46:01.853784 containerd[1531]: time="2025-05-27T02:46:01.853785766Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" May 27 02:46:05.507517 systemd[1]: kubelet.service: Scheduled restart job, 
restart counter is at 2. May 27 02:46:05.508966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 02:46:05.639737 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 02:46:05.643203 (kubelet)[2148]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 02:46:05.672993 kubelet[2148]: E0527 02:46:05.672926 2148 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 02:46:05.675658 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 02:46:05.675889 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 02:46:05.676451 systemd[1]: kubelet.service: Consumed 127ms CPU time, 107.3M memory peak. May 27 02:46:07.705079 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 02:46:07.705564 systemd[1]: kubelet.service: Consumed 127ms CPU time, 107.3M memory peak. May 27 02:46:07.707840 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 02:46:07.727270 systemd[1]: Reload requested from client PID 2164 ('systemctl') (unit session-7.scope)... May 27 02:46:07.727286 systemd[1]: Reloading... May 27 02:46:07.799980 zram_generator::config[2210]: No configuration found. May 27 02:46:07.899781 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 02:46:07.986002 systemd[1]: Reloading finished in 258 ms. May 27 02:46:08.043478 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 27 02:46:08.043558 systemd[1]: kubelet.service: Failed with result 'signal'. May 27 02:46:08.043799 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 02:46:08.043841 systemd[1]: kubelet.service: Consumed 86ms CPU time, 95M memory peak. May 27 02:46:08.045368 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 02:46:08.186561 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 02:46:08.197248 (kubelet)[2252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 02:46:08.232205 kubelet[2252]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 02:46:08.232205 kubelet[2252]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 02:46:08.232205 kubelet[2252]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
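Annotation: every kubelet start attempt so far exits with "failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory", and the deprecation notices above point the remaining command-line flags at that same config file. The file is a KubeletConfiguration document and on kubeadm-provisioned nodes is normally written by kubeadm rather than by hand. Below is a minimal pre-flight check sketch, standard library only; the expected apiVersion/kind come from the upstream KubeletConfiguration type, and the cgroupDriver value mirrors the cgroupDriver="systemd" the CRI runtime reports later in this log.

"""Pre-flight check for the config file the kubelet failures above point at."""
from pathlib import Path

CONFIG_PATH = Path("/var/lib/kubelet/config.yaml")   # path taken from the error message

# Shape the kubelet expects to find there (normally written by kubeadm, not by hand).
EXPECTED_HINT = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
"""

if CONFIG_PATH.is_file():
    print(f"{CONFIG_PATH} present ({CONFIG_PATH.stat().st_size} bytes)")
else:
    # This is exactly the condition behind "open ...: no such file or directory" above.
    print(f"{CONFIG_PATH} missing; kubelet.service will keep exiting until it appears.")
    print("Expected document shape (illustrative):")
    print(EXPECTED_HINT)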
May 27 02:46:08.232527 kubelet[2252]: I0527 02:46:08.232266 2252 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 02:46:09.158715 kubelet[2252]: I0527 02:46:09.158671 2252 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 27 02:46:09.158715 kubelet[2252]: I0527 02:46:09.158705 2252 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 02:46:09.158946 kubelet[2252]: I0527 02:46:09.158916 2252 server.go:956] "Client rotation is on, will bootstrap in background" May 27 02:46:09.187233 kubelet[2252]: E0527 02:46:09.187193 2252 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 27 02:46:09.187634 kubelet[2252]: I0527 02:46:09.187622 2252 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 02:46:09.200133 kubelet[2252]: I0527 02:46:09.200107 2252 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 02:46:09.202734 kubelet[2252]: I0527 02:46:09.202668 2252 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 27 02:46:09.203702 kubelet[2252]: I0527 02:46:09.203658 2252 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 02:46:09.203864 kubelet[2252]: I0527 02:46:09.203701 2252 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 02:46:09.203962 kubelet[2252]: I0527 02:46:09.203923 2252 topology_manager.go:138] "Creating topology manager with none policy" May 27 02:46:09.203962 kubelet[2252]: I0527 02:46:09.203948 2252 
container_manager_linux.go:303] "Creating device plugin manager" May 27 02:46:09.204155 kubelet[2252]: I0527 02:46:09.204128 2252 state_mem.go:36] "Initialized new in-memory state store" May 27 02:46:09.206979 kubelet[2252]: I0527 02:46:09.206920 2252 kubelet.go:480] "Attempting to sync node with API server" May 27 02:46:09.206979 kubelet[2252]: I0527 02:46:09.206967 2252 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 02:46:09.207116 kubelet[2252]: I0527 02:46:09.206998 2252 kubelet.go:386] "Adding apiserver pod source" May 27 02:46:09.207116 kubelet[2252]: I0527 02:46:09.207012 2252 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 02:46:09.208011 kubelet[2252]: I0527 02:46:09.207888 2252 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 02:46:09.208727 kubelet[2252]: I0527 02:46:09.208561 2252 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 27 02:46:09.208727 kubelet[2252]: W0527 02:46:09.208713 2252 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 27 02:46:09.210158 kubelet[2252]: E0527 02:46:09.210126 2252 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 27 02:46:09.210895 kubelet[2252]: E0527 02:46:09.210862 2252 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 27 02:46:09.211000 kubelet[2252]: I0527 02:46:09.210981 2252 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 02:46:09.211638 kubelet[2252]: I0527 02:46:09.211021 2252 server.go:1289] "Started kubelet" May 27 02:46:09.211638 kubelet[2252]: I0527 02:46:09.211179 2252 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 27 02:46:09.215667 kubelet[2252]: I0527 02:46:09.213457 2252 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 02:46:09.215667 kubelet[2252]: I0527 02:46:09.214858 2252 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 02:46:09.215667 kubelet[2252]: I0527 02:46:09.215120 2252 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 02:46:09.217978 kubelet[2252]: I0527 02:46:09.216708 2252 server.go:317] "Adding debug handlers to kubelet server" May 27 02:46:09.217978 kubelet[2252]: I0527 02:46:09.217630 2252 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 02:46:09.219011 kubelet[2252]: E0527 02:46:09.218971 2252 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 02:46:09.219011 kubelet[2252]: I0527 02:46:09.219015 2252 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 
02:46:09.219206 kubelet[2252]: I0527 02:46:09.219183 2252 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 02:46:09.219255 kubelet[2252]: I0527 02:46:09.219242 2252 reconciler.go:26] "Reconciler: start to sync state" May 27 02:46:09.219614 kubelet[2252]: E0527 02:46:09.219582 2252 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 27 02:46:09.219688 kubelet[2252]: E0527 02:46:09.218270 2252 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.44:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.44:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1843424ac2acb60a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-27 02:46:09.210996234 +0000 UTC m=+1.010642442,LastTimestamp:2025-05-27 02:46:09.210996234 +0000 UTC m=+1.010642442,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 27 02:46:09.219836 kubelet[2252]: E0527 02:46:09.219796 2252 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="200ms" May 27 02:46:09.220922 kubelet[2252]: I0527 02:46:09.220880 2252 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 02:46:09.221134 kubelet[2252]: E0527 02:46:09.221108 2252 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 02:46:09.222372 kubelet[2252]: I0527 02:46:09.222346 2252 factory.go:223] Registration of the containerd container factory successfully May 27 02:46:09.222442 kubelet[2252]: I0527 02:46:09.222383 2252 factory.go:223] Registration of the systemd container factory successfully May 27 02:46:09.231314 kubelet[2252]: I0527 02:46:09.231262 2252 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 27 02:46:09.232492 kubelet[2252]: I0527 02:46:09.232472 2252 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 02:46:09.232492 kubelet[2252]: I0527 02:46:09.232486 2252 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 02:46:09.232782 kubelet[2252]: I0527 02:46:09.232504 2252 state_mem.go:36] "Initialized new in-memory state store" May 27 02:46:09.232948 kubelet[2252]: I0527 02:46:09.232914 2252 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" May 27 02:46:09.233027 kubelet[2252]: I0527 02:46:09.233011 2252 status_manager.go:230] "Starting to sync pod status with apiserver" May 27 02:46:09.233248 kubelet[2252]: I0527 02:46:09.233232 2252 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 02:46:09.233320 kubelet[2252]: I0527 02:46:09.233311 2252 kubelet.go:2436] "Starting kubelet main sync loop" May 27 02:46:09.233414 kubelet[2252]: E0527 02:46:09.233397 2252 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 02:46:09.319265 kubelet[2252]: E0527 02:46:09.319237 2252 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 02:46:09.333702 kubelet[2252]: E0527 02:46:09.333664 2252 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 02:46:09.380585 kubelet[2252]: I0527 02:46:09.380524 2252 policy_none.go:49] "None policy: Start" May 27 02:46:09.380585 kubelet[2252]: I0527 02:46:09.380555 2252 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 02:46:09.380585 kubelet[2252]: I0527 02:46:09.380569 2252 state_mem.go:35] "Initializing new in-memory state store" May 27 02:46:09.380964 kubelet[2252]: E0527 02:46:09.380902 2252 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 27 02:46:09.386296 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 27 02:46:09.399294 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 02:46:09.402771 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 27 02:46:09.419847 kubelet[2252]: E0527 02:46:09.419733 2252 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 02:46:09.421370 kubelet[2252]: E0527 02:46:09.421333 2252 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="400ms" May 27 02:46:09.422866 kubelet[2252]: E0527 02:46:09.422824 2252 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 27 02:46:09.423457 kubelet[2252]: I0527 02:46:09.423272 2252 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 02:46:09.423457 kubelet[2252]: I0527 02:46:09.423288 2252 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 02:46:09.423568 kubelet[2252]: I0527 02:46:09.423479 2252 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 02:46:09.424288 kubelet[2252]: E0527 02:46:09.424205 2252 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 27 02:46:09.424288 kubelet[2252]: E0527 02:46:09.424255 2252 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 27 02:46:09.524995 kubelet[2252]: I0527 02:46:09.524955 2252 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 02:46:09.525420 kubelet[2252]: E0527 02:46:09.525385 2252 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" May 27 02:46:09.543065 systemd[1]: Created slice kubepods-burstable-podbb10e27277d728bf1c1219e03aea99a7.slice - libcontainer container kubepods-burstable-podbb10e27277d728bf1c1219e03aea99a7.slice. May 27 02:46:09.551881 kubelet[2252]: E0527 02:46:09.551680 2252 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 02:46:09.554632 systemd[1]: Created slice kubepods-burstable-pod97963c41ada533e2e0872a518ecd4611.slice - libcontainer container kubepods-burstable-pod97963c41ada533e2e0872a518ecd4611.slice. May 27 02:46:09.556071 kubelet[2252]: E0527 02:46:09.556043 2252 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 02:46:09.558053 systemd[1]: Created slice kubepods-burstable-pod8fba52155e63f70cc922ab7cc8c200fd.slice - libcontainer container kubepods-burstable-pod8fba52155e63f70cc922ab7cc8c200fd.slice. May 27 02:46:09.559405 kubelet[2252]: E0527 02:46:09.559384 2252 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 02:46:09.620578 kubelet[2252]: I0527 02:46:09.620538 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bb10e27277d728bf1c1219e03aea99a7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bb10e27277d728bf1c1219e03aea99a7\") " pod="kube-system/kube-apiserver-localhost" May 27 02:46:09.620765 kubelet[2252]: I0527 02:46:09.620672 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:46:09.620765 kubelet[2252]: I0527 02:46:09.620694 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:46:09.620919 kubelet[2252]: I0527 02:46:09.620845 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:46:09.620919 kubelet[2252]: I0527 02:46:09.620878 2252 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:46:09.620919 kubelet[2252]: I0527 02:46:09.620897 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fba52155e63f70cc922ab7cc8c200fd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8fba52155e63f70cc922ab7cc8c200fd\") " pod="kube-system/kube-scheduler-localhost" May 27 02:46:09.621112 kubelet[2252]: I0527 02:46:09.621043 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bb10e27277d728bf1c1219e03aea99a7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bb10e27277d728bf1c1219e03aea99a7\") " pod="kube-system/kube-apiserver-localhost" May 27 02:46:09.621112 kubelet[2252]: I0527 02:46:09.621076 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bb10e27277d728bf1c1219e03aea99a7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bb10e27277d728bf1c1219e03aea99a7\") " pod="kube-system/kube-apiserver-localhost" May 27 02:46:09.621112 kubelet[2252]: I0527 02:46:09.621093 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:46:09.727455 kubelet[2252]: I0527 02:46:09.727361 2252 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 02:46:09.727723 kubelet[2252]: E0527 02:46:09.727700 2252 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" May 27 02:46:09.822475 kubelet[2252]: E0527 02:46:09.822426 2252 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="800ms" May 27 02:46:09.853299 containerd[1531]: time="2025-05-27T02:46:09.853262820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bb10e27277d728bf1c1219e03aea99a7,Namespace:kube-system,Attempt:0,}" May 27 02:46:09.856866 containerd[1531]: time="2025-05-27T02:46:09.856737989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:97963c41ada533e2e0872a518ecd4611,Namespace:kube-system,Attempt:0,}" May 27 02:46:09.860805 containerd[1531]: time="2025-05-27T02:46:09.860773859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8fba52155e63f70cc922ab7cc8c200fd,Namespace:kube-system,Attempt:0,}" May 27 02:46:09.876139 containerd[1531]: time="2025-05-27T02:46:09.876091895Z" level=info msg="connecting to shim e00df9613078d980eaa03af882a2a1228017b110efb0999774fe8863566c1fa1" 
address="unix:///run/containerd/s/2288c6155918940b7d5e06c2ec03a55511de6a49e78e21f4bf95b5d20454797c" namespace=k8s.io protocol=ttrpc version=3 May 27 02:46:09.884964 containerd[1531]: time="2025-05-27T02:46:09.884812115Z" level=info msg="connecting to shim c177e07f7db675e1b39c12fae0d2b53a2c34ebfba03381af1253b7b2836f2700" address="unix:///run/containerd/s/b1af39dbf0a4b1212d526fb571c36023b3fa604e3947f57706b4e7f622d0c66a" namespace=k8s.io protocol=ttrpc version=3 May 27 02:46:09.894456 containerd[1531]: time="2025-05-27T02:46:09.894276752Z" level=info msg="connecting to shim 7ee61f7a7964b28ff6c798ffc0a50cddaea369a74059ba95b4d2eb415ff14126" address="unix:///run/containerd/s/43f12abffec04105518efd8ef19c941f98c7cbb63be216c5b935502ab56b1d9d" namespace=k8s.io protocol=ttrpc version=3 May 27 02:46:09.904115 systemd[1]: Started cri-containerd-e00df9613078d980eaa03af882a2a1228017b110efb0999774fe8863566c1fa1.scope - libcontainer container e00df9613078d980eaa03af882a2a1228017b110efb0999774fe8863566c1fa1. May 27 02:46:09.909615 systemd[1]: Started cri-containerd-c177e07f7db675e1b39c12fae0d2b53a2c34ebfba03381af1253b7b2836f2700.scope - libcontainer container c177e07f7db675e1b39c12fae0d2b53a2c34ebfba03381af1253b7b2836f2700. May 27 02:46:09.919117 systemd[1]: Started cri-containerd-7ee61f7a7964b28ff6c798ffc0a50cddaea369a74059ba95b4d2eb415ff14126.scope - libcontainer container 7ee61f7a7964b28ff6c798ffc0a50cddaea369a74059ba95b4d2eb415ff14126. May 27 02:46:09.950278 containerd[1531]: time="2025-05-27T02:46:09.950173094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bb10e27277d728bf1c1219e03aea99a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"e00df9613078d980eaa03af882a2a1228017b110efb0999774fe8863566c1fa1\"" May 27 02:46:09.954849 containerd[1531]: time="2025-05-27T02:46:09.954748799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:97963c41ada533e2e0872a518ecd4611,Namespace:kube-system,Attempt:0,} returns sandbox id \"c177e07f7db675e1b39c12fae0d2b53a2c34ebfba03381af1253b7b2836f2700\"" May 27 02:46:09.958494 containerd[1531]: time="2025-05-27T02:46:09.958431787Z" level=info msg="CreateContainer within sandbox \"e00df9613078d980eaa03af882a2a1228017b110efb0999774fe8863566c1fa1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 02:46:09.959040 containerd[1531]: time="2025-05-27T02:46:09.959002481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8fba52155e63f70cc922ab7cc8c200fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ee61f7a7964b28ff6c798ffc0a50cddaea369a74059ba95b4d2eb415ff14126\"" May 27 02:46:09.960460 containerd[1531]: time="2025-05-27T02:46:09.960434952Z" level=info msg="CreateContainer within sandbox \"c177e07f7db675e1b39c12fae0d2b53a2c34ebfba03381af1253b7b2836f2700\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 02:46:09.969496 containerd[1531]: time="2025-05-27T02:46:09.969466121Z" level=info msg="Container d61723a2295347082237be3366660d1200f1c224f7de2f24de729731e7a9e952: CDI devices from CRI Config.CDIDevices: []" May 27 02:46:09.978809 containerd[1531]: time="2025-05-27T02:46:09.978383728Z" level=info msg="Container 8ca4d64d21a9696219e1749c94331c13f7488e64fdd264a1dfc2f9b88448dece: CDI devices from CRI Config.CDIDevices: []" May 27 02:46:09.981465 containerd[1531]: time="2025-05-27T02:46:09.981340328Z" level=info msg="CreateContainer within sandbox 
\"7ee61f7a7964b28ff6c798ffc0a50cddaea369a74059ba95b4d2eb415ff14126\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 02:46:09.985875 containerd[1531]: time="2025-05-27T02:46:09.985680871Z" level=info msg="CreateContainer within sandbox \"c177e07f7db675e1b39c12fae0d2b53a2c34ebfba03381af1253b7b2836f2700\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8ca4d64d21a9696219e1749c94331c13f7488e64fdd264a1dfc2f9b88448dece\"" May 27 02:46:09.986223 containerd[1531]: time="2025-05-27T02:46:09.986181293Z" level=info msg="CreateContainer within sandbox \"e00df9613078d980eaa03af882a2a1228017b110efb0999774fe8863566c1fa1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d61723a2295347082237be3366660d1200f1c224f7de2f24de729731e7a9e952\"" May 27 02:46:09.986490 containerd[1531]: time="2025-05-27T02:46:09.986383636Z" level=info msg="StartContainer for \"8ca4d64d21a9696219e1749c94331c13f7488e64fdd264a1dfc2f9b88448dece\"" May 27 02:46:09.986868 containerd[1531]: time="2025-05-27T02:46:09.986841046Z" level=info msg="StartContainer for \"d61723a2295347082237be3366660d1200f1c224f7de2f24de729731e7a9e952\"" May 27 02:46:09.987976 containerd[1531]: time="2025-05-27T02:46:09.987918997Z" level=info msg="connecting to shim 8ca4d64d21a9696219e1749c94331c13f7488e64fdd264a1dfc2f9b88448dece" address="unix:///run/containerd/s/b1af39dbf0a4b1212d526fb571c36023b3fa604e3947f57706b4e7f622d0c66a" protocol=ttrpc version=3 May 27 02:46:09.988457 containerd[1531]: time="2025-05-27T02:46:09.988421057Z" level=info msg="connecting to shim d61723a2295347082237be3366660d1200f1c224f7de2f24de729731e7a9e952" address="unix:///run/containerd/s/2288c6155918940b7d5e06c2ec03a55511de6a49e78e21f4bf95b5d20454797c" protocol=ttrpc version=3 May 27 02:46:09.992446 containerd[1531]: time="2025-05-27T02:46:09.992409119Z" level=info msg="Container a4e633b7fc6505d7d306d622513c9fce6cbe3bba7909ae5bb009be8ca960af44: CDI devices from CRI Config.CDIDevices: []" May 27 02:46:10.002724 containerd[1531]: time="2025-05-27T02:46:10.002681557Z" level=info msg="CreateContainer within sandbox \"7ee61f7a7964b28ff6c798ffc0a50cddaea369a74059ba95b4d2eb415ff14126\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a4e633b7fc6505d7d306d622513c9fce6cbe3bba7909ae5bb009be8ca960af44\"" May 27 02:46:10.004448 containerd[1531]: time="2025-05-27T02:46:10.004412412Z" level=info msg="StartContainer for \"a4e633b7fc6505d7d306d622513c9fce6cbe3bba7909ae5bb009be8ca960af44\"" May 27 02:46:10.005576 containerd[1531]: time="2025-05-27T02:46:10.005549299Z" level=info msg="connecting to shim a4e633b7fc6505d7d306d622513c9fce6cbe3bba7909ae5bb009be8ca960af44" address="unix:///run/containerd/s/43f12abffec04105518efd8ef19c941f98c7cbb63be216c5b935502ab56b1d9d" protocol=ttrpc version=3 May 27 02:46:10.010141 systemd[1]: Started cri-containerd-8ca4d64d21a9696219e1749c94331c13f7488e64fdd264a1dfc2f9b88448dece.scope - libcontainer container 8ca4d64d21a9696219e1749c94331c13f7488e64fdd264a1dfc2f9b88448dece. May 27 02:46:10.011337 systemd[1]: Started cri-containerd-d61723a2295347082237be3366660d1200f1c224f7de2f24de729731e7a9e952.scope - libcontainer container d61723a2295347082237be3366660d1200f1c224f7de2f24de729731e7a9e952. May 27 02:46:10.028139 systemd[1]: Started cri-containerd-a4e633b7fc6505d7d306d622513c9fce6cbe3bba7909ae5bb009be8ca960af44.scope - libcontainer container a4e633b7fc6505d7d306d622513c9fce6cbe3bba7909ae5bb009be8ca960af44. 
May 27 02:46:10.063494 containerd[1531]: time="2025-05-27T02:46:10.063453423Z" level=info msg="StartContainer for \"8ca4d64d21a9696219e1749c94331c13f7488e64fdd264a1dfc2f9b88448dece\" returns successfully" May 27 02:46:10.065676 containerd[1531]: time="2025-05-27T02:46:10.065629695Z" level=info msg="StartContainer for \"d61723a2295347082237be3366660d1200f1c224f7de2f24de729731e7a9e952\" returns successfully" May 27 02:46:10.105166 containerd[1531]: time="2025-05-27T02:46:10.105127755Z" level=info msg="StartContainer for \"a4e633b7fc6505d7d306d622513c9fce6cbe3bba7909ae5bb009be8ca960af44\" returns successfully" May 27 02:46:10.128824 kubelet[2252]: I0527 02:46:10.128788 2252 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 02:46:10.129245 kubelet[2252]: E0527 02:46:10.129217 2252 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" May 27 02:46:10.241513 kubelet[2252]: E0527 02:46:10.241389 2252 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 02:46:10.245427 kubelet[2252]: E0527 02:46:10.245402 2252 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 02:46:10.249092 kubelet[2252]: E0527 02:46:10.248672 2252 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 02:46:10.931075 kubelet[2252]: I0527 02:46:10.931031 2252 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 02:46:11.252183 kubelet[2252]: E0527 02:46:11.250705 2252 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 02:46:11.252183 kubelet[2252]: E0527 02:46:11.251026 2252 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 02:46:11.968184 kubelet[2252]: E0527 02:46:11.968142 2252 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 27 02:46:11.990095 kubelet[2252]: E0527 02:46:11.990001 2252 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1843424ac2acb60a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-27 02:46:09.210996234 +0000 UTC m=+1.010642442,LastTimestamp:2025-05-27 02:46:09.210996234 +0000 UTC m=+1.010642442,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 27 02:46:12.039763 kubelet[2252]: I0527 02:46:12.039720 2252 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 02:46:12.101489 kubelet[2252]: E0527 02:46:12.101100 2252 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" 
event="&Event{ObjectMeta:{localhost.1843424ac346bf84 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-27 02:46:09.221091204 +0000 UTC m=+1.020737412,LastTimestamp:2025-05-27 02:46:09.221091204 +0000 UTC m=+1.020737412,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 27 02:46:12.120286 kubelet[2252]: I0527 02:46:12.120243 2252 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 02:46:12.125222 kubelet[2252]: E0527 02:46:12.124999 2252 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 27 02:46:12.125964 kubelet[2252]: I0527 02:46:12.125403 2252 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 02:46:12.127211 kubelet[2252]: E0527 02:46:12.127172 2252 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 27 02:46:12.127211 kubelet[2252]: I0527 02:46:12.127199 2252 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 02:46:12.128854 kubelet[2252]: E0527 02:46:12.128823 2252 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 27 02:46:12.211409 kubelet[2252]: I0527 02:46:12.211370 2252 apiserver.go:52] "Watching apiserver" May 27 02:46:12.219609 kubelet[2252]: I0527 02:46:12.219488 2252 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 02:46:12.651496 kubelet[2252]: I0527 02:46:12.651387 2252 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 02:46:12.653450 kubelet[2252]: E0527 02:46:12.653421 2252 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 27 02:46:14.158594 systemd[1]: Reload requested from client PID 2534 ('systemctl') (unit session-7.scope)... May 27 02:46:14.158611 systemd[1]: Reloading... May 27 02:46:14.229981 zram_generator::config[2577]: No configuration found. May 27 02:46:14.301844 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 02:46:14.399404 systemd[1]: Reloading finished in 240 ms. May 27 02:46:14.421382 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 02:46:14.433913 systemd[1]: kubelet.service: Deactivated successfully. May 27 02:46:14.434193 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 02:46:14.434257 systemd[1]: kubelet.service: Consumed 1.423s CPU time, 128.1M memory peak. May 27 02:46:14.436161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 02:46:14.592616 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 02:46:14.596243 (kubelet)[2619]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 02:46:14.633710 kubelet[2619]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 02:46:14.633710 kubelet[2619]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 02:46:14.633710 kubelet[2619]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 02:46:14.634145 kubelet[2619]: I0527 02:46:14.633716 2619 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 02:46:14.640962 kubelet[2619]: I0527 02:46:14.640759 2619 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 27 02:46:14.640962 kubelet[2619]: I0527 02:46:14.640785 2619 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 02:46:14.641142 kubelet[2619]: I0527 02:46:14.641125 2619 server.go:956] "Client rotation is on, will bootstrap in background" May 27 02:46:14.642749 kubelet[2619]: I0527 02:46:14.642710 2619 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" May 27 02:46:14.644909 kubelet[2619]: I0527 02:46:14.644886 2619 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 02:46:14.648802 kubelet[2619]: I0527 02:46:14.648773 2619 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 02:46:14.651282 kubelet[2619]: I0527 02:46:14.651260 2619 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 02:46:14.651459 kubelet[2619]: I0527 02:46:14.651439 2619 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 02:46:14.651612 kubelet[2619]: I0527 02:46:14.651461 2619 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 02:46:14.651695 kubelet[2619]: I0527 02:46:14.651621 2619 topology_manager.go:138] "Creating topology manager with none policy" May 27 02:46:14.651695 kubelet[2619]: I0527 02:46:14.651629 2619 container_manager_linux.go:303] "Creating device plugin manager" May 27 02:46:14.651695 kubelet[2619]: I0527 02:46:14.651668 2619 state_mem.go:36] "Initialized new in-memory state store" May 27 02:46:14.651809 kubelet[2619]: I0527 02:46:14.651796 2619 kubelet.go:480] "Attempting to sync node with API server" May 27 02:46:14.651809 kubelet[2619]: I0527 02:46:14.651809 2619 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 02:46:14.651855 kubelet[2619]: I0527 02:46:14.651830 2619 kubelet.go:386] "Adding apiserver pod source" May 27 02:46:14.651855 kubelet[2619]: I0527 02:46:14.651842 2619 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 02:46:14.652967 kubelet[2619]: I0527 02:46:14.652664 2619 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 02:46:14.653235 kubelet[2619]: I0527 02:46:14.653215 2619 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 27 02:46:14.655781 kubelet[2619]: I0527 02:46:14.655749 2619 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 02:46:14.655983 kubelet[2619]: I0527 02:46:14.655961 2619 server.go:1289] "Started kubelet" May 27 02:46:14.659512 kubelet[2619]: I0527 02:46:14.659454 2619 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 27 02:46:14.660119 kubelet[2619]: I0527 
02:46:14.659693 2619 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 02:46:14.661233 kubelet[2619]: I0527 02:46:14.660231 2619 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 02:46:14.661233 kubelet[2619]: I0527 02:46:14.661161 2619 server.go:317] "Adding debug handlers to kubelet server" May 27 02:46:14.664342 kubelet[2619]: I0527 02:46:14.664313 2619 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 02:46:14.673665 kubelet[2619]: I0527 02:46:14.673504 2619 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 02:46:14.676009 kubelet[2619]: I0527 02:46:14.675545 2619 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 02:46:14.676009 kubelet[2619]: I0527 02:46:14.675576 2619 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 02:46:14.676009 kubelet[2619]: I0527 02:46:14.675717 2619 reconciler.go:26] "Reconciler: start to sync state" May 27 02:46:14.676834 kubelet[2619]: E0527 02:46:14.676802 2619 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 02:46:14.677740 kubelet[2619]: I0527 02:46:14.677267 2619 factory.go:223] Registration of the systemd container factory successfully May 27 02:46:14.677972 kubelet[2619]: I0527 02:46:14.677911 2619 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 02:46:14.680053 kubelet[2619]: I0527 02:46:14.680027 2619 factory.go:223] Registration of the containerd container factory successfully May 27 02:46:14.682405 kubelet[2619]: I0527 02:46:14.682373 2619 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 27 02:46:14.683542 kubelet[2619]: I0527 02:46:14.683515 2619 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 27 02:46:14.683542 kubelet[2619]: I0527 02:46:14.683539 2619 status_manager.go:230] "Starting to sync pod status with apiserver" May 27 02:46:14.683542 kubelet[2619]: I0527 02:46:14.683557 2619 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
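"Client rotation is on" together with the certificate paths above means the kubelet's API credential lives in /var/lib/kubelet/pki/kubelet-client-current.pem, a single PEM file holding the rotated certificate and key, while kubelet.crt/kubelet.key back the serving side. A small standard-library sketch, for illustration only, that checks how long the client certificate remains valid:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		log.Fatal(err)
	}
	// The file contains the certificate and key concatenated; take the first
	// CERTIFICATE block and report its remaining lifetime.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("subject=%s notAfter=%s remaining=%s\n",
			cert.Subject, cert.NotAfter, time.Until(cert.NotAfter).Round(time.Minute))
		return
	}
	log.Fatal("no CERTIFICATE block found")
}
```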
May 27 02:46:14.683542 kubelet[2619]: I0527 02:46:14.683563 2619 kubelet.go:2436] "Starting kubelet main sync loop" May 27 02:46:14.683542 kubelet[2619]: E0527 02:46:14.683604 2619 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 02:46:14.710443 kubelet[2619]: I0527 02:46:14.710038 2619 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 02:46:14.710443 kubelet[2619]: I0527 02:46:14.710069 2619 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 02:46:14.710443 kubelet[2619]: I0527 02:46:14.710089 2619 state_mem.go:36] "Initialized new in-memory state store" May 27 02:46:14.710443 kubelet[2619]: I0527 02:46:14.710220 2619 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 02:46:14.710443 kubelet[2619]: I0527 02:46:14.710229 2619 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 02:46:14.710443 kubelet[2619]: I0527 02:46:14.710244 2619 policy_none.go:49] "None policy: Start" May 27 02:46:14.710443 kubelet[2619]: I0527 02:46:14.710253 2619 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 02:46:14.710443 kubelet[2619]: I0527 02:46:14.710260 2619 state_mem.go:35] "Initializing new in-memory state store" May 27 02:46:14.710443 kubelet[2619]: I0527 02:46:14.710336 2619 state_mem.go:75] "Updated machine memory state" May 27 02:46:14.714145 kubelet[2619]: E0527 02:46:14.714060 2619 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 27 02:46:14.714352 kubelet[2619]: I0527 02:46:14.714329 2619 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 02:46:14.714423 kubelet[2619]: I0527 02:46:14.714349 2619 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 02:46:14.715047 kubelet[2619]: I0527 02:46:14.715025 2619 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 02:46:14.715753 kubelet[2619]: E0527 02:46:14.715607 2619 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 27 02:46:14.785237 kubelet[2619]: I0527 02:46:14.785197 2619 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 02:46:14.785454 kubelet[2619]: I0527 02:46:14.785438 2619 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 02:46:14.786150 kubelet[2619]: I0527 02:46:14.786109 2619 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 02:46:14.818651 kubelet[2619]: I0527 02:46:14.818604 2619 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 02:46:14.824792 kubelet[2619]: I0527 02:46:14.824690 2619 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 27 02:46:14.824792 kubelet[2619]: I0527 02:46:14.824771 2619 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 02:46:14.977537 kubelet[2619]: I0527 02:46:14.977378 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:46:14.977537 kubelet[2619]: I0527 02:46:14.977422 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bb10e27277d728bf1c1219e03aea99a7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bb10e27277d728bf1c1219e03aea99a7\") " pod="kube-system/kube-apiserver-localhost" May 27 02:46:14.977537 kubelet[2619]: I0527 02:46:14.977474 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:46:14.977537 kubelet[2619]: I0527 02:46:14.977517 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:46:14.977537 kubelet[2619]: I0527 02:46:14.977547 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:46:14.978063 kubelet[2619]: I0527 02:46:14.977599 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:46:14.978063 kubelet[2619]: I0527 02:46:14.977639 2619 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fba52155e63f70cc922ab7cc8c200fd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8fba52155e63f70cc922ab7cc8c200fd\") " pod="kube-system/kube-scheduler-localhost" May 27 02:46:14.978063 kubelet[2619]: I0527 02:46:14.977659 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bb10e27277d728bf1c1219e03aea99a7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bb10e27277d728bf1c1219e03aea99a7\") " pod="kube-system/kube-apiserver-localhost" May 27 02:46:14.978063 kubelet[2619]: I0527 02:46:14.977679 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bb10e27277d728bf1c1219e03aea99a7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bb10e27277d728bf1c1219e03aea99a7\") " pod="kube-system/kube-apiserver-localhost" May 27 02:46:15.266866 sudo[2660]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 27 02:46:15.267192 sudo[2660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 27 02:46:15.652087 kubelet[2619]: I0527 02:46:15.651988 2619 apiserver.go:52] "Watching apiserver" May 27 02:46:15.676037 kubelet[2619]: I0527 02:46:15.675991 2619 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 02:46:15.697691 kubelet[2619]: I0527 02:46:15.697074 2619 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 02:46:15.702216 kubelet[2619]: E0527 02:46:15.702142 2619 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 27 02:46:15.719182 sudo[2660]: pam_unix(sudo:session): session closed for user root May 27 02:46:15.745340 kubelet[2619]: I0527 02:46:15.744723 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.744705046 podStartE2EDuration="1.744705046s" podCreationTimestamp="2025-05-27 02:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:46:15.73499503 +0000 UTC m=+1.135495503" watchObservedRunningTime="2025-05-27 02:46:15.744705046 +0000 UTC m=+1.145205519" May 27 02:46:15.753960 kubelet[2619]: I0527 02:46:15.753895 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.753879452 podStartE2EDuration="1.753879452s" podCreationTimestamp="2025-05-27 02:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:46:15.746704962 +0000 UTC m=+1.147205435" watchObservedRunningTime="2025-05-27 02:46:15.753879452 +0000 UTC m=+1.154379925" May 27 02:46:15.762527 kubelet[2619]: I0527 02:46:15.762148 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.762134601 podStartE2EDuration="1.762134601s" podCreationTimestamp="2025-05-27 02:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-05-27 02:46:15.754299579 +0000 UTC m=+1.154800052" watchObservedRunningTime="2025-05-27 02:46:15.762134601 +0000 UTC m=+1.162635074" May 27 02:46:17.262779 sudo[1735]: pam_unix(sudo:session): session closed for user root May 27 02:46:17.264065 sshd[1734]: Connection closed by 10.0.0.1 port 59404 May 27 02:46:17.265249 sshd-session[1731]: pam_unix(sshd:session): session closed for user core May 27 02:46:17.268637 systemd[1]: sshd@6-10.0.0.44:22-10.0.0.1:59404.service: Deactivated successfully. May 27 02:46:17.270658 systemd[1]: session-7.scope: Deactivated successfully. May 27 02:46:17.270830 systemd[1]: session-7.scope: Consumed 8.260s CPU time, 266.5M memory peak. May 27 02:46:17.271745 systemd-logind[1506]: Session 7 logged out. Waiting for processes to exit. May 27 02:46:17.273246 systemd-logind[1506]: Removed session 7. May 27 02:46:21.181924 kubelet[2619]: I0527 02:46:21.181869 2619 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 02:46:21.185170 containerd[1531]: time="2025-05-27T02:46:21.185126547Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 02:46:21.185614 kubelet[2619]: I0527 02:46:21.185425 2619 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 02:46:22.328000 kubelet[2619]: I0527 02:46:22.327707 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-cilium-cgroup\") pod \"cilium-g6x7q\" (UID: \"69986cb3-a164-44c8-933d-f426f6a74ce9\") " pod="kube-system/cilium-g6x7q" May 27 02:46:22.328000 kubelet[2619]: I0527 02:46:22.327740 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-cni-path\") pod \"cilium-g6x7q\" (UID: \"69986cb3-a164-44c8-933d-f426f6a74ce9\") " pod="kube-system/cilium-g6x7q" May 27 02:46:22.328000 kubelet[2619]: I0527 02:46:22.327756 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-lib-modules\") pod \"cilium-g6x7q\" (UID: \"69986cb3-a164-44c8-933d-f426f6a74ce9\") " pod="kube-system/cilium-g6x7q" May 27 02:46:22.328000 kubelet[2619]: I0527 02:46:22.327773 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-host-proc-sys-kernel\") pod \"cilium-g6x7q\" (UID: \"69986cb3-a164-44c8-933d-f426f6a74ce9\") " pod="kube-system/cilium-g6x7q" May 27 02:46:22.328000 kubelet[2619]: I0527 02:46:22.327787 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69986cb3-a164-44c8-933d-f426f6a74ce9-hubble-tls\") pod \"cilium-g6x7q\" (UID: \"69986cb3-a164-44c8-933d-f426f6a74ce9\") " pod="kube-system/cilium-g6x7q" May 27 02:46:22.328000 kubelet[2619]: I0527 02:46:22.327800 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-cilium-run\") pod \"cilium-g6x7q\" (UID: 
\"69986cb3-a164-44c8-933d-f426f6a74ce9\") " pod="kube-system/cilium-g6x7q" May 27 02:46:22.328559 kubelet[2619]: I0527 02:46:22.327815 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0bb19591-0156-4587-8775-6a9bfc711d3a-kube-proxy\") pod \"kube-proxy-bw6sv\" (UID: \"0bb19591-0156-4587-8775-6a9bfc711d3a\") " pod="kube-system/kube-proxy-bw6sv" May 27 02:46:22.328559 kubelet[2619]: I0527 02:46:22.327834 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6qh8\" (UniqueName: \"kubernetes.io/projected/69986cb3-a164-44c8-933d-f426f6a74ce9-kube-api-access-q6qh8\") pod \"cilium-g6x7q\" (UID: \"69986cb3-a164-44c8-933d-f426f6a74ce9\") " pod="kube-system/cilium-g6x7q" May 27 02:46:22.328559 kubelet[2619]: I0527 02:46:22.327852 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0bb19591-0156-4587-8775-6a9bfc711d3a-xtables-lock\") pod \"kube-proxy-bw6sv\" (UID: \"0bb19591-0156-4587-8775-6a9bfc711d3a\") " pod="kube-system/kube-proxy-bw6sv" May 27 02:46:22.328559 kubelet[2619]: I0527 02:46:22.327871 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bb19591-0156-4587-8775-6a9bfc711d3a-lib-modules\") pod \"kube-proxy-bw6sv\" (UID: \"0bb19591-0156-4587-8775-6a9bfc711d3a\") " pod="kube-system/kube-proxy-bw6sv" May 27 02:46:22.328559 kubelet[2619]: I0527 02:46:22.327891 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9lhm\" (UniqueName: \"kubernetes.io/projected/0bb19591-0156-4587-8775-6a9bfc711d3a-kube-api-access-c9lhm\") pod \"kube-proxy-bw6sv\" (UID: \"0bb19591-0156-4587-8775-6a9bfc711d3a\") " pod="kube-system/kube-proxy-bw6sv" May 27 02:46:22.328659 kubelet[2619]: I0527 02:46:22.327904 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-etc-cni-netd\") pod \"cilium-g6x7q\" (UID: \"69986cb3-a164-44c8-933d-f426f6a74ce9\") " pod="kube-system/cilium-g6x7q" May 27 02:46:22.328659 kubelet[2619]: I0527 02:46:22.327917 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-xtables-lock\") pod \"cilium-g6x7q\" (UID: \"69986cb3-a164-44c8-933d-f426f6a74ce9\") " pod="kube-system/cilium-g6x7q" May 27 02:46:22.328659 kubelet[2619]: I0527 02:46:22.327947 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69986cb3-a164-44c8-933d-f426f6a74ce9-clustermesh-secrets\") pod \"cilium-g6x7q\" (UID: \"69986cb3-a164-44c8-933d-f426f6a74ce9\") " pod="kube-system/cilium-g6x7q" May 27 02:46:22.328659 kubelet[2619]: I0527 02:46:22.327968 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-bpf-maps\") pod \"cilium-g6x7q\" (UID: \"69986cb3-a164-44c8-933d-f426f6a74ce9\") " pod="kube-system/cilium-g6x7q" May 27 02:46:22.328659 kubelet[2619]: I0527 02:46:22.327994 2619 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-hostproc\") pod \"cilium-g6x7q\" (UID: \"69986cb3-a164-44c8-933d-f426f6a74ce9\") " pod="kube-system/cilium-g6x7q" May 27 02:46:22.328659 kubelet[2619]: I0527 02:46:22.328008 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69986cb3-a164-44c8-933d-f426f6a74ce9-cilium-config-path\") pod \"cilium-g6x7q\" (UID: \"69986cb3-a164-44c8-933d-f426f6a74ce9\") " pod="kube-system/cilium-g6x7q" May 27 02:46:22.328767 kubelet[2619]: I0527 02:46:22.328024 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-host-proc-sys-net\") pod \"cilium-g6x7q\" (UID: \"69986cb3-a164-44c8-933d-f426f6a74ce9\") " pod="kube-system/cilium-g6x7q" May 27 02:46:22.328832 systemd[1]: Created slice kubepods-besteffort-pod0bb19591_0156_4587_8775_6a9bfc711d3a.slice - libcontainer container kubepods-besteffort-pod0bb19591_0156_4587_8775_6a9bfc711d3a.slice. May 27 02:46:22.346125 systemd[1]: Created slice kubepods-burstable-pod69986cb3_a164_44c8_933d_f426f6a74ce9.slice - libcontainer container kubepods-burstable-pod69986cb3_a164_44c8_933d_f426f6a74ce9.slice. May 27 02:46:22.402123 systemd[1]: Created slice kubepods-besteffort-poddcd1a841_5711_469c_b90b_74fc72f5427d.slice - libcontainer container kubepods-besteffort-poddcd1a841_5711_469c_b90b_74fc72f5427d.slice. May 27 02:46:22.429377 kubelet[2619]: I0527 02:46:22.429310 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dcd1a841-5711-469c-b90b-74fc72f5427d-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-n4frb\" (UID: \"dcd1a841-5711-469c-b90b-74fc72f5427d\") " pod="kube-system/cilium-operator-6c4d7847fc-n4frb" May 27 02:46:22.429486 kubelet[2619]: I0527 02:46:22.429395 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn6vg\" (UniqueName: \"kubernetes.io/projected/dcd1a841-5711-469c-b90b-74fc72f5427d-kube-api-access-dn6vg\") pod \"cilium-operator-6c4d7847fc-n4frb\" (UID: \"dcd1a841-5711-469c-b90b-74fc72f5427d\") " pod="kube-system/cilium-operator-6c4d7847fc-n4frb" May 27 02:46:22.647200 containerd[1531]: time="2025-05-27T02:46:22.647069803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bw6sv,Uid:0bb19591-0156-4587-8775-6a9bfc711d3a,Namespace:kube-system,Attempt:0,}" May 27 02:46:22.665288 containerd[1531]: time="2025-05-27T02:46:22.665237306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g6x7q,Uid:69986cb3-a164-44c8-933d-f426f6a74ce9,Namespace:kube-system,Attempt:0,}" May 27 02:46:22.679173 containerd[1531]: time="2025-05-27T02:46:22.679133276Z" level=info msg="connecting to shim 32d61dbfc4d08b542d74139e80746043e7ae2635d8fd13c3b7836292133e8af0" address="unix:///run/containerd/s/84eb0d903f76f3ab9ab0835aa5b1a359a9fd8cad4578c986e89b55dd20e143a6" namespace=k8s.io protocol=ttrpc version=3 May 27 02:46:22.686958 containerd[1531]: time="2025-05-27T02:46:22.686896611Z" level=info msg="connecting to shim bc7394f40ee1706ae613d38944d60c59c0770e3266979b4534b0e7c2113dfe11" 
address="unix:///run/containerd/s/8a4556be14a7bd8e8f717abc47f8f241f9c05ae352af68936f4927992888dadf" namespace=k8s.io protocol=ttrpc version=3 May 27 02:46:22.707090 containerd[1531]: time="2025-05-27T02:46:22.707041337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-n4frb,Uid:dcd1a841-5711-469c-b90b-74fc72f5427d,Namespace:kube-system,Attempt:0,}" May 27 02:46:22.712197 systemd[1]: Started cri-containerd-32d61dbfc4d08b542d74139e80746043e7ae2635d8fd13c3b7836292133e8af0.scope - libcontainer container 32d61dbfc4d08b542d74139e80746043e7ae2635d8fd13c3b7836292133e8af0. May 27 02:46:22.716672 systemd[1]: Started cri-containerd-bc7394f40ee1706ae613d38944d60c59c0770e3266979b4534b0e7c2113dfe11.scope - libcontainer container bc7394f40ee1706ae613d38944d60c59c0770e3266979b4534b0e7c2113dfe11. May 27 02:46:22.767992 containerd[1531]: time="2025-05-27T02:46:22.767915683Z" level=info msg="connecting to shim 2f3a2c2725ea51f320b8af90570b1b7b88d71eb8bc9a732977ecea21f19cf201" address="unix:///run/containerd/s/7b09dbecf4393f84d4e5dfe4743fd841b0efd0f20edbad55b889fa6e619ddce3" namespace=k8s.io protocol=ttrpc version=3 May 27 02:46:22.771541 containerd[1531]: time="2025-05-27T02:46:22.771500487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g6x7q,Uid:69986cb3-a164-44c8-933d-f426f6a74ce9,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc7394f40ee1706ae613d38944d60c59c0770e3266979b4534b0e7c2113dfe11\"" May 27 02:46:22.772802 containerd[1531]: time="2025-05-27T02:46:22.772758822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bw6sv,Uid:0bb19591-0156-4587-8775-6a9bfc711d3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"32d61dbfc4d08b542d74139e80746043e7ae2635d8fd13c3b7836292133e8af0\"" May 27 02:46:22.783375 containerd[1531]: time="2025-05-27T02:46:22.783332151Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 27 02:46:22.793970 containerd[1531]: time="2025-05-27T02:46:22.793529596Z" level=info msg="CreateContainer within sandbox \"32d61dbfc4d08b542d74139e80746043e7ae2635d8fd13c3b7836292133e8af0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 02:46:22.799216 systemd[1]: Started cri-containerd-2f3a2c2725ea51f320b8af90570b1b7b88d71eb8bc9a732977ecea21f19cf201.scope - libcontainer container 2f3a2c2725ea51f320b8af90570b1b7b88d71eb8bc9a732977ecea21f19cf201. 
May 27 02:46:22.807306 containerd[1531]: time="2025-05-27T02:46:22.807259764Z" level=info msg="Container efe5f1febb71385cd5d3ef41ccaed4884bf4d59f5adc8c9b82718f22aa62d68c: CDI devices from CRI Config.CDIDevices: []" May 27 02:46:22.815834 containerd[1531]: time="2025-05-27T02:46:22.815783589Z" level=info msg="CreateContainer within sandbox \"32d61dbfc4d08b542d74139e80746043e7ae2635d8fd13c3b7836292133e8af0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"efe5f1febb71385cd5d3ef41ccaed4884bf4d59f5adc8c9b82718f22aa62d68c\"" May 27 02:46:22.816636 containerd[1531]: time="2025-05-27T02:46:22.816603719Z" level=info msg="StartContainer for \"efe5f1febb71385cd5d3ef41ccaed4884bf4d59f5adc8c9b82718f22aa62d68c\"" May 27 02:46:22.819355 containerd[1531]: time="2025-05-27T02:46:22.819319592Z" level=info msg="connecting to shim efe5f1febb71385cd5d3ef41ccaed4884bf4d59f5adc8c9b82718f22aa62d68c" address="unix:///run/containerd/s/84eb0d903f76f3ab9ab0835aa5b1a359a9fd8cad4578c986e89b55dd20e143a6" protocol=ttrpc version=3 May 27 02:46:22.833770 containerd[1531]: time="2025-05-27T02:46:22.833699168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-n4frb,Uid:dcd1a841-5711-469c-b90b-74fc72f5427d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f3a2c2725ea51f320b8af90570b1b7b88d71eb8bc9a732977ecea21f19cf201\"" May 27 02:46:22.848126 systemd[1]: Started cri-containerd-efe5f1febb71385cd5d3ef41ccaed4884bf4d59f5adc8c9b82718f22aa62d68c.scope - libcontainer container efe5f1febb71385cd5d3ef41ccaed4884bf4d59f5adc8c9b82718f22aa62d68c. May 27 02:46:22.885306 containerd[1531]: time="2025-05-27T02:46:22.885076237Z" level=info msg="StartContainer for \"efe5f1febb71385cd5d3ef41ccaed4884bf4d59f5adc8c9b82718f22aa62d68c\" returns successfully" May 27 02:46:23.723329 kubelet[2619]: I0527 02:46:23.723170 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bw6sv" podStartSLOduration=1.723150876 podStartE2EDuration="1.723150876s" podCreationTimestamp="2025-05-27 02:46:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:46:23.722255386 +0000 UTC m=+9.122755859" watchObservedRunningTime="2025-05-27 02:46:23.723150876 +0000 UTC m=+9.123651349" May 27 02:46:26.445704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3678056399.mount: Deactivated successfully. May 27 02:46:27.463068 update_engine[1515]: I20250527 02:46:27.462993 1515 update_attempter.cc:509] Updating boot flags... 
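The pod-startup line for kube-proxy-bw6sv is internally consistent: both pull timestamps are the zero time, and the logged podStartE2EDuration of 1.723150876s is exactly watchObservedRunningTime minus podCreationTimestamp. A tiny sketch that reproduces the figure from the two timestamps in that entry (the monotonic "m=+…" suffix is dropped before parsing):

```go
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	// Go's default time.Time format, as printed in the kubelet log entry.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-05-27 02:46:22 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	running, err := time.Parse(layout, "2025-05-27 02:46:23.723150876 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	// Prints 1.723150876s, matching the logged podStartE2EDuration.
	fmt.Println(running.Sub(created))
}
```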
May 27 02:46:30.637311 containerd[1531]: time="2025-05-27T02:46:30.636909486Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:46:30.637716 containerd[1531]: time="2025-05-27T02:46:30.637679332Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 27 02:46:30.638388 containerd[1531]: time="2025-05-27T02:46:30.638350017Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:46:30.639847 containerd[1531]: time="2025-05-27T02:46:30.639751749Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.856377677s" May 27 02:46:30.639847 containerd[1531]: time="2025-05-27T02:46:30.639792029Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 27 02:46:30.642067 containerd[1531]: time="2025-05-27T02:46:30.642038807Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 27 02:46:30.646211 containerd[1531]: time="2025-05-27T02:46:30.646169561Z" level=info msg="CreateContainer within sandbox \"bc7394f40ee1706ae613d38944d60c59c0770e3266979b4534b0e7c2113dfe11\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 27 02:46:30.653497 containerd[1531]: time="2025-05-27T02:46:30.653453700Z" level=info msg="Container a8478a0e752f691be30badb1b49a98d42088c0bf5eaf9fe9316a01fac2fd6cbe: CDI devices from CRI Config.CDIDevices: []" May 27 02:46:30.656632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2678591687.mount: Deactivated successfully. May 27 02:46:30.669643 containerd[1531]: time="2025-05-27T02:46:30.669592431Z" level=info msg="CreateContainer within sandbox \"bc7394f40ee1706ae613d38944d60c59c0770e3266979b4534b0e7c2113dfe11\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a8478a0e752f691be30badb1b49a98d42088c0bf5eaf9fe9316a01fac2fd6cbe\"" May 27 02:46:30.670346 containerd[1531]: time="2025-05-27T02:46:30.670220036Z" level=info msg="StartContainer for \"a8478a0e752f691be30badb1b49a98d42088c0bf5eaf9fe9316a01fac2fd6cbe\"" May 27 02:46:30.671094 containerd[1531]: time="2025-05-27T02:46:30.671012322Z" level=info msg="connecting to shim a8478a0e752f691be30badb1b49a98d42088c0bf5eaf9fe9316a01fac2fd6cbe" address="unix:///run/containerd/s/8a4556be14a7bd8e8f717abc47f8f241f9c05ae352af68936f4927992888dadf" protocol=ttrpc version=3 May 27 02:46:30.714164 systemd[1]: Started cri-containerd-a8478a0e752f691be30badb1b49a98d42088c0bf5eaf9fe9316a01fac2fd6cbe.scope - libcontainer container a8478a0e752f691be30badb1b49a98d42088c0bf5eaf9fe9316a01fac2fd6cbe. 
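After the pull completes, containerd records the image under its digest reference only (the repo tag field in the log is empty). A sketch of reading the image back and asking for its size via the Go client follows; note that Image.Size sums the blobs referenced by the manifest, which does not have to line up exactly with either byte count quoted in the log line.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Digest reference as reported in the "Pulled image" entry (repo tag is empty).
	ref := "quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
	img, err := client.GetImage(ctx, ref)
	if err != nil {
		log.Fatal(err)
	}
	size, err := img.Size(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s content size=%d bytes\n", img.Name(), size)
}
```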
May 27 02:46:30.759450 containerd[1531]: time="2025-05-27T02:46:30.759387320Z" level=info msg="StartContainer for \"a8478a0e752f691be30badb1b49a98d42088c0bf5eaf9fe9316a01fac2fd6cbe\" returns successfully" May 27 02:46:30.810802 systemd[1]: cri-containerd-a8478a0e752f691be30badb1b49a98d42088c0bf5eaf9fe9316a01fac2fd6cbe.scope: Deactivated successfully. May 27 02:46:30.837766 containerd[1531]: time="2025-05-27T02:46:30.837711275Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a8478a0e752f691be30badb1b49a98d42088c0bf5eaf9fe9316a01fac2fd6cbe\" id:\"a8478a0e752f691be30badb1b49a98d42088c0bf5eaf9fe9316a01fac2fd6cbe\" pid:3062 exited_at:{seconds:1748313990 nanos:831065581}" May 27 02:46:30.838523 containerd[1531]: time="2025-05-27T02:46:30.838480522Z" level=info msg="received exit event container_id:\"a8478a0e752f691be30badb1b49a98d42088c0bf5eaf9fe9316a01fac2fd6cbe\" id:\"a8478a0e752f691be30badb1b49a98d42088c0bf5eaf9fe9316a01fac2fd6cbe\" pid:3062 exited_at:{seconds:1748313990 nanos:831065581}" May 27 02:46:30.870430 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8478a0e752f691be30badb1b49a98d42088c0bf5eaf9fe9316a01fac2fd6cbe-rootfs.mount: Deactivated successfully. May 27 02:46:31.745032 containerd[1531]: time="2025-05-27T02:46:31.744983795Z" level=info msg="CreateContainer within sandbox \"bc7394f40ee1706ae613d38944d60c59c0770e3266979b4534b0e7c2113dfe11\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 27 02:46:31.763914 containerd[1531]: time="2025-05-27T02:46:31.763868821Z" level=info msg="Container d3b01473ef3a02884e6e203ae14f539d742ef2129b7bc6cec37d60ad969b4a8b: CDI devices from CRI Config.CDIDevices: []" May 27 02:46:31.770928 containerd[1531]: time="2025-05-27T02:46:31.770883916Z" level=info msg="CreateContainer within sandbox \"bc7394f40ee1706ae613d38944d60c59c0770e3266979b4534b0e7c2113dfe11\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d3b01473ef3a02884e6e203ae14f539d742ef2129b7bc6cec37d60ad969b4a8b\"" May 27 02:46:31.773831 containerd[1531]: time="2025-05-27T02:46:31.773798898Z" level=info msg="StartContainer for \"d3b01473ef3a02884e6e203ae14f539d742ef2129b7bc6cec37d60ad969b4a8b\"" May 27 02:46:31.774779 containerd[1531]: time="2025-05-27T02:46:31.774713865Z" level=info msg="connecting to shim d3b01473ef3a02884e6e203ae14f539d742ef2129b7bc6cec37d60ad969b4a8b" address="unix:///run/containerd/s/8a4556be14a7bd8e8f717abc47f8f241f9c05ae352af68936f4927992888dadf" protocol=ttrpc version=3 May 27 02:46:31.797152 systemd[1]: Started cri-containerd-d3b01473ef3a02884e6e203ae14f539d742ef2129b7bc6cec37d60ad969b4a8b.scope - libcontainer container d3b01473ef3a02884e6e203ae14f539d742ef2129b7bc6cec37d60ad969b4a8b. May 27 02:46:31.833245 containerd[1531]: time="2025-05-27T02:46:31.833198718Z" level=info msg="StartContainer for \"d3b01473ef3a02884e6e203ae14f539d742ef2129b7bc6cec37d60ad969b4a8b\" returns successfully" May 27 02:46:31.854276 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 27 02:46:31.854743 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 27 02:46:31.855286 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 27 02:46:31.857122 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
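The "TaskExit event in podsandbox handler" and "received exit event" lines above are deliveries from containerd's event service. A sketch of subscribing to that same stream from a Go program (same socket/namespace assumptions; envelope fields are printed as-is):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Each envelope carries a timestamp, topic (e.g. /tasks/exit) and payload.
	ch, errs := client.Subscribe(ctx)
	for {
		select {
		case e, ok := <-ch:
			if !ok {
				return
			}
			fmt.Println(e.Timestamp, e.Topic)
		case err := <-errs:
			log.Fatal(err)
		}
	}
}
```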
May 27 02:46:31.859361 containerd[1531]: time="2025-05-27T02:46:31.858074030Z" level=info msg="received exit event container_id:\"d3b01473ef3a02884e6e203ae14f539d742ef2129b7bc6cec37d60ad969b4a8b\" id:\"d3b01473ef3a02884e6e203ae14f539d742ef2129b7bc6cec37d60ad969b4a8b\" pid:3106 exited_at:{seconds:1748313991 nanos:857791068}" May 27 02:46:31.859361 containerd[1531]: time="2025-05-27T02:46:31.858210991Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d3b01473ef3a02884e6e203ae14f539d742ef2129b7bc6cec37d60ad969b4a8b\" id:\"d3b01473ef3a02884e6e203ae14f539d742ef2129b7bc6cec37d60ad969b4a8b\" pid:3106 exited_at:{seconds:1748313991 nanos:857791068}" May 27 02:46:31.858607 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 27 02:46:31.859021 systemd[1]: cri-containerd-d3b01473ef3a02884e6e203ae14f539d742ef2129b7bc6cec37d60ad969b4a8b.scope: Deactivated successfully. May 27 02:46:31.882717 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 02:46:32.451846 containerd[1531]: time="2025-05-27T02:46:32.451787941Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:46:32.453291 containerd[1531]: time="2025-05-27T02:46:32.453255632Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 27 02:46:32.454269 containerd[1531]: time="2025-05-27T02:46:32.454213719Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:46:32.455673 containerd[1531]: time="2025-05-27T02:46:32.455535089Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.813347921s" May 27 02:46:32.455673 containerd[1531]: time="2025-05-27T02:46:32.455579409Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 27 02:46:32.460683 containerd[1531]: time="2025-05-27T02:46:32.460611326Z" level=info msg="CreateContainer within sandbox \"2f3a2c2725ea51f320b8af90570b1b7b88d71eb8bc9a732977ecea21f19cf201\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 27 02:46:32.486369 containerd[1531]: time="2025-05-27T02:46:32.486227995Z" level=info msg="Container 20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07: CDI devices from CRI Config.CDIDevices: []" May 27 02:46:32.498510 containerd[1531]: time="2025-05-27T02:46:32.498461925Z" level=info msg="CreateContainer within sandbox \"2f3a2c2725ea51f320b8af90570b1b7b88d71eb8bc9a732977ecea21f19cf201\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07\"" May 27 02:46:32.499046 containerd[1531]: time="2025-05-27T02:46:32.499009409Z" level=info 
msg="StartContainer for \"20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07\"" May 27 02:46:32.499912 containerd[1531]: time="2025-05-27T02:46:32.499879376Z" level=info msg="connecting to shim 20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07" address="unix:///run/containerd/s/7b09dbecf4393f84d4e5dfe4743fd841b0efd0f20edbad55b889fa6e619ddce3" protocol=ttrpc version=3 May 27 02:46:32.524159 systemd[1]: Started cri-containerd-20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07.scope - libcontainer container 20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07. May 27 02:46:32.548967 containerd[1531]: time="2025-05-27T02:46:32.548916737Z" level=info msg="StartContainer for \"20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07\" returns successfully" May 27 02:46:32.757658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3b01473ef3a02884e6e203ae14f539d742ef2129b7bc6cec37d60ad969b4a8b-rootfs.mount: Deactivated successfully. May 27 02:46:32.798041 containerd[1531]: time="2025-05-27T02:46:32.797994215Z" level=info msg="CreateContainer within sandbox \"bc7394f40ee1706ae613d38944d60c59c0770e3266979b4534b0e7c2113dfe11\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 27 02:46:32.801218 kubelet[2619]: I0527 02:46:32.801094 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-n4frb" podStartSLOduration=1.180536177 podStartE2EDuration="10.801075038s" podCreationTimestamp="2025-05-27 02:46:22 +0000 UTC" firstStartedPulling="2025-05-27 02:46:22.835851314 +0000 UTC m=+8.236351747" lastFinishedPulling="2025-05-27 02:46:32.456390135 +0000 UTC m=+17.856890608" observedRunningTime="2025-05-27 02:46:32.773075431 +0000 UTC m=+18.173575904" watchObservedRunningTime="2025-05-27 02:46:32.801075038 +0000 UTC m=+18.201575511" May 27 02:46:32.815899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount893971565.mount: Deactivated successfully. May 27 02:46:32.822735 containerd[1531]: time="2025-05-27T02:46:32.822684917Z" level=info msg="Container cebb7c337c05f8d181327fc7951c078f70519656b3948e677713138927fc53c6: CDI devices from CRI Config.CDIDevices: []" May 27 02:46:32.831313 containerd[1531]: time="2025-05-27T02:46:32.831261180Z" level=info msg="CreateContainer within sandbox \"bc7394f40ee1706ae613d38944d60c59c0770e3266979b4534b0e7c2113dfe11\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cebb7c337c05f8d181327fc7951c078f70519656b3948e677713138927fc53c6\"" May 27 02:46:32.832064 containerd[1531]: time="2025-05-27T02:46:32.832029466Z" level=info msg="StartContainer for \"cebb7c337c05f8d181327fc7951c078f70519656b3948e677713138927fc53c6\"" May 27 02:46:32.833611 containerd[1531]: time="2025-05-27T02:46:32.833574877Z" level=info msg="connecting to shim cebb7c337c05f8d181327fc7951c078f70519656b3948e677713138927fc53c6" address="unix:///run/containerd/s/8a4556be14a7bd8e8f717abc47f8f241f9c05ae352af68936f4927992888dadf" protocol=ttrpc version=3 May 27 02:46:32.853148 systemd[1]: Started cri-containerd-cebb7c337c05f8d181327fc7951c078f70519656b3948e677713138927fc53c6.scope - libcontainer container cebb7c337c05f8d181327fc7951c078f70519656b3948e677713138927fc53c6. 
May 27 02:46:32.925821 containerd[1531]: time="2025-05-27T02:46:32.925685997Z" level=info msg="StartContainer for \"cebb7c337c05f8d181327fc7951c078f70519656b3948e677713138927fc53c6\" returns successfully" May 27 02:46:32.933313 systemd[1]: cri-containerd-cebb7c337c05f8d181327fc7951c078f70519656b3948e677713138927fc53c6.scope: Deactivated successfully. May 27 02:46:32.933840 systemd[1]: cri-containerd-cebb7c337c05f8d181327fc7951c078f70519656b3948e677713138927fc53c6.scope: Consumed 50ms CPU time, 4.7M memory peak, 2M read from disk. May 27 02:46:32.941652 containerd[1531]: time="2025-05-27T02:46:32.941559394Z" level=info msg="received exit event container_id:\"cebb7c337c05f8d181327fc7951c078f70519656b3948e677713138927fc53c6\" id:\"cebb7c337c05f8d181327fc7951c078f70519656b3948e677713138927fc53c6\" pid:3205 exited_at:{seconds:1748313992 nanos:941316312}" May 27 02:46:32.941880 containerd[1531]: time="2025-05-27T02:46:32.941672995Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cebb7c337c05f8d181327fc7951c078f70519656b3948e677713138927fc53c6\" id:\"cebb7c337c05f8d181327fc7951c078f70519656b3948e677713138927fc53c6\" pid:3205 exited_at:{seconds:1748313992 nanos:941316312}" May 27 02:46:33.762902 containerd[1531]: time="2025-05-27T02:46:33.762241953Z" level=info msg="CreateContainer within sandbox \"bc7394f40ee1706ae613d38944d60c59c0770e3266979b4534b0e7c2113dfe11\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 27 02:46:33.772105 containerd[1531]: time="2025-05-27T02:46:33.772056622Z" level=info msg="Container b4b59c53e614546e02da667474a2f5c949fa3266ec336ce5f3c8d41907c911ba: CDI devices from CRI Config.CDIDevices: []" May 27 02:46:33.781701 containerd[1531]: time="2025-05-27T02:46:33.781644049Z" level=info msg="CreateContainer within sandbox \"bc7394f40ee1706ae613d38944d60c59c0770e3266979b4534b0e7c2113dfe11\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b4b59c53e614546e02da667474a2f5c949fa3266ec336ce5f3c8d41907c911ba\"" May 27 02:46:33.783462 containerd[1531]: time="2025-05-27T02:46:33.783426782Z" level=info msg="StartContainer for \"b4b59c53e614546e02da667474a2f5c949fa3266ec336ce5f3c8d41907c911ba\"" May 27 02:46:33.785971 containerd[1531]: time="2025-05-27T02:46:33.784444829Z" level=info msg="connecting to shim b4b59c53e614546e02da667474a2f5c949fa3266ec336ce5f3c8d41907c911ba" address="unix:///run/containerd/s/8a4556be14a7bd8e8f717abc47f8f241f9c05ae352af68936f4927992888dadf" protocol=ttrpc version=3 May 27 02:46:33.805090 systemd[1]: Started cri-containerd-b4b59c53e614546e02da667474a2f5c949fa3266ec336ce5f3c8d41907c911ba.scope - libcontainer container b4b59c53e614546e02da667474a2f5c949fa3266ec336ce5f3c8d41907c911ba. May 27 02:46:33.830055 systemd[1]: cri-containerd-b4b59c53e614546e02da667474a2f5c949fa3266ec336ce5f3c8d41907c911ba.scope: Deactivated successfully. 
May 27 02:46:33.832623 containerd[1531]: time="2025-05-27T02:46:33.832403167Z" level=info msg="received exit event container_id:\"b4b59c53e614546e02da667474a2f5c949fa3266ec336ce5f3c8d41907c911ba\" id:\"b4b59c53e614546e02da667474a2f5c949fa3266ec336ce5f3c8d41907c911ba\" pid:3244 exited_at:{seconds:1748313993 nanos:830203351}" May 27 02:46:33.833081 containerd[1531]: time="2025-05-27T02:46:33.833056891Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4b59c53e614546e02da667474a2f5c949fa3266ec336ce5f3c8d41907c911ba\" id:\"b4b59c53e614546e02da667474a2f5c949fa3266ec336ce5f3c8d41907c911ba\" pid:3244 exited_at:{seconds:1748313993 nanos:830203351}" May 27 02:46:33.839925 containerd[1531]: time="2025-05-27T02:46:33.839891100Z" level=info msg="StartContainer for \"b4b59c53e614546e02da667474a2f5c949fa3266ec336ce5f3c8d41907c911ba\" returns successfully" May 27 02:46:33.863486 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4b59c53e614546e02da667474a2f5c949fa3266ec336ce5f3c8d41907c911ba-rootfs.mount: Deactivated successfully. May 27 02:46:34.770131 containerd[1531]: time="2025-05-27T02:46:34.770081368Z" level=info msg="CreateContainer within sandbox \"bc7394f40ee1706ae613d38944d60c59c0770e3266979b4534b0e7c2113dfe11\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 27 02:46:34.787965 containerd[1531]: time="2025-05-27T02:46:34.787592405Z" level=info msg="Container b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320: CDI devices from CRI Config.CDIDevices: []" May 27 02:46:34.790629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3527138318.mount: Deactivated successfully. May 27 02:46:34.794929 containerd[1531]: time="2025-05-27T02:46:34.794866694Z" level=info msg="CreateContainer within sandbox \"bc7394f40ee1706ae613d38944d60c59c0770e3266979b4534b0e7c2113dfe11\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320\"" May 27 02:46:34.795653 containerd[1531]: time="2025-05-27T02:46:34.795632660Z" level=info msg="StartContainer for \"b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320\"" May 27 02:46:34.798720 containerd[1531]: time="2025-05-27T02:46:34.798691160Z" level=info msg="connecting to shim b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320" address="unix:///run/containerd/s/8a4556be14a7bd8e8f717abc47f8f241f9c05ae352af68936f4927992888dadf" protocol=ttrpc version=3 May 27 02:46:34.834159 systemd[1]: Started cri-containerd-b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320.scope - libcontainer container b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320. 
May 27 02:46:34.880941 containerd[1531]: time="2025-05-27T02:46:34.880887553Z" level=info msg="StartContainer for \"b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320\" returns successfully" May 27 02:46:35.048067 containerd[1531]: time="2025-05-27T02:46:35.047610821Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320\" id:\"0f30b11ffc339b9ff5b51f1aea90656452703452edcf743959721850e9d8c251\" pid:3313 exited_at:{seconds:1748313995 nanos:47349059}" May 27 02:46:35.137854 kubelet[2619]: I0527 02:46:35.137808 2619 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 02:46:35.202273 systemd[1]: Created slice kubepods-burstable-pod5917ee4c_1469_4635_89d9_08b5dec96401.slice - libcontainer container kubepods-burstable-pod5917ee4c_1469_4635_89d9_08b5dec96401.slice. May 27 02:46:35.208490 systemd[1]: Created slice kubepods-burstable-pod629457c6_32c8_494d_be3d_9b89820bfa39.slice - libcontainer container kubepods-burstable-pod629457c6_32c8_494d_be3d_9b89820bfa39.slice. May 27 02:46:35.224610 kubelet[2619]: I0527 02:46:35.224562 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/629457c6-32c8-494d-be3d-9b89820bfa39-config-volume\") pod \"coredns-674b8bbfcf-cxblk\" (UID: \"629457c6-32c8-494d-be3d-9b89820bfa39\") " pod="kube-system/coredns-674b8bbfcf-cxblk" May 27 02:46:35.224610 kubelet[2619]: I0527 02:46:35.224607 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-629nb\" (UniqueName: \"kubernetes.io/projected/629457c6-32c8-494d-be3d-9b89820bfa39-kube-api-access-629nb\") pod \"coredns-674b8bbfcf-cxblk\" (UID: \"629457c6-32c8-494d-be3d-9b89820bfa39\") " pod="kube-system/coredns-674b8bbfcf-cxblk" May 27 02:46:35.224750 kubelet[2619]: I0527 02:46:35.224633 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5917ee4c-1469-4635-89d9-08b5dec96401-config-volume\") pod \"coredns-674b8bbfcf-j7qpg\" (UID: \"5917ee4c-1469-4635-89d9-08b5dec96401\") " pod="kube-system/coredns-674b8bbfcf-j7qpg" May 27 02:46:35.224750 kubelet[2619]: I0527 02:46:35.224650 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csmcz\" (UniqueName: \"kubernetes.io/projected/5917ee4c-1469-4635-89d9-08b5dec96401-kube-api-access-csmcz\") pod \"coredns-674b8bbfcf-j7qpg\" (UID: \"5917ee4c-1469-4635-89d9-08b5dec96401\") " pod="kube-system/coredns-674b8bbfcf-j7qpg" May 27 02:46:35.508009 containerd[1531]: time="2025-05-27T02:46:35.507929102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j7qpg,Uid:5917ee4c-1469-4635-89d9-08b5dec96401,Namespace:kube-system,Attempt:0,}" May 27 02:46:35.512762 containerd[1531]: time="2025-05-27T02:46:35.512726413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cxblk,Uid:629457c6-32c8-494d-be3d-9b89820bfa39,Namespace:kube-system,Attempt:0,}" May 27 02:46:35.783085 kubelet[2619]: I0527 02:46:35.782850 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g6x7q" podStartSLOduration=5.923953731 podStartE2EDuration="13.78283219s" podCreationTimestamp="2025-05-27 02:46:22 +0000 UTC" firstStartedPulling="2025-05-27 02:46:22.782999667 +0000 UTC 
m=+8.183500140" lastFinishedPulling="2025-05-27 02:46:30.641878126 +0000 UTC m=+16.042378599" observedRunningTime="2025-05-27 02:46:35.782083065 +0000 UTC m=+21.182583618" watchObservedRunningTime="2025-05-27 02:46:35.78283219 +0000 UTC m=+21.183332663" May 27 02:46:37.156048 systemd-networkd[1437]: cilium_host: Link UP May 27 02:46:37.157434 systemd-networkd[1437]: cilium_net: Link UP May 27 02:46:37.157607 systemd-networkd[1437]: cilium_host: Gained carrier May 27 02:46:37.157717 systemd-networkd[1437]: cilium_net: Gained carrier May 27 02:46:37.236301 systemd-networkd[1437]: cilium_vxlan: Link UP May 27 02:46:37.236309 systemd-networkd[1437]: cilium_vxlan: Gained carrier May 27 02:46:37.283143 systemd-networkd[1437]: cilium_net: Gained IPv6LL May 27 02:46:37.537967 kernel: NET: Registered PF_ALG protocol family May 27 02:46:37.898191 systemd-networkd[1437]: cilium_host: Gained IPv6LL May 27 02:46:38.118100 systemd-networkd[1437]: lxc_health: Link UP May 27 02:46:38.119269 systemd-networkd[1437]: lxc_health: Gained carrier May 27 02:46:38.590482 systemd-networkd[1437]: lxc87411b584f49: Link UP May 27 02:46:38.598013 systemd-networkd[1437]: lxca2da1c3f0095: Link UP May 27 02:46:38.606965 kernel: eth0: renamed from tmp3f093 May 27 02:46:38.607299 kernel: eth0: renamed from tmp5774b May 27 02:46:38.607587 systemd-networkd[1437]: lxca2da1c3f0095: Gained carrier May 27 02:46:38.608643 systemd-networkd[1437]: lxc87411b584f49: Gained carrier May 27 02:46:39.306082 systemd-networkd[1437]: cilium_vxlan: Gained IPv6LL May 27 02:46:39.690140 systemd-networkd[1437]: lxca2da1c3f0095: Gained IPv6LL May 27 02:46:39.946275 systemd-networkd[1437]: lxc87411b584f49: Gained IPv6LL May 27 02:46:40.074210 systemd-networkd[1437]: lxc_health: Gained IPv6LL May 27 02:46:42.226945 containerd[1531]: time="2025-05-27T02:46:42.226764280Z" level=info msg="connecting to shim 3f093bf26dbe3af0d924c60f9d10877f298547153bfc59785e5fbf1ed73a4919" address="unix:///run/containerd/s/e12b6ffecbb8e50476d5f754f0f13d05f7786f3ecd3579f5db560d19618ae6a8" namespace=k8s.io protocol=ttrpc version=3 May 27 02:46:42.228002 containerd[1531]: time="2025-05-27T02:46:42.227970686Z" level=info msg="connecting to shim 5774bc38163139fe43bf793107f18e5dc24e4cd7c34dcbc969d1dfebf731c2b9" address="unix:///run/containerd/s/7b465859fb7c46cbc26a725b81430fafbe10cb8da65790597487782b9d767fb1" namespace=k8s.io protocol=ttrpc version=3 May 27 02:46:42.262116 systemd[1]: Started cri-containerd-3f093bf26dbe3af0d924c60f9d10877f298547153bfc59785e5fbf1ed73a4919.scope - libcontainer container 3f093bf26dbe3af0d924c60f9d10877f298547153bfc59785e5fbf1ed73a4919. May 27 02:46:42.263338 systemd[1]: Started cri-containerd-5774bc38163139fe43bf793107f18e5dc24e4cd7c34dcbc969d1dfebf731c2b9.scope - libcontainer container 5774bc38163139fe43bf793107f18e5dc24e4cd7c34dcbc969d1dfebf731c2b9. 
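The systemd-networkd entries above track carrier and IPv6LL state for the Cilium datapath interfaces (cilium_host, cilium_net, cilium_vxlan, lxc_health, and the per-pod lxc* veths). A Linux-only sketch of enumerating those links and their operational state, assuming the github.com/vishvananda/netlink package:

```go
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/vishvananda/netlink"
)

func main() {
	links, err := netlink.LinkList()
	if err != nil {
		log.Fatal(err)
	}
	for _, l := range links {
		a := l.Attrs()
		// Only the Cilium-created devices mentioned in the log above.
		if strings.HasPrefix(a.Name, "cilium_") || strings.HasPrefix(a.Name, "lxc") {
			fmt.Printf("%-16s type=%-8s state=%s\n", a.Name, l.Type(), a.OperState)
		}
	}
}
```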
May 27 02:46:42.274895 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 02:46:42.277839 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 02:46:42.298961 containerd[1531]: time="2025-05-27T02:46:42.298823787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cxblk,Uid:629457c6-32c8-494d-be3d-9b89820bfa39,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f093bf26dbe3af0d924c60f9d10877f298547153bfc59785e5fbf1ed73a4919\"" May 27 02:46:42.301109 containerd[1531]: time="2025-05-27T02:46:42.301079518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j7qpg,Uid:5917ee4c-1469-4635-89d9-08b5dec96401,Namespace:kube-system,Attempt:0,} returns sandbox id \"5774bc38163139fe43bf793107f18e5dc24e4cd7c34dcbc969d1dfebf731c2b9\"" May 27 02:46:42.306219 containerd[1531]: time="2025-05-27T02:46:42.306123903Z" level=info msg="CreateContainer within sandbox \"3f093bf26dbe3af0d924c60f9d10877f298547153bfc59785e5fbf1ed73a4919\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 02:46:42.307720 containerd[1531]: time="2025-05-27T02:46:42.307690070Z" level=info msg="CreateContainer within sandbox \"5774bc38163139fe43bf793107f18e5dc24e4cd7c34dcbc969d1dfebf731c2b9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 02:46:42.316214 containerd[1531]: time="2025-05-27T02:46:42.316169431Z" level=info msg="Container 1e47dd816e7fe800ee3eb91d9c286f61259fa46ffb76947c86adbb769b514b5f: CDI devices from CRI Config.CDIDevices: []" May 27 02:46:42.318483 containerd[1531]: time="2025-05-27T02:46:42.317973280Z" level=info msg="Container e005122cb3bcda1fd9f2d2a7dd3ca95b9c6a05cb0eaaf9a49e78fe7d90cbf46f: CDI devices from CRI Config.CDIDevices: []" May 27 02:46:42.324013 containerd[1531]: time="2025-05-27T02:46:42.323971349Z" level=info msg="CreateContainer within sandbox \"5774bc38163139fe43bf793107f18e5dc24e4cd7c34dcbc969d1dfebf731c2b9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e005122cb3bcda1fd9f2d2a7dd3ca95b9c6a05cb0eaaf9a49e78fe7d90cbf46f\"" May 27 02:46:42.324423 containerd[1531]: time="2025-05-27T02:46:42.324390311Z" level=info msg="CreateContainer within sandbox \"3f093bf26dbe3af0d924c60f9d10877f298547153bfc59785e5fbf1ed73a4919\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1e47dd816e7fe800ee3eb91d9c286f61259fa46ffb76947c86adbb769b514b5f\"" May 27 02:46:42.324568 containerd[1531]: time="2025-05-27T02:46:42.324541751Z" level=info msg="StartContainer for \"e005122cb3bcda1fd9f2d2a7dd3ca95b9c6a05cb0eaaf9a49e78fe7d90cbf46f\"" May 27 02:46:42.325351 containerd[1531]: time="2025-05-27T02:46:42.325324395Z" level=info msg="connecting to shim e005122cb3bcda1fd9f2d2a7dd3ca95b9c6a05cb0eaaf9a49e78fe7d90cbf46f" address="unix:///run/containerd/s/7b465859fb7c46cbc26a725b81430fafbe10cb8da65790597487782b9d767fb1" protocol=ttrpc version=3 May 27 02:46:42.325686 containerd[1531]: time="2025-05-27T02:46:42.325585316Z" level=info msg="StartContainer for \"1e47dd816e7fe800ee3eb91d9c286f61259fa46ffb76947c86adbb769b514b5f\"" May 27 02:46:42.326808 containerd[1531]: time="2025-05-27T02:46:42.326695002Z" level=info msg="connecting to shim 1e47dd816e7fe800ee3eb91d9c286f61259fa46ffb76947c86adbb769b514b5f" address="unix:///run/containerd/s/e12b6ffecbb8e50476d5f754f0f13d05f7786f3ecd3579f5db560d19618ae6a8" protocol=ttrpc version=3 May 27 02:46:42.351097 
systemd[1]: Started cri-containerd-1e47dd816e7fe800ee3eb91d9c286f61259fa46ffb76947c86adbb769b514b5f.scope - libcontainer container 1e47dd816e7fe800ee3eb91d9c286f61259fa46ffb76947c86adbb769b514b5f. May 27 02:46:42.352048 systemd[1]: Started cri-containerd-e005122cb3bcda1fd9f2d2a7dd3ca95b9c6a05cb0eaaf9a49e78fe7d90cbf46f.scope - libcontainer container e005122cb3bcda1fd9f2d2a7dd3ca95b9c6a05cb0eaaf9a49e78fe7d90cbf46f. May 27 02:46:42.385104 containerd[1531]: time="2025-05-27T02:46:42.381615027Z" level=info msg="StartContainer for \"1e47dd816e7fe800ee3eb91d9c286f61259fa46ffb76947c86adbb769b514b5f\" returns successfully" May 27 02:46:42.385104 containerd[1531]: time="2025-05-27T02:46:42.384339840Z" level=info msg="StartContainer for \"e005122cb3bcda1fd9f2d2a7dd3ca95b9c6a05cb0eaaf9a49e78fe7d90cbf46f\" returns successfully" May 27 02:46:42.855450 kubelet[2619]: I0527 02:46:42.855373 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-j7qpg" podStartSLOduration=20.855355072000002 podStartE2EDuration="20.855355072s" podCreationTimestamp="2025-05-27 02:46:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:46:42.855330072 +0000 UTC m=+28.255830585" watchObservedRunningTime="2025-05-27 02:46:42.855355072 +0000 UTC m=+28.255855505" May 27 02:46:43.241642 systemd[1]: Started sshd@7-10.0.0.44:22-10.0.0.1:53942.service - OpenSSH per-connection server daemon (10.0.0.1:53942). May 27 02:46:43.303604 sshd[3972]: Accepted publickey for core from 10.0.0.1 port 53942 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:46:43.304916 sshd-session[3972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:46:43.309429 systemd-logind[1506]: New session 8 of user core. May 27 02:46:43.320096 systemd[1]: Started session-8.scope - Session 8 of User core. May 27 02:46:43.440977 sshd[3974]: Connection closed by 10.0.0.1 port 53942 May 27 02:46:43.441490 sshd-session[3972]: pam_unix(sshd:session): session closed for user core May 27 02:46:43.444755 systemd[1]: sshd@7-10.0.0.44:22-10.0.0.1:53942.service: Deactivated successfully. May 27 02:46:43.446266 systemd[1]: session-8.scope: Deactivated successfully. May 27 02:46:43.448877 systemd-logind[1506]: Session 8 logged out. Waiting for processes to exit. May 27 02:46:43.449736 systemd-logind[1506]: Removed session 8. May 27 02:46:45.856703 kubelet[2619]: I0527 02:46:45.856664 2619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 02:46:45.872683 kubelet[2619]: I0527 02:46:45.872607 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cxblk" podStartSLOduration=23.872592721 podStartE2EDuration="23.872592721s" podCreationTimestamp="2025-05-27 02:46:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:46:42.877384139 +0000 UTC m=+28.277884612" watchObservedRunningTime="2025-05-27 02:46:45.872592721 +0000 UTC m=+31.273093194" May 27 02:46:48.453863 systemd[1]: Started sshd@8-10.0.0.44:22-10.0.0.1:53946.service - OpenSSH per-connection server daemon (10.0.0.1:53946). 
May 27 02:46:48.515349 sshd[3995]: Accepted publickey for core from 10.0.0.1 port 53946 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:46:48.519692 sshd-session[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:46:48.525321 systemd-logind[1506]: New session 9 of user core. May 27 02:46:48.536377 systemd[1]: Started session-9.scope - Session 9 of User core. May 27 02:46:48.678572 sshd[3997]: Connection closed by 10.0.0.1 port 53946 May 27 02:46:48.677700 sshd-session[3995]: pam_unix(sshd:session): session closed for user core May 27 02:46:48.684083 systemd[1]: sshd@8-10.0.0.44:22-10.0.0.1:53946.service: Deactivated successfully. May 27 02:46:48.686964 systemd[1]: session-9.scope: Deactivated successfully. May 27 02:46:48.691738 systemd-logind[1506]: Session 9 logged out. Waiting for processes to exit. May 27 02:46:48.694397 systemd-logind[1506]: Removed session 9. May 27 02:46:53.689169 systemd[1]: Started sshd@9-10.0.0.44:22-10.0.0.1:35714.service - OpenSSH per-connection server daemon (10.0.0.1:35714). May 27 02:46:53.752203 sshd[4019]: Accepted publickey for core from 10.0.0.1 port 35714 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:46:53.753342 sshd-session[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:46:53.757717 systemd-logind[1506]: New session 10 of user core. May 27 02:46:53.768090 systemd[1]: Started session-10.scope - Session 10 of User core. May 27 02:46:53.878598 sshd[4021]: Connection closed by 10.0.0.1 port 35714 May 27 02:46:53.878512 sshd-session[4019]: pam_unix(sshd:session): session closed for user core May 27 02:46:53.881452 systemd[1]: sshd@9-10.0.0.44:22-10.0.0.1:35714.service: Deactivated successfully. May 27 02:46:53.883056 systemd[1]: session-10.scope: Deactivated successfully. May 27 02:46:53.885052 systemd-logind[1506]: Session 10 logged out. Waiting for processes to exit. May 27 02:46:53.886353 systemd-logind[1506]: Removed session 10. May 27 02:46:58.900804 systemd[1]: Started sshd@10-10.0.0.44:22-10.0.0.1:35718.service - OpenSSH per-connection server daemon (10.0.0.1:35718). May 27 02:46:58.961269 sshd[4035]: Accepted publickey for core from 10.0.0.1 port 35718 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:46:58.962628 sshd-session[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:46:58.966984 systemd-logind[1506]: New session 11 of user core. May 27 02:46:58.976094 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 02:46:59.114249 sshd[4037]: Connection closed by 10.0.0.1 port 35718 May 27 02:46:59.114850 sshd-session[4035]: pam_unix(sshd:session): session closed for user core May 27 02:46:59.125863 systemd[1]: sshd@10-10.0.0.44:22-10.0.0.1:35718.service: Deactivated successfully. May 27 02:46:59.128957 systemd[1]: session-11.scope: Deactivated successfully. May 27 02:46:59.134409 systemd-logind[1506]: Session 11 logged out. Waiting for processes to exit. May 27 02:46:59.138421 systemd[1]: Started sshd@11-10.0.0.44:22-10.0.0.1:35726.service - OpenSSH per-connection server daemon (10.0.0.1:35726). May 27 02:46:59.140426 systemd-logind[1506]: Removed session 11. 
May 27 02:46:59.194292 sshd[4051]: Accepted publickey for core from 10.0.0.1 port 35726 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:46:59.195541 sshd-session[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:46:59.200119 systemd-logind[1506]: New session 12 of user core. May 27 02:46:59.209125 systemd[1]: Started session-12.scope - Session 12 of User core. May 27 02:46:59.385035 sshd[4054]: Connection closed by 10.0.0.1 port 35726 May 27 02:46:59.385618 sshd-session[4051]: pam_unix(sshd:session): session closed for user core May 27 02:46:59.401098 systemd[1]: sshd@11-10.0.0.44:22-10.0.0.1:35726.service: Deactivated successfully. May 27 02:46:59.403253 systemd[1]: session-12.scope: Deactivated successfully. May 27 02:46:59.404062 systemd-logind[1506]: Session 12 logged out. Waiting for processes to exit. May 27 02:46:59.406369 systemd[1]: Started sshd@12-10.0.0.44:22-10.0.0.1:35730.service - OpenSSH per-connection server daemon (10.0.0.1:35730). May 27 02:46:59.407347 systemd-logind[1506]: Removed session 12. May 27 02:46:59.461861 sshd[4066]: Accepted publickey for core from 10.0.0.1 port 35730 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:46:59.463184 sshd-session[4066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:46:59.467549 systemd-logind[1506]: New session 13 of user core. May 27 02:46:59.481297 systemd[1]: Started session-13.scope - Session 13 of User core. May 27 02:46:59.598040 sshd[4068]: Connection closed by 10.0.0.1 port 35730 May 27 02:46:59.598358 sshd-session[4066]: pam_unix(sshd:session): session closed for user core May 27 02:46:59.601614 systemd[1]: sshd@12-10.0.0.44:22-10.0.0.1:35730.service: Deactivated successfully. May 27 02:46:59.603550 systemd[1]: session-13.scope: Deactivated successfully. May 27 02:46:59.605974 systemd-logind[1506]: Session 13 logged out. Waiting for processes to exit. May 27 02:46:59.606841 systemd-logind[1506]: Removed session 13. May 27 02:47:04.617321 systemd[1]: Started sshd@13-10.0.0.44:22-10.0.0.1:60344.service - OpenSSH per-connection server daemon (10.0.0.1:60344). May 27 02:47:04.706310 sshd[4082]: Accepted publickey for core from 10.0.0.1 port 60344 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:47:04.708881 sshd-session[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:47:04.721248 systemd-logind[1506]: New session 14 of user core. May 27 02:47:04.729150 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 02:47:04.854363 sshd[4084]: Connection closed by 10.0.0.1 port 60344 May 27 02:47:04.855159 sshd-session[4082]: pam_unix(sshd:session): session closed for user core May 27 02:47:04.859253 systemd[1]: sshd@13-10.0.0.44:22-10.0.0.1:60344.service: Deactivated successfully. May 27 02:47:04.861337 systemd[1]: session-14.scope: Deactivated successfully. May 27 02:47:04.862578 systemd-logind[1506]: Session 14 logged out. Waiting for processes to exit. May 27 02:47:04.864781 systemd-logind[1506]: Removed session 14. May 27 02:47:09.871133 systemd[1]: Started sshd@14-10.0.0.44:22-10.0.0.1:60350.service - OpenSSH per-connection server daemon (10.0.0.1:60350). 
May 27 02:47:09.925791 sshd[4099]: Accepted publickey for core from 10.0.0.1 port 60350 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:47:09.927138 sshd-session[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:47:09.931683 systemd-logind[1506]: New session 15 of user core. May 27 02:47:09.939125 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 02:47:10.046886 sshd[4102]: Connection closed by 10.0.0.1 port 60350 May 27 02:47:10.047447 sshd-session[4099]: pam_unix(sshd:session): session closed for user core May 27 02:47:10.059314 systemd[1]: sshd@14-10.0.0.44:22-10.0.0.1:60350.service: Deactivated successfully. May 27 02:47:10.060802 systemd[1]: session-15.scope: Deactivated successfully. May 27 02:47:10.061459 systemd-logind[1506]: Session 15 logged out. Waiting for processes to exit. May 27 02:47:10.063377 systemd[1]: Started sshd@15-10.0.0.44:22-10.0.0.1:60352.service - OpenSSH per-connection server daemon (10.0.0.1:60352). May 27 02:47:10.064204 systemd-logind[1506]: Removed session 15. May 27 02:47:10.117718 sshd[4115]: Accepted publickey for core from 10.0.0.1 port 60352 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:47:10.119707 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:47:10.123908 systemd-logind[1506]: New session 16 of user core. May 27 02:47:10.134119 systemd[1]: Started session-16.scope - Session 16 of User core. May 27 02:47:10.329870 sshd[4117]: Connection closed by 10.0.0.1 port 60352 May 27 02:47:10.330621 sshd-session[4115]: pam_unix(sshd:session): session closed for user core May 27 02:47:10.344237 systemd[1]: sshd@15-10.0.0.44:22-10.0.0.1:60352.service: Deactivated successfully. May 27 02:47:10.347160 systemd[1]: session-16.scope: Deactivated successfully. May 27 02:47:10.347859 systemd-logind[1506]: Session 16 logged out. Waiting for processes to exit. May 27 02:47:10.350522 systemd[1]: Started sshd@16-10.0.0.44:22-10.0.0.1:60368.service - OpenSSH per-connection server daemon (10.0.0.1:60368). May 27 02:47:10.351128 systemd-logind[1506]: Removed session 16. May 27 02:47:10.405860 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 60368 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:47:10.407224 sshd-session[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:47:10.411033 systemd-logind[1506]: New session 17 of user core. May 27 02:47:10.419118 systemd[1]: Started session-17.scope - Session 17 of User core. May 27 02:47:11.111377 sshd[4131]: Connection closed by 10.0.0.1 port 60368 May 27 02:47:11.111678 sshd-session[4129]: pam_unix(sshd:session): session closed for user core May 27 02:47:11.123877 systemd[1]: sshd@16-10.0.0.44:22-10.0.0.1:60368.service: Deactivated successfully. May 27 02:47:11.128065 systemd[1]: session-17.scope: Deactivated successfully. May 27 02:47:11.129390 systemd-logind[1506]: Session 17 logged out. Waiting for processes to exit. May 27 02:47:11.133332 systemd[1]: Started sshd@17-10.0.0.44:22-10.0.0.1:60382.service - OpenSSH per-connection server daemon (10.0.0.1:60382). May 27 02:47:11.134054 systemd-logind[1506]: Removed session 17. 
May 27 02:47:11.189124 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 60382 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:47:11.190361 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:47:11.194047 systemd-logind[1506]: New session 18 of user core. May 27 02:47:11.206096 systemd[1]: Started session-18.scope - Session 18 of User core. May 27 02:47:11.425198 sshd[4153]: Connection closed by 10.0.0.1 port 60382 May 27 02:47:11.424597 sshd-session[4151]: pam_unix(sshd:session): session closed for user core May 27 02:47:11.437359 systemd[1]: sshd@17-10.0.0.44:22-10.0.0.1:60382.service: Deactivated successfully. May 27 02:47:11.439482 systemd[1]: session-18.scope: Deactivated successfully. May 27 02:47:11.440912 systemd-logind[1506]: Session 18 logged out. Waiting for processes to exit. May 27 02:47:11.443829 systemd[1]: Started sshd@18-10.0.0.44:22-10.0.0.1:60388.service - OpenSSH per-connection server daemon (10.0.0.1:60388). May 27 02:47:11.444737 systemd-logind[1506]: Removed session 18. May 27 02:47:11.498491 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 60388 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:47:11.499733 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:47:11.503746 systemd-logind[1506]: New session 19 of user core. May 27 02:47:11.514125 systemd[1]: Started session-19.scope - Session 19 of User core. May 27 02:47:11.624981 sshd[4166]: Connection closed by 10.0.0.1 port 60388 May 27 02:47:11.625338 sshd-session[4164]: pam_unix(sshd:session): session closed for user core May 27 02:47:11.629099 systemd[1]: sshd@18-10.0.0.44:22-10.0.0.1:60388.service: Deactivated successfully. May 27 02:47:11.630745 systemd[1]: session-19.scope: Deactivated successfully. May 27 02:47:11.631586 systemd-logind[1506]: Session 19 logged out. Waiting for processes to exit. May 27 02:47:11.632834 systemd-logind[1506]: Removed session 19. May 27 02:47:16.641195 systemd[1]: Started sshd@19-10.0.0.44:22-10.0.0.1:47108.service - OpenSSH per-connection server daemon (10.0.0.1:47108). May 27 02:47:16.703892 sshd[4183]: Accepted publickey for core from 10.0.0.1 port 47108 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:47:16.705079 sshd-session[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:47:16.709076 systemd-logind[1506]: New session 20 of user core. May 27 02:47:16.714069 systemd[1]: Started session-20.scope - Session 20 of User core. May 27 02:47:16.826638 sshd[4185]: Connection closed by 10.0.0.1 port 47108 May 27 02:47:16.827164 sshd-session[4183]: pam_unix(sshd:session): session closed for user core May 27 02:47:16.832054 systemd[1]: sshd@19-10.0.0.44:22-10.0.0.1:47108.service: Deactivated successfully. May 27 02:47:16.833928 systemd[1]: session-20.scope: Deactivated successfully. May 27 02:47:16.838384 systemd-logind[1506]: Session 20 logged out. Waiting for processes to exit. May 27 02:47:16.839477 systemd-logind[1506]: Removed session 20. May 27 02:47:21.842086 systemd[1]: Started sshd@20-10.0.0.44:22-10.0.0.1:47120.service - OpenSSH per-connection server daemon (10.0.0.1:47120). 
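The sshd entries in this stretch of the log are repeated publickey logins from 10.0.0.1 for the core user. A standard-library sketch for tallying such "Accepted publickey" lines per user/source from a saved journal dump; the file name is a placeholder, not something referenced in the log.

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"regexp"
)

func main() {
	// Matches lines like:
	//   Accepted publickey for core from 10.0.0.1 port 53942 ssh2: RSA SHA256:...
	re := regexp.MustCompile(`Accepted publickey for (\S+) from (\S+) port (\d+)`)

	f, err := os.Open("node.log") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	counts := map[string]int{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]+"@"+m[2]]++ // e.g. core@10.0.0.1
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	for k, n := range counts {
		fmt.Println(k, n)
	}
}
```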
May 27 02:47:21.900049 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 47120 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:47:21.901243 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:47:21.905610 systemd-logind[1506]: New session 21 of user core. May 27 02:47:21.915101 systemd[1]: Started session-21.scope - Session 21 of User core. May 27 02:47:22.024893 sshd[4201]: Connection closed by 10.0.0.1 port 47120 May 27 02:47:22.025439 sshd-session[4199]: pam_unix(sshd:session): session closed for user core May 27 02:47:22.029076 systemd[1]: sshd@20-10.0.0.44:22-10.0.0.1:47120.service: Deactivated successfully. May 27 02:47:22.030860 systemd[1]: session-21.scope: Deactivated successfully. May 27 02:47:22.031641 systemd-logind[1506]: Session 21 logged out. Waiting for processes to exit. May 27 02:47:22.032818 systemd-logind[1506]: Removed session 21. May 27 02:47:27.040518 systemd[1]: Started sshd@21-10.0.0.44:22-10.0.0.1:32786.service - OpenSSH per-connection server daemon (10.0.0.1:32786). May 27 02:47:27.099270 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 32786 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:47:27.100640 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:47:27.104991 systemd-logind[1506]: New session 22 of user core. May 27 02:47:27.117196 systemd[1]: Started session-22.scope - Session 22 of User core. May 27 02:47:27.233180 sshd[4219]: Connection closed by 10.0.0.1 port 32786 May 27 02:47:27.233733 sshd-session[4217]: pam_unix(sshd:session): session closed for user core May 27 02:47:27.248237 systemd[1]: sshd@21-10.0.0.44:22-10.0.0.1:32786.service: Deactivated successfully. May 27 02:47:27.250423 systemd[1]: session-22.scope: Deactivated successfully. May 27 02:47:27.251190 systemd-logind[1506]: Session 22 logged out. Waiting for processes to exit. May 27 02:47:27.253843 systemd[1]: Started sshd@22-10.0.0.44:22-10.0.0.1:32800.service - OpenSSH per-connection server daemon (10.0.0.1:32800). May 27 02:47:27.254578 systemd-logind[1506]: Removed session 22. May 27 02:47:27.303840 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 32800 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:47:27.305003 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:47:27.309027 systemd-logind[1506]: New session 23 of user core. May 27 02:47:27.319150 systemd[1]: Started session-23.scope - Session 23 of User core. May 27 02:47:29.471313 containerd[1531]: time="2025-05-27T02:47:29.471230369Z" level=info msg="StopContainer for \"20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07\" with timeout 30 (s)" May 27 02:47:29.471972 containerd[1531]: time="2025-05-27T02:47:29.471928097Z" level=info msg="Stop container \"20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07\" with signal terminated" May 27 02:47:29.484156 systemd[1]: cri-containerd-20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07.scope: Deactivated successfully. 
May 27 02:47:29.487271 containerd[1531]: time="2025-05-27T02:47:29.487219267Z" level=info msg="received exit event container_id:\"20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07\" id:\"20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07\" pid:3170 exited_at:{seconds:1748314049 nanos:485703251}" May 27 02:47:29.487410 containerd[1531]: time="2025-05-27T02:47:29.487378749Z" level=info msg="TaskExit event in podsandbox handler container_id:\"20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07\" id:\"20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07\" pid:3170 exited_at:{seconds:1748314049 nanos:485703251}" May 27 02:47:29.489630 containerd[1531]: time="2025-05-27T02:47:29.489585814Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 02:47:29.493602 containerd[1531]: time="2025-05-27T02:47:29.493556578Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320\" id:\"4d24f1272dbd0710675b2a005fd0c8b9ac804c0b06e1d3509106f4973a515b1e\" pid:4256 exited_at:{seconds:1748314049 nanos:493319055}" May 27 02:47:29.495741 containerd[1531]: time="2025-05-27T02:47:29.495712762Z" level=info msg="StopContainer for \"b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320\" with timeout 2 (s)" May 27 02:47:29.496177 containerd[1531]: time="2025-05-27T02:47:29.496159607Z" level=info msg="Stop container \"b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320\" with signal terminated" May 27 02:47:29.502015 systemd-networkd[1437]: lxc_health: Link DOWN May 27 02:47:29.502039 systemd-networkd[1437]: lxc_health: Lost carrier May 27 02:47:29.523037 containerd[1531]: time="2025-05-27T02:47:29.522978146Z" level=info msg="received exit event container_id:\"b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320\" id:\"b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320\" pid:3280 exited_at:{seconds:1748314049 nanos:522695822}" May 27 02:47:29.526023 containerd[1531]: time="2025-05-27T02:47:29.522975946Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320\" id:\"b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320\" pid:3280 exited_at:{seconds:1748314049 nanos:522695822}" May 27 02:47:29.523354 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07-rootfs.mount: Deactivated successfully. May 27 02:47:29.525003 systemd[1]: cri-containerd-b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320.scope: Deactivated successfully. May 27 02:47:29.525327 systemd[1]: cri-containerd-b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320.scope: Consumed 6.530s CPU time, 122.1M memory peak, 156K read from disk, 12.9M written to disk. 
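"Stop container ... with signal terminated" with a timeout, as logged above for the cilium-operator and cilium-agent containers, amounts at the containerd level to SIGTERM, a bounded wait, then SIGKILL. A sketch of that pattern with the containerd Go client; the container ID is the one from the log, and the socket path, namespace, and 30-second bound mirror the log's values as assumptions for illustration only.

```go
package main

import (
	"context"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	c, err := client.LoadContainer(ctx, "20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07")
	if err != nil {
		log.Fatal(err)
	}
	task, err := c.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Register for the exit event before sending the signal.
	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		log.Fatal(err)
	}
	select {
	case status := <-exitCh:
		log.Println("exited with", status.ExitCode())
	case <-time.After(30 * time.Second): // the "timeout 30 (s)" above
		_ = task.Kill(ctx, syscall.SIGKILL) // escalate if it ignores SIGTERM
	}
}
```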
May 27 02:47:29.536099 containerd[1531]: time="2025-05-27T02:47:29.536068531Z" level=info msg="StopContainer for \"20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07\" returns successfully" May 27 02:47:29.539296 containerd[1531]: time="2025-05-27T02:47:29.539257207Z" level=info msg="StopPodSandbox for \"2f3a2c2725ea51f320b8af90570b1b7b88d71eb8bc9a732977ecea21f19cf201\"" May 27 02:47:29.544994 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320-rootfs.mount: Deactivated successfully. May 27 02:47:29.548777 containerd[1531]: time="2025-05-27T02:47:29.548735432Z" level=info msg="Container to stop \"20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 02:47:29.552858 containerd[1531]: time="2025-05-27T02:47:29.552812878Z" level=info msg="StopContainer for \"b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320\" returns successfully" May 27 02:47:29.553271 containerd[1531]: time="2025-05-27T02:47:29.553248683Z" level=info msg="StopPodSandbox for \"bc7394f40ee1706ae613d38944d60c59c0770e3266979b4534b0e7c2113dfe11\"" May 27 02:47:29.553340 containerd[1531]: time="2025-05-27T02:47:29.553304203Z" level=info msg="Container to stop \"a8478a0e752f691be30badb1b49a98d42088c0bf5eaf9fe9316a01fac2fd6cbe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 02:47:29.553340 containerd[1531]: time="2025-05-27T02:47:29.553316803Z" level=info msg="Container to stop \"d3b01473ef3a02884e6e203ae14f539d742ef2129b7bc6cec37d60ad969b4a8b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 02:47:29.553340 containerd[1531]: time="2025-05-27T02:47:29.553325204Z" level=info msg="Container to stop \"cebb7c337c05f8d181327fc7951c078f70519656b3948e677713138927fc53c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 02:47:29.553340 containerd[1531]: time="2025-05-27T02:47:29.553332964Z" level=info msg="Container to stop \"b4b59c53e614546e02da667474a2f5c949fa3266ec336ce5f3c8d41907c911ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 02:47:29.553340 containerd[1531]: time="2025-05-27T02:47:29.553340484Z" level=info msg="Container to stop \"b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 02:47:29.558390 systemd[1]: cri-containerd-bc7394f40ee1706ae613d38944d60c59c0770e3266979b4534b0e7c2113dfe11.scope: Deactivated successfully. May 27 02:47:29.559582 containerd[1531]: time="2025-05-27T02:47:29.559545993Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc7394f40ee1706ae613d38944d60c59c0770e3266979b4534b0e7c2113dfe11\" id:\"bc7394f40ee1706ae613d38944d60c59c0770e3266979b4534b0e7c2113dfe11\" pid:2776 exit_status:137 exited_at:{seconds:1748314049 nanos:559166429}" May 27 02:47:29.560016 systemd[1]: cri-containerd-2f3a2c2725ea51f320b8af90570b1b7b88d71eb8bc9a732977ecea21f19cf201.scope: Deactivated successfully. May 27 02:47:29.581403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f3a2c2725ea51f320b8af90570b1b7b88d71eb8bc9a732977ecea21f19cf201-rootfs.mount: Deactivated successfully. 
May 27 02:47:29.588506 containerd[1531]: time="2025-05-27T02:47:29.588380394Z" level=info msg="shim disconnected" id=bc7394f40ee1706ae613d38944d60c59c0770e3266979b4534b0e7c2113dfe11 namespace=k8s.io May 27 02:47:29.588749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc7394f40ee1706ae613d38944d60c59c0770e3266979b4534b0e7c2113dfe11-rootfs.mount: Deactivated successfully. May 27 02:47:29.607307 containerd[1531]: time="2025-05-27T02:47:29.588417354Z" level=warning msg="cleaning up after shim disconnected" id=bc7394f40ee1706ae613d38944d60c59c0770e3266979b4534b0e7c2113dfe11 namespace=k8s.io May 27 02:47:29.607307 containerd[1531]: time="2025-05-27T02:47:29.607301725Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 02:47:29.607497 containerd[1531]: time="2025-05-27T02:47:29.589952491Z" level=info msg="shim disconnected" id=2f3a2c2725ea51f320b8af90570b1b7b88d71eb8bc9a732977ecea21f19cf201 namespace=k8s.io May 27 02:47:29.607497 containerd[1531]: time="2025-05-27T02:47:29.607409046Z" level=warning msg="cleaning up after shim disconnected" id=2f3a2c2725ea51f320b8af90570b1b7b88d71eb8bc9a732977ecea21f19cf201 namespace=k8s.io May 27 02:47:29.607497 containerd[1531]: time="2025-05-27T02:47:29.607442886Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 02:47:29.623971 containerd[1531]: time="2025-05-27T02:47:29.623558706Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2f3a2c2725ea51f320b8af90570b1b7b88d71eb8bc9a732977ecea21f19cf201\" id:\"2f3a2c2725ea51f320b8af90570b1b7b88d71eb8bc9a732977ecea21f19cf201\" pid:2828 exit_status:137 exited_at:{seconds:1748314049 nanos:561063450}" May 27 02:47:29.623971 containerd[1531]: time="2025-05-27T02:47:29.623720827Z" level=info msg="received exit event sandbox_id:\"2f3a2c2725ea51f320b8af90570b1b7b88d71eb8bc9a732977ecea21f19cf201\" exit_status:137 exited_at:{seconds:1748314049 nanos:561063450}" May 27 02:47:29.626038 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bc7394f40ee1706ae613d38944d60c59c0770e3266979b4534b0e7c2113dfe11-shm.mount: Deactivated successfully. 
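The sandbox tasks above exit with exit_status:137, the conventional encoding for "killed by signal 9" (128 + signal number). A quick standard-library sanity check of that decoding:

```go
package main

import (
	"fmt"
	"syscall"
)

func main() {
	const exitStatus = 137
	// 128 + N is the usual convention for "terminated by signal N".
	sig := syscall.Signal(exitStatus - 128)
	fmt.Printf("exit_status %d => signal %d (%s)\n", exitStatus, int(sig), sig)
	// Output: exit_status 137 => signal 9 (killed)
}
```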
May 27 02:47:29.626343 containerd[1531]: time="2025-05-27T02:47:29.626159135Z" level=info msg="TearDown network for sandbox \"bc7394f40ee1706ae613d38944d60c59c0770e3266979b4534b0e7c2113dfe11\" successfully" May 27 02:47:29.626343 containerd[1531]: time="2025-05-27T02:47:29.626193415Z" level=info msg="StopPodSandbox for \"bc7394f40ee1706ae613d38944d60c59c0770e3266979b4534b0e7c2113dfe11\" returns successfully" May 27 02:47:29.626343 containerd[1531]: time="2025-05-27T02:47:29.626280896Z" level=info msg="TearDown network for sandbox \"2f3a2c2725ea51f320b8af90570b1b7b88d71eb8bc9a732977ecea21f19cf201\" successfully" May 27 02:47:29.626343 containerd[1531]: time="2025-05-27T02:47:29.626302656Z" level=info msg="StopPodSandbox for \"2f3a2c2725ea51f320b8af90570b1b7b88d71eb8bc9a732977ecea21f19cf201\" returns successfully" May 27 02:47:29.626733 containerd[1531]: time="2025-05-27T02:47:29.626702901Z" level=info msg="received exit event sandbox_id:\"bc7394f40ee1706ae613d38944d60c59c0770e3266979b4534b0e7c2113dfe11\" exit_status:137 exited_at:{seconds:1748314049 nanos:559166429}" May 27 02:47:29.674851 kubelet[2619]: I0527 02:47:29.674737 2619 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-cni-path\") pod \"69986cb3-a164-44c8-933d-f426f6a74ce9\" (UID: \"69986cb3-a164-44c8-933d-f426f6a74ce9\") " May 27 02:47:29.674851 kubelet[2619]: I0527 02:47:29.674797 2619 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6qh8\" (UniqueName: \"kubernetes.io/projected/69986cb3-a164-44c8-933d-f426f6a74ce9-kube-api-access-q6qh8\") pod \"69986cb3-a164-44c8-933d-f426f6a74ce9\" (UID: \"69986cb3-a164-44c8-933d-f426f6a74ce9\") " May 27 02:47:29.674851 kubelet[2619]: I0527 02:47:29.674816 2619 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-cilium-cgroup\") pod \"69986cb3-a164-44c8-933d-f426f6a74ce9\" (UID: \"69986cb3-a164-44c8-933d-f426f6a74ce9\") " May 27 02:47:29.675713 kubelet[2619]: I0527 02:47:29.675019 2619 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-etc-cni-netd\") pod \"69986cb3-a164-44c8-933d-f426f6a74ce9\" (UID: \"69986cb3-a164-44c8-933d-f426f6a74ce9\") " May 27 02:47:29.675713 kubelet[2619]: I0527 02:47:29.675043 2619 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69986cb3-a164-44c8-933d-f426f6a74ce9-clustermesh-secrets\") pod \"69986cb3-a164-44c8-933d-f426f6a74ce9\" (UID: \"69986cb3-a164-44c8-933d-f426f6a74ce9\") " May 27 02:47:29.675713 kubelet[2619]: I0527 02:47:29.675057 2619 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-bpf-maps\") pod \"69986cb3-a164-44c8-933d-f426f6a74ce9\" (UID: \"69986cb3-a164-44c8-933d-f426f6a74ce9\") " May 27 02:47:29.675713 kubelet[2619]: I0527 02:47:29.675176 2619 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69986cb3-a164-44c8-933d-f426f6a74ce9-cilium-config-path\") pod \"69986cb3-a164-44c8-933d-f426f6a74ce9\" (UID: \"69986cb3-a164-44c8-933d-f426f6a74ce9\") " May 27 
02:47:29.675713 kubelet[2619]: I0527 02:47:29.675201 2619 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-xtables-lock\") pod \"69986cb3-a164-44c8-933d-f426f6a74ce9\" (UID: \"69986cb3-a164-44c8-933d-f426f6a74ce9\") " May 27 02:47:29.675713 kubelet[2619]: I0527 02:47:29.675216 2619 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-host-proc-sys-kernel\") pod \"69986cb3-a164-44c8-933d-f426f6a74ce9\" (UID: \"69986cb3-a164-44c8-933d-f426f6a74ce9\") " May 27 02:47:29.675846 kubelet[2619]: I0527 02:47:29.675233 2619 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-cilium-run\") pod \"69986cb3-a164-44c8-933d-f426f6a74ce9\" (UID: \"69986cb3-a164-44c8-933d-f426f6a74ce9\") " May 27 02:47:29.675846 kubelet[2619]: I0527 02:47:29.675364 2619 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn6vg\" (UniqueName: \"kubernetes.io/projected/dcd1a841-5711-469c-b90b-74fc72f5427d-kube-api-access-dn6vg\") pod \"dcd1a841-5711-469c-b90b-74fc72f5427d\" (UID: \"dcd1a841-5711-469c-b90b-74fc72f5427d\") " May 27 02:47:29.675846 kubelet[2619]: I0527 02:47:29.675380 2619 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-host-proc-sys-net\") pod \"69986cb3-a164-44c8-933d-f426f6a74ce9\" (UID: \"69986cb3-a164-44c8-933d-f426f6a74ce9\") " May 27 02:47:29.675846 kubelet[2619]: I0527 02:47:29.675399 2619 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dcd1a841-5711-469c-b90b-74fc72f5427d-cilium-config-path\") pod \"dcd1a841-5711-469c-b90b-74fc72f5427d\" (UID: \"dcd1a841-5711-469c-b90b-74fc72f5427d\") " May 27 02:47:29.675846 kubelet[2619]: I0527 02:47:29.675551 2619 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69986cb3-a164-44c8-933d-f426f6a74ce9-hubble-tls\") pod \"69986cb3-a164-44c8-933d-f426f6a74ce9\" (UID: \"69986cb3-a164-44c8-933d-f426f6a74ce9\") " May 27 02:47:29.675846 kubelet[2619]: I0527 02:47:29.675707 2619 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-cni-path" (OuterVolumeSpecName: "cni-path") pod "69986cb3-a164-44c8-933d-f426f6a74ce9" (UID: "69986cb3-a164-44c8-933d-f426f6a74ce9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 02:47:29.675985 kubelet[2619]: I0527 02:47:29.675736 2619 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "69986cb3-a164-44c8-933d-f426f6a74ce9" (UID: "69986cb3-a164-44c8-933d-f426f6a74ce9"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 02:47:29.675985 kubelet[2619]: I0527 02:47:29.675710 2619 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "69986cb3-a164-44c8-933d-f426f6a74ce9" (UID: "69986cb3-a164-44c8-933d-f426f6a74ce9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 02:47:29.675985 kubelet[2619]: I0527 02:47:29.675763 2619 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "69986cb3-a164-44c8-933d-f426f6a74ce9" (UID: "69986cb3-a164-44c8-933d-f426f6a74ce9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 02:47:29.675985 kubelet[2619]: I0527 02:47:29.675768 2619 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "69986cb3-a164-44c8-933d-f426f6a74ce9" (UID: "69986cb3-a164-44c8-933d-f426f6a74ce9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 02:47:29.676469 kubelet[2619]: I0527 02:47:29.676092 2619 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-lib-modules\") pod \"69986cb3-a164-44c8-933d-f426f6a74ce9\" (UID: \"69986cb3-a164-44c8-933d-f426f6a74ce9\") " May 27 02:47:29.676469 kubelet[2619]: I0527 02:47:29.676116 2619 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-hostproc\") pod \"69986cb3-a164-44c8-933d-f426f6a74ce9\" (UID: \"69986cb3-a164-44c8-933d-f426f6a74ce9\") " May 27 02:47:29.676469 kubelet[2619]: I0527 02:47:29.676170 2619 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-cni-path\") on node \"localhost\" DevicePath \"\"" May 27 02:47:29.676469 kubelet[2619]: I0527 02:47:29.676181 2619 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 27 02:47:29.676469 kubelet[2619]: I0527 02:47:29.676190 2619 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 27 02:47:29.676469 kubelet[2619]: I0527 02:47:29.676197 2619 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 27 02:47:29.676469 kubelet[2619]: I0527 02:47:29.676204 2619 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 27 02:47:29.676759 kubelet[2619]: I0527 02:47:29.676255 2619 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-hostproc" (OuterVolumeSpecName: "hostproc") pod "69986cb3-a164-44c8-933d-f426f6a74ce9" (UID: "69986cb3-a164-44c8-933d-f426f6a74ce9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 02:47:29.676759 kubelet[2619]: I0527 02:47:29.676276 2619 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "69986cb3-a164-44c8-933d-f426f6a74ce9" (UID: "69986cb3-a164-44c8-933d-f426f6a74ce9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 02:47:29.676759 kubelet[2619]: I0527 02:47:29.676290 2619 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "69986cb3-a164-44c8-933d-f426f6a74ce9" (UID: "69986cb3-a164-44c8-933d-f426f6a74ce9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 02:47:29.679047 kubelet[2619]: I0527 02:47:29.679012 2619 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "69986cb3-a164-44c8-933d-f426f6a74ce9" (UID: "69986cb3-a164-44c8-933d-f426f6a74ce9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 02:47:29.679132 kubelet[2619]: I0527 02:47:29.679065 2619 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "69986cb3-a164-44c8-933d-f426f6a74ce9" (UID: "69986cb3-a164-44c8-933d-f426f6a74ce9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 02:47:29.680356 kubelet[2619]: I0527 02:47:29.680270 2619 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69986cb3-a164-44c8-933d-f426f6a74ce9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "69986cb3-a164-44c8-933d-f426f6a74ce9" (UID: "69986cb3-a164-44c8-933d-f426f6a74ce9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 02:47:29.680464 kubelet[2619]: I0527 02:47:29.680441 2619 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69986cb3-a164-44c8-933d-f426f6a74ce9-kube-api-access-q6qh8" (OuterVolumeSpecName: "kube-api-access-q6qh8") pod "69986cb3-a164-44c8-933d-f426f6a74ce9" (UID: "69986cb3-a164-44c8-933d-f426f6a74ce9"). InnerVolumeSpecName "kube-api-access-q6qh8". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 02:47:29.680614 kubelet[2619]: I0527 02:47:29.680587 2619 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69986cb3-a164-44c8-933d-f426f6a74ce9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "69986cb3-a164-44c8-933d-f426f6a74ce9" (UID: "69986cb3-a164-44c8-933d-f426f6a74ce9"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 02:47:29.680770 kubelet[2619]: I0527 02:47:29.680750 2619 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd1a841-5711-469c-b90b-74fc72f5427d-kube-api-access-dn6vg" (OuterVolumeSpecName: "kube-api-access-dn6vg") pod "dcd1a841-5711-469c-b90b-74fc72f5427d" (UID: "dcd1a841-5711-469c-b90b-74fc72f5427d"). InnerVolumeSpecName "kube-api-access-dn6vg". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 02:47:29.685890 kubelet[2619]: I0527 02:47:29.685841 2619 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd1a841-5711-469c-b90b-74fc72f5427d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dcd1a841-5711-469c-b90b-74fc72f5427d" (UID: "dcd1a841-5711-469c-b90b-74fc72f5427d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 02:47:29.686741 kubelet[2619]: I0527 02:47:29.686694 2619 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69986cb3-a164-44c8-933d-f426f6a74ce9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "69986cb3-a164-44c8-933d-f426f6a74ce9" (UID: "69986cb3-a164-44c8-933d-f426f6a74ce9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 02:47:29.730845 kubelet[2619]: E0527 02:47:29.730710 2619 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 27 02:47:29.777184 kubelet[2619]: I0527 02:47:29.777137 2619 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69986cb3-a164-44c8-933d-f426f6a74ce9-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 27 02:47:29.777184 kubelet[2619]: I0527 02:47:29.777171 2619 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-lib-modules\") on node \"localhost\" DevicePath \"\"" May 27 02:47:29.777184 kubelet[2619]: I0527 02:47:29.777181 2619 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-hostproc\") on node \"localhost\" DevicePath \"\"" May 27 02:47:29.777184 kubelet[2619]: I0527 02:47:29.777195 2619 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q6qh8\" (UniqueName: \"kubernetes.io/projected/69986cb3-a164-44c8-933d-f426f6a74ce9-kube-api-access-q6qh8\") on node \"localhost\" DevicePath \"\"" May 27 02:47:29.777392 kubelet[2619]: I0527 02:47:29.777206 2619 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69986cb3-a164-44c8-933d-f426f6a74ce9-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 27 02:47:29.777392 kubelet[2619]: I0527 02:47:29.777214 2619 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69986cb3-a164-44c8-933d-f426f6a74ce9-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 27 02:47:29.777392 kubelet[2619]: I0527 02:47:29.777222 2619 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-host-proc-sys-kernel\") on node 
\"localhost\" DevicePath \"\"" May 27 02:47:29.777392 kubelet[2619]: I0527 02:47:29.777229 2619 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-cilium-run\") on node \"localhost\" DevicePath \"\"" May 27 02:47:29.777392 kubelet[2619]: I0527 02:47:29.777248 2619 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dn6vg\" (UniqueName: \"kubernetes.io/projected/dcd1a841-5711-469c-b90b-74fc72f5427d-kube-api-access-dn6vg\") on node \"localhost\" DevicePath \"\"" May 27 02:47:29.777392 kubelet[2619]: I0527 02:47:29.777257 2619 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69986cb3-a164-44c8-933d-f426f6a74ce9-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 27 02:47:29.777392 kubelet[2619]: I0527 02:47:29.777265 2619 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dcd1a841-5711-469c-b90b-74fc72f5427d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 27 02:47:29.880621 kubelet[2619]: I0527 02:47:29.880579 2619 scope.go:117] "RemoveContainer" containerID="20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07" May 27 02:47:29.882697 containerd[1531]: time="2025-05-27T02:47:29.882518789Z" level=info msg="RemoveContainer for \"20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07\"" May 27 02:47:29.884096 systemd[1]: Removed slice kubepods-besteffort-poddcd1a841_5711_469c_b90b_74fc72f5427d.slice - libcontainer container kubepods-besteffort-poddcd1a841_5711_469c_b90b_74fc72f5427d.slice. May 27 02:47:29.889273 systemd[1]: Removed slice kubepods-burstable-pod69986cb3_a164_44c8_933d_f426f6a74ce9.slice - libcontainer container kubepods-burstable-pod69986cb3_a164_44c8_933d_f426f6a74ce9.slice. May 27 02:47:29.889374 systemd[1]: kubepods-burstable-pod69986cb3_a164_44c8_933d_f426f6a74ce9.slice: Consumed 6.698s CPU time, 122.4M memory peak, 2.1M read from disk, 12.9M written to disk. 
May 27 02:47:29.894502 containerd[1531]: time="2025-05-27T02:47:29.894428682Z" level=info msg="RemoveContainer for \"20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07\" returns successfully" May 27 02:47:29.894835 kubelet[2619]: I0527 02:47:29.894810 2619 scope.go:117] "RemoveContainer" containerID="20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07" May 27 02:47:29.895142 containerd[1531]: time="2025-05-27T02:47:29.895100289Z" level=error msg="ContainerStatus for \"20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07\": not found" May 27 02:47:29.906330 kubelet[2619]: E0527 02:47:29.906275 2619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07\": not found" containerID="20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07" May 27 02:47:29.906433 kubelet[2619]: I0527 02:47:29.906339 2619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07"} err="failed to get container status \"20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07\": rpc error: code = NotFound desc = an error occurred when try to find container \"20d7b94056d26aaa3734f3ef4d12894ff13b37f7cfab3778c60aa00f9224ca07\": not found" May 27 02:47:29.906433 kubelet[2619]: I0527 02:47:29.906381 2619 scope.go:117] "RemoveContainer" containerID="b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320" May 27 02:47:29.910116 containerd[1531]: time="2025-05-27T02:47:29.910085096Z" level=info msg="RemoveContainer for \"b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320\"" May 27 02:47:29.913957 containerd[1531]: time="2025-05-27T02:47:29.913895659Z" level=info msg="RemoveContainer for \"b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320\" returns successfully" May 27 02:47:29.914125 kubelet[2619]: I0527 02:47:29.914096 2619 scope.go:117] "RemoveContainer" containerID="b4b59c53e614546e02da667474a2f5c949fa3266ec336ce5f3c8d41907c911ba" May 27 02:47:29.915409 containerd[1531]: time="2025-05-27T02:47:29.915370035Z" level=info msg="RemoveContainer for \"b4b59c53e614546e02da667474a2f5c949fa3266ec336ce5f3c8d41907c911ba\"" May 27 02:47:29.922778 containerd[1531]: time="2025-05-27T02:47:29.922666396Z" level=info msg="RemoveContainer for \"b4b59c53e614546e02da667474a2f5c949fa3266ec336ce5f3c8d41907c911ba\" returns successfully" May 27 02:47:29.923774 kubelet[2619]: I0527 02:47:29.922893 2619 scope.go:117] "RemoveContainer" containerID="cebb7c337c05f8d181327fc7951c078f70519656b3948e677713138927fc53c6" May 27 02:47:29.925198 containerd[1531]: time="2025-05-27T02:47:29.925167424Z" level=info msg="RemoveContainer for \"cebb7c337c05f8d181327fc7951c078f70519656b3948e677713138927fc53c6\"" May 27 02:47:29.929230 containerd[1531]: time="2025-05-27T02:47:29.929195869Z" level=info msg="RemoveContainer for \"cebb7c337c05f8d181327fc7951c078f70519656b3948e677713138927fc53c6\" returns successfully" May 27 02:47:29.929481 kubelet[2619]: I0527 02:47:29.929425 2619 scope.go:117] "RemoveContainer" containerID="d3b01473ef3a02884e6e203ae14f539d742ef2129b7bc6cec37d60ad969b4a8b" May 27 02:47:29.931865 containerd[1531]: 
time="2025-05-27T02:47:29.931837819Z" level=info msg="RemoveContainer for \"d3b01473ef3a02884e6e203ae14f539d742ef2129b7bc6cec37d60ad969b4a8b\"" May 27 02:47:29.940641 containerd[1531]: time="2025-05-27T02:47:29.940601756Z" level=info msg="RemoveContainer for \"d3b01473ef3a02884e6e203ae14f539d742ef2129b7bc6cec37d60ad969b4a8b\" returns successfully" May 27 02:47:29.940871 kubelet[2619]: I0527 02:47:29.940844 2619 scope.go:117] "RemoveContainer" containerID="a8478a0e752f691be30badb1b49a98d42088c0bf5eaf9fe9316a01fac2fd6cbe" May 27 02:47:29.942607 containerd[1531]: time="2025-05-27T02:47:29.942497457Z" level=info msg="RemoveContainer for \"a8478a0e752f691be30badb1b49a98d42088c0bf5eaf9fe9316a01fac2fd6cbe\"" May 27 02:47:29.946705 containerd[1531]: time="2025-05-27T02:47:29.946653104Z" level=info msg="RemoveContainer for \"a8478a0e752f691be30badb1b49a98d42088c0bf5eaf9fe9316a01fac2fd6cbe\" returns successfully" May 27 02:47:29.947010 kubelet[2619]: I0527 02:47:29.946987 2619 scope.go:117] "RemoveContainer" containerID="b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320" May 27 02:47:29.947381 containerd[1531]: time="2025-05-27T02:47:29.947339511Z" level=error msg="ContainerStatus for \"b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320\": not found" May 27 02:47:29.947640 kubelet[2619]: E0527 02:47:29.947483 2619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320\": not found" containerID="b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320" May 27 02:47:29.947640 kubelet[2619]: I0527 02:47:29.947508 2619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320"} err="failed to get container status \"b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320\": rpc error: code = NotFound desc = an error occurred when try to find container \"b08ac499969a74f8f3e44af6fe2b65e7f458f8e7108fc647d98d879fb8953320\": not found" May 27 02:47:29.947640 kubelet[2619]: I0527 02:47:29.947528 2619 scope.go:117] "RemoveContainer" containerID="b4b59c53e614546e02da667474a2f5c949fa3266ec336ce5f3c8d41907c911ba" May 27 02:47:29.947719 containerd[1531]: time="2025-05-27T02:47:29.947671155Z" level=error msg="ContainerStatus for \"b4b59c53e614546e02da667474a2f5c949fa3266ec336ce5f3c8d41907c911ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b4b59c53e614546e02da667474a2f5c949fa3266ec336ce5f3c8d41907c911ba\": not found" May 27 02:47:29.947804 kubelet[2619]: E0527 02:47:29.947780 2619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b4b59c53e614546e02da667474a2f5c949fa3266ec336ce5f3c8d41907c911ba\": not found" containerID="b4b59c53e614546e02da667474a2f5c949fa3266ec336ce5f3c8d41907c911ba" May 27 02:47:29.947883 kubelet[2619]: I0527 02:47:29.947808 2619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b4b59c53e614546e02da667474a2f5c949fa3266ec336ce5f3c8d41907c911ba"} err="failed to get container status 
\"b4b59c53e614546e02da667474a2f5c949fa3266ec336ce5f3c8d41907c911ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"b4b59c53e614546e02da667474a2f5c949fa3266ec336ce5f3c8d41907c911ba\": not found" May 27 02:47:29.947883 kubelet[2619]: I0527 02:47:29.947825 2619 scope.go:117] "RemoveContainer" containerID="cebb7c337c05f8d181327fc7951c078f70519656b3948e677713138927fc53c6" May 27 02:47:29.948107 kubelet[2619]: E0527 02:47:29.948056 2619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cebb7c337c05f8d181327fc7951c078f70519656b3948e677713138927fc53c6\": not found" containerID="cebb7c337c05f8d181327fc7951c078f70519656b3948e677713138927fc53c6" May 27 02:47:29.948107 kubelet[2619]: I0527 02:47:29.948082 2619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cebb7c337c05f8d181327fc7951c078f70519656b3948e677713138927fc53c6"} err="failed to get container status \"cebb7c337c05f8d181327fc7951c078f70519656b3948e677713138927fc53c6\": rpc error: code = NotFound desc = an error occurred when try to find container \"cebb7c337c05f8d181327fc7951c078f70519656b3948e677713138927fc53c6\": not found" May 27 02:47:29.948107 kubelet[2619]: I0527 02:47:29.948097 2619 scope.go:117] "RemoveContainer" containerID="d3b01473ef3a02884e6e203ae14f539d742ef2129b7bc6cec37d60ad969b4a8b" May 27 02:47:29.948409 containerd[1531]: time="2025-05-27T02:47:29.947970158Z" level=error msg="ContainerStatus for \"cebb7c337c05f8d181327fc7951c078f70519656b3948e677713138927fc53c6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cebb7c337c05f8d181327fc7951c078f70519656b3948e677713138927fc53c6\": not found" May 27 02:47:29.948409 containerd[1531]: time="2025-05-27T02:47:29.948284642Z" level=error msg="ContainerStatus for \"d3b01473ef3a02884e6e203ae14f539d742ef2129b7bc6cec37d60ad969b4a8b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d3b01473ef3a02884e6e203ae14f539d742ef2129b7bc6cec37d60ad969b4a8b\": not found" May 27 02:47:29.948472 kubelet[2619]: E0527 02:47:29.948414 2619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d3b01473ef3a02884e6e203ae14f539d742ef2129b7bc6cec37d60ad969b4a8b\": not found" containerID="d3b01473ef3a02884e6e203ae14f539d742ef2129b7bc6cec37d60ad969b4a8b" May 27 02:47:29.948472 kubelet[2619]: I0527 02:47:29.948432 2619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d3b01473ef3a02884e6e203ae14f539d742ef2129b7bc6cec37d60ad969b4a8b"} err="failed to get container status \"d3b01473ef3a02884e6e203ae14f539d742ef2129b7bc6cec37d60ad969b4a8b\": rpc error: code = NotFound desc = an error occurred when try to find container \"d3b01473ef3a02884e6e203ae14f539d742ef2129b7bc6cec37d60ad969b4a8b\": not found" May 27 02:47:29.948472 kubelet[2619]: I0527 02:47:29.948451 2619 scope.go:117] "RemoveContainer" containerID="a8478a0e752f691be30badb1b49a98d42088c0bf5eaf9fe9316a01fac2fd6cbe" May 27 02:47:29.948643 containerd[1531]: time="2025-05-27T02:47:29.948572845Z" level=error msg="ContainerStatus for \"a8478a0e752f691be30badb1b49a98d42088c0bf5eaf9fe9316a01fac2fd6cbe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"a8478a0e752f691be30badb1b49a98d42088c0bf5eaf9fe9316a01fac2fd6cbe\": not found" May 27 02:47:29.948725 kubelet[2619]: E0527 02:47:29.948668 2619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a8478a0e752f691be30badb1b49a98d42088c0bf5eaf9fe9316a01fac2fd6cbe\": not found" containerID="a8478a0e752f691be30badb1b49a98d42088c0bf5eaf9fe9316a01fac2fd6cbe" May 27 02:47:29.948774 kubelet[2619]: I0527 02:47:29.948729 2619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a8478a0e752f691be30badb1b49a98d42088c0bf5eaf9fe9316a01fac2fd6cbe"} err="failed to get container status \"a8478a0e752f691be30badb1b49a98d42088c0bf5eaf9fe9316a01fac2fd6cbe\": rpc error: code = NotFound desc = an error occurred when try to find container \"a8478a0e752f691be30badb1b49a98d42088c0bf5eaf9fe9316a01fac2fd6cbe\": not found" May 27 02:47:30.522535 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f3a2c2725ea51f320b8af90570b1b7b88d71eb8bc9a732977ecea21f19cf201-shm.mount: Deactivated successfully. May 27 02:47:30.522634 systemd[1]: var-lib-kubelet-pods-dcd1a841\x2d5711\x2d469c\x2db90b\x2d74fc72f5427d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddn6vg.mount: Deactivated successfully. May 27 02:47:30.522695 systemd[1]: var-lib-kubelet-pods-69986cb3\x2da164\x2d44c8\x2d933d\x2df426f6a74ce9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq6qh8.mount: Deactivated successfully. May 27 02:47:30.522748 systemd[1]: var-lib-kubelet-pods-69986cb3\x2da164\x2d44c8\x2d933d\x2df426f6a74ce9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 27 02:47:30.522793 systemd[1]: var-lib-kubelet-pods-69986cb3\x2da164\x2d44c8\x2d933d\x2df426f6a74ce9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 27 02:47:30.687802 kubelet[2619]: I0527 02:47:30.687052 2619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69986cb3-a164-44c8-933d-f426f6a74ce9" path="/var/lib/kubelet/pods/69986cb3-a164-44c8-933d-f426f6a74ce9/volumes" May 27 02:47:30.687802 kubelet[2619]: I0527 02:47:30.687559 2619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd1a841-5711-469c-b90b-74fc72f5427d" path="/var/lib/kubelet/pods/dcd1a841-5711-469c-b90b-74fc72f5427d/volumes" May 27 02:47:31.439733 sshd[4234]: Connection closed by 10.0.0.1 port 32800 May 27 02:47:31.440431 sshd-session[4232]: pam_unix(sshd:session): session closed for user core May 27 02:47:31.453097 systemd[1]: sshd@22-10.0.0.44:22-10.0.0.1:32800.service: Deactivated successfully. May 27 02:47:31.454670 systemd[1]: session-23.scope: Deactivated successfully. May 27 02:47:31.454856 systemd[1]: session-23.scope: Consumed 1.474s CPU time, 23.2M memory peak. May 27 02:47:31.455959 systemd-logind[1506]: Session 23 logged out. Waiting for processes to exit. May 27 02:47:31.457863 systemd[1]: Started sshd@23-10.0.0.44:22-10.0.0.1:32806.service - OpenSSH per-connection server daemon (10.0.0.1:32806). May 27 02:47:31.458801 systemd-logind[1506]: Removed session 23. May 27 02:47:31.510164 sshd[4386]: Accepted publickey for core from 10.0.0.1 port 32806 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:47:31.511388 sshd-session[4386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:47:31.516242 systemd-logind[1506]: New session 24 of user core. 
May 27 02:47:31.529115 systemd[1]: Started session-24.scope - Session 24 of User core. May 27 02:47:31.684177 kubelet[2619]: E0527 02:47:31.684147 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:47:32.125861 sshd[4388]: Connection closed by 10.0.0.1 port 32806 May 27 02:47:32.126612 sshd-session[4386]: pam_unix(sshd:session): session closed for user core May 27 02:47:32.139705 systemd[1]: sshd@23-10.0.0.44:22-10.0.0.1:32806.service: Deactivated successfully. May 27 02:47:32.143498 systemd[1]: session-24.scope: Deactivated successfully. May 27 02:47:32.146862 systemd-logind[1506]: Session 24 logged out. Waiting for processes to exit. May 27 02:47:32.152316 systemd[1]: Started sshd@24-10.0.0.44:22-10.0.0.1:32810.service - OpenSSH per-connection server daemon (10.0.0.1:32810). May 27 02:47:32.156186 systemd-logind[1506]: Removed session 24. May 27 02:47:32.170768 systemd[1]: Created slice kubepods-burstable-pod0a19e24a_d737_47ff_90a3_dd4dd81b8175.slice - libcontainer container kubepods-burstable-pod0a19e24a_d737_47ff_90a3_dd4dd81b8175.slice. May 27 02:47:32.190861 kubelet[2619]: I0527 02:47:32.190770 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a19e24a-d737-47ff-90a3-dd4dd81b8175-lib-modules\") pod \"cilium-fbfdx\" (UID: \"0a19e24a-d737-47ff-90a3-dd4dd81b8175\") " pod="kube-system/cilium-fbfdx" May 27 02:47:32.190861 kubelet[2619]: I0527 02:47:32.190805 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0a19e24a-d737-47ff-90a3-dd4dd81b8175-clustermesh-secrets\") pod \"cilium-fbfdx\" (UID: \"0a19e24a-d737-47ff-90a3-dd4dd81b8175\") " pod="kube-system/cilium-fbfdx" May 27 02:47:32.190861 kubelet[2619]: I0527 02:47:32.190826 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0a19e24a-d737-47ff-90a3-dd4dd81b8175-cilium-ipsec-secrets\") pod \"cilium-fbfdx\" (UID: \"0a19e24a-d737-47ff-90a3-dd4dd81b8175\") " pod="kube-system/cilium-fbfdx" May 27 02:47:32.191218 kubelet[2619]: I0527 02:47:32.190874 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0a19e24a-d737-47ff-90a3-dd4dd81b8175-cni-path\") pod \"cilium-fbfdx\" (UID: \"0a19e24a-d737-47ff-90a3-dd4dd81b8175\") " pod="kube-system/cilium-fbfdx" May 27 02:47:32.191218 kubelet[2619]: I0527 02:47:32.190906 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0a19e24a-d737-47ff-90a3-dd4dd81b8175-hubble-tls\") pod \"cilium-fbfdx\" (UID: \"0a19e24a-d737-47ff-90a3-dd4dd81b8175\") " pod="kube-system/cilium-fbfdx" May 27 02:47:32.191218 kubelet[2619]: I0527 02:47:32.190950 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g45nt\" (UniqueName: \"kubernetes.io/projected/0a19e24a-d737-47ff-90a3-dd4dd81b8175-kube-api-access-g45nt\") pod \"cilium-fbfdx\" (UID: \"0a19e24a-d737-47ff-90a3-dd4dd81b8175\") " pod="kube-system/cilium-fbfdx" May 27 02:47:32.191218 kubelet[2619]: I0527 02:47:32.190970 2619 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0a19e24a-d737-47ff-90a3-dd4dd81b8175-hostproc\") pod \"cilium-fbfdx\" (UID: \"0a19e24a-d737-47ff-90a3-dd4dd81b8175\") " pod="kube-system/cilium-fbfdx" May 27 02:47:32.191218 kubelet[2619]: I0527 02:47:32.190993 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a19e24a-d737-47ff-90a3-dd4dd81b8175-etc-cni-netd\") pod \"cilium-fbfdx\" (UID: \"0a19e24a-d737-47ff-90a3-dd4dd81b8175\") " pod="kube-system/cilium-fbfdx" May 27 02:47:32.191218 kubelet[2619]: I0527 02:47:32.191011 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0a19e24a-d737-47ff-90a3-dd4dd81b8175-host-proc-sys-net\") pod \"cilium-fbfdx\" (UID: \"0a19e24a-d737-47ff-90a3-dd4dd81b8175\") " pod="kube-system/cilium-fbfdx" May 27 02:47:32.191356 kubelet[2619]: I0527 02:47:32.191036 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0a19e24a-d737-47ff-90a3-dd4dd81b8175-cilium-run\") pod \"cilium-fbfdx\" (UID: \"0a19e24a-d737-47ff-90a3-dd4dd81b8175\") " pod="kube-system/cilium-fbfdx" May 27 02:47:32.191356 kubelet[2619]: I0527 02:47:32.191054 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0a19e24a-d737-47ff-90a3-dd4dd81b8175-bpf-maps\") pod \"cilium-fbfdx\" (UID: \"0a19e24a-d737-47ff-90a3-dd4dd81b8175\") " pod="kube-system/cilium-fbfdx" May 27 02:47:32.191356 kubelet[2619]: I0527 02:47:32.191067 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0a19e24a-d737-47ff-90a3-dd4dd81b8175-host-proc-sys-kernel\") pod \"cilium-fbfdx\" (UID: \"0a19e24a-d737-47ff-90a3-dd4dd81b8175\") " pod="kube-system/cilium-fbfdx" May 27 02:47:32.191356 kubelet[2619]: I0527 02:47:32.191081 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a19e24a-d737-47ff-90a3-dd4dd81b8175-xtables-lock\") pod \"cilium-fbfdx\" (UID: \"0a19e24a-d737-47ff-90a3-dd4dd81b8175\") " pod="kube-system/cilium-fbfdx" May 27 02:47:32.191356 kubelet[2619]: I0527 02:47:32.191107 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a19e24a-d737-47ff-90a3-dd4dd81b8175-cilium-config-path\") pod \"cilium-fbfdx\" (UID: \"0a19e24a-d737-47ff-90a3-dd4dd81b8175\") " pod="kube-system/cilium-fbfdx" May 27 02:47:32.191356 kubelet[2619]: I0527 02:47:32.191122 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0a19e24a-d737-47ff-90a3-dd4dd81b8175-cilium-cgroup\") pod \"cilium-fbfdx\" (UID: \"0a19e24a-d737-47ff-90a3-dd4dd81b8175\") " pod="kube-system/cilium-fbfdx" May 27 02:47:32.214427 sshd[4400]: Accepted publickey for core from 10.0.0.1 port 32810 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:47:32.215618 sshd-session[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 
02:47:32.221066 systemd-logind[1506]: New session 25 of user core. May 27 02:47:32.239137 systemd[1]: Started session-25.scope - Session 25 of User core. May 27 02:47:32.289841 sshd[4402]: Connection closed by 10.0.0.1 port 32810 May 27 02:47:32.290385 sshd-session[4400]: pam_unix(sshd:session): session closed for user core May 27 02:47:32.308138 systemd[1]: sshd@24-10.0.0.44:22-10.0.0.1:32810.service: Deactivated successfully. May 27 02:47:32.311793 systemd[1]: session-25.scope: Deactivated successfully. May 27 02:47:32.312596 systemd-logind[1506]: Session 25 logged out. Waiting for processes to exit. May 27 02:47:32.315140 systemd[1]: Started sshd@25-10.0.0.44:22-10.0.0.1:32824.service - OpenSSH per-connection server daemon (10.0.0.1:32824). May 27 02:47:32.316768 systemd-logind[1506]: Removed session 25. May 27 02:47:32.370070 sshd[4413]: Accepted publickey for core from 10.0.0.1 port 32824 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:47:32.370815 sshd-session[4413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:47:32.376024 systemd-logind[1506]: New session 26 of user core. May 27 02:47:32.395156 systemd[1]: Started session-26.scope - Session 26 of User core. May 27 02:47:32.475300 kubelet[2619]: E0527 02:47:32.474038 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:47:32.476651 containerd[1531]: time="2025-05-27T02:47:32.475601487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fbfdx,Uid:0a19e24a-d737-47ff-90a3-dd4dd81b8175,Namespace:kube-system,Attempt:0,}" May 27 02:47:32.496579 containerd[1531]: time="2025-05-27T02:47:32.496434824Z" level=info msg="connecting to shim 35f5be25077af1a35c3ab96a71d9a43a8e163469944f6f21c34d4cd83c58326d" address="unix:///run/containerd/s/3b6fd103ecf3af9f7dc922be91d35c7d12e9615f3177f4008813816f968d48b8" namespace=k8s.io protocol=ttrpc version=3 May 27 02:47:32.529139 systemd[1]: Started cri-containerd-35f5be25077af1a35c3ab96a71d9a43a8e163469944f6f21c34d4cd83c58326d.scope - libcontainer container 35f5be25077af1a35c3ab96a71d9a43a8e163469944f6f21c34d4cd83c58326d. 
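The RunPodSandbox and "connecting to shim" records above bring up sandbox 35f5be25... for cilium-fbfdx over containerd's ttrpc shim interface. A hedged sketch of how one might confirm that sandbox from the node with crictl, assuming crictl is installed and pointed at the same containerd endpoint; the JSON field names follow current crictl output and should be treated as an assumption:

import json
import subprocess

def pod_sandbox(name: str, namespace: str = "kube-system") -> dict | None:
    """Look up a CRI pod sandbox by name via crictl (assumes crictl is
    configured for the node's containerd socket)."""
    out = subprocess.run(
        ["crictl", "pods", "--name", name, "--namespace", namespace,
         "--output", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    items = json.loads(out).get("items", [])
    if not items:
        return None
    pod = items[0]
    return {"id": pod["id"], "state": pod["state"], "created": pod["createdAt"]}

# pod_sandbox("cilium-fbfdx") should report the 35f5be25... sandbox id
# once the RunPodSandbox call above has returned.
print(pod_sandbox("cilium-fbfdx"))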
May 27 02:47:32.550719 containerd[1531]: time="2025-05-27T02:47:32.550608429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fbfdx,Uid:0a19e24a-d737-47ff-90a3-dd4dd81b8175,Namespace:kube-system,Attempt:0,} returns sandbox id \"35f5be25077af1a35c3ab96a71d9a43a8e163469944f6f21c34d4cd83c58326d\"" May 27 02:47:32.551403 kubelet[2619]: E0527 02:47:32.551366 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:47:32.555629 containerd[1531]: time="2025-05-27T02:47:32.555581241Z" level=info msg="CreateContainer within sandbox \"35f5be25077af1a35c3ab96a71d9a43a8e163469944f6f21c34d4cd83c58326d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 27 02:47:32.561966 containerd[1531]: time="2025-05-27T02:47:32.561078458Z" level=info msg="Container 64666c941015f734097e277e34be2fe5acea63fd589c7b42480e32428855e74e: CDI devices from CRI Config.CDIDevices: []" May 27 02:47:32.566519 containerd[1531]: time="2025-05-27T02:47:32.566480275Z" level=info msg="CreateContainer within sandbox \"35f5be25077af1a35c3ab96a71d9a43a8e163469944f6f21c34d4cd83c58326d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"64666c941015f734097e277e34be2fe5acea63fd589c7b42480e32428855e74e\"" May 27 02:47:32.566973 containerd[1531]: time="2025-05-27T02:47:32.566952160Z" level=info msg="StartContainer for \"64666c941015f734097e277e34be2fe5acea63fd589c7b42480e32428855e74e\"" May 27 02:47:32.567896 containerd[1531]: time="2025-05-27T02:47:32.567856689Z" level=info msg="connecting to shim 64666c941015f734097e277e34be2fe5acea63fd589c7b42480e32428855e74e" address="unix:///run/containerd/s/3b6fd103ecf3af9f7dc922be91d35c7d12e9615f3177f4008813816f968d48b8" protocol=ttrpc version=3 May 27 02:47:32.587103 systemd[1]: Started cri-containerd-64666c941015f734097e277e34be2fe5acea63fd589c7b42480e32428855e74e.scope - libcontainer container 64666c941015f734097e277e34be2fe5acea63fd589c7b42480e32428855e74e. May 27 02:47:32.610023 containerd[1531]: time="2025-05-27T02:47:32.609975648Z" level=info msg="StartContainer for \"64666c941015f734097e277e34be2fe5acea63fd589c7b42480e32428855e74e\" returns successfully" May 27 02:47:32.619808 systemd[1]: cri-containerd-64666c941015f734097e277e34be2fe5acea63fd589c7b42480e32428855e74e.scope: Deactivated successfully. 
May 27 02:47:32.621814 containerd[1531]: time="2025-05-27T02:47:32.621725251Z" level=info msg="received exit event container_id:\"64666c941015f734097e277e34be2fe5acea63fd589c7b42480e32428855e74e\" id:\"64666c941015f734097e277e34be2fe5acea63fd589c7b42480e32428855e74e\" pid:4483 exited_at:{seconds:1748314052 nanos:621413728}" May 27 02:47:32.621906 containerd[1531]: time="2025-05-27T02:47:32.621882293Z" level=info msg="TaskExit event in podsandbox handler container_id:\"64666c941015f734097e277e34be2fe5acea63fd589c7b42480e32428855e74e\" id:\"64666c941015f734097e277e34be2fe5acea63fd589c7b42480e32428855e74e\" pid:4483 exited_at:{seconds:1748314052 nanos:621413728}" May 27 02:47:32.891197 kubelet[2619]: E0527 02:47:32.891168 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:47:32.896735 containerd[1531]: time="2025-05-27T02:47:32.896689638Z" level=info msg="CreateContainer within sandbox \"35f5be25077af1a35c3ab96a71d9a43a8e163469944f6f21c34d4cd83c58326d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 27 02:47:32.907545 containerd[1531]: time="2025-05-27T02:47:32.907504471Z" level=info msg="Container 2667e19944d816291b4bc3ee38032779476ac9dacb295f6cece8cec1c29bdfeb: CDI devices from CRI Config.CDIDevices: []" May 27 02:47:32.913973 containerd[1531]: time="2025-05-27T02:47:32.913920658Z" level=info msg="CreateContainer within sandbox \"35f5be25077af1a35c3ab96a71d9a43a8e163469944f6f21c34d4cd83c58326d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2667e19944d816291b4bc3ee38032779476ac9dacb295f6cece8cec1c29bdfeb\"" May 27 02:47:32.915346 containerd[1531]: time="2025-05-27T02:47:32.915316793Z" level=info msg="StartContainer for \"2667e19944d816291b4bc3ee38032779476ac9dacb295f6cece8cec1c29bdfeb\"" May 27 02:47:32.917272 containerd[1531]: time="2025-05-27T02:47:32.917224172Z" level=info msg="connecting to shim 2667e19944d816291b4bc3ee38032779476ac9dacb295f6cece8cec1c29bdfeb" address="unix:///run/containerd/s/3b6fd103ecf3af9f7dc922be91d35c7d12e9615f3177f4008813816f968d48b8" protocol=ttrpc version=3 May 27 02:47:32.939095 systemd[1]: Started cri-containerd-2667e19944d816291b4bc3ee38032779476ac9dacb295f6cece8cec1c29bdfeb.scope - libcontainer container 2667e19944d816291b4bc3ee38032779476ac9dacb295f6cece8cec1c29bdfeb. May 27 02:47:32.964685 containerd[1531]: time="2025-05-27T02:47:32.964612987Z" level=info msg="StartContainer for \"2667e19944d816291b4bc3ee38032779476ac9dacb295f6cece8cec1c29bdfeb\" returns successfully" May 27 02:47:32.981402 systemd[1]: cri-containerd-2667e19944d816291b4bc3ee38032779476ac9dacb295f6cece8cec1c29bdfeb.scope: Deactivated successfully. 
May 27 02:47:32.982844 containerd[1531]: time="2025-05-27T02:47:32.982612134Z" level=info msg="received exit event container_id:\"2667e19944d816291b4bc3ee38032779476ac9dacb295f6cece8cec1c29bdfeb\" id:\"2667e19944d816291b4bc3ee38032779476ac9dacb295f6cece8cec1c29bdfeb\" pid:4528 exited_at:{seconds:1748314052 nanos:982446773}" May 27 02:47:32.982844 containerd[1531]: time="2025-05-27T02:47:32.982818456Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2667e19944d816291b4bc3ee38032779476ac9dacb295f6cece8cec1c29bdfeb\" id:\"2667e19944d816291b4bc3ee38032779476ac9dacb295f6cece8cec1c29bdfeb\" pid:4528 exited_at:{seconds:1748314052 nanos:982446773}" May 27 02:47:33.894975 kubelet[2619]: E0527 02:47:33.894859 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:47:33.900059 containerd[1531]: time="2025-05-27T02:47:33.900021742Z" level=info msg="CreateContainer within sandbox \"35f5be25077af1a35c3ab96a71d9a43a8e163469944f6f21c34d4cd83c58326d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 27 02:47:33.907961 containerd[1531]: time="2025-05-27T02:47:33.906823852Z" level=info msg="Container 0df397b3a4b11d9ff14cf06ca8c6ce9a724add9ea5d3502b98914971e0b0c3ac: CDI devices from CRI Config.CDIDevices: []" May 27 02:47:33.916812 containerd[1531]: time="2025-05-27T02:47:33.916695873Z" level=info msg="CreateContainer within sandbox \"35f5be25077af1a35c3ab96a71d9a43a8e163469944f6f21c34d4cd83c58326d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0df397b3a4b11d9ff14cf06ca8c6ce9a724add9ea5d3502b98914971e0b0c3ac\"" May 27 02:47:33.917327 containerd[1531]: time="2025-05-27T02:47:33.917198918Z" level=info msg="StartContainer for \"0df397b3a4b11d9ff14cf06ca8c6ce9a724add9ea5d3502b98914971e0b0c3ac\"" May 27 02:47:33.918790 containerd[1531]: time="2025-05-27T02:47:33.918763894Z" level=info msg="connecting to shim 0df397b3a4b11d9ff14cf06ca8c6ce9a724add9ea5d3502b98914971e0b0c3ac" address="unix:///run/containerd/s/3b6fd103ecf3af9f7dc922be91d35c7d12e9615f3177f4008813816f968d48b8" protocol=ttrpc version=3 May 27 02:47:33.940070 systemd[1]: Started cri-containerd-0df397b3a4b11d9ff14cf06ca8c6ce9a724add9ea5d3502b98914971e0b0c3ac.scope - libcontainer container 0df397b3a4b11d9ff14cf06ca8c6ce9a724add9ea5d3502b98914971e0b0c3ac. May 27 02:47:33.970627 systemd[1]: cri-containerd-0df397b3a4b11d9ff14cf06ca8c6ce9a724add9ea5d3502b98914971e0b0c3ac.scope: Deactivated successfully. 
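The recurring dns.go:153 "Nameserver limits exceeded" messages reflect kubelet's cap of three nameservers per pod resolv.conf: with more than three configured upstream, it keeps the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and omits the rest. A small host-side sketch that surfaces the same condition; the resolv.conf path is an assumption for a systemd-resolved host and may need to match the kubelet's --resolv-conf setting:

from pathlib import Path

MAX_DNS = 3  # kubelet only propagates the first three nameservers to pods

def check_resolv_conf(path: str = "/run/systemd/resolve/resolv.conf") -> None:
    # Path is an assumption; use the file kubelet actually reads on this node.
    servers = [line.split()[1]
               for line in Path(path).read_text().splitlines()
               if line.startswith("nameserver") and len(line.split()) > 1]
    if len(servers) > MAX_DNS:
        print(f"{len(servers)} nameservers configured; kubelet will apply "
              f"{servers[:MAX_DNS]} and omit {servers[MAX_DNS:]} "
              f"(matches the 'Nameserver limits exceeded' warnings above)")
    else:
        print(f"{len(servers)} nameservers configured; within kubelet's limit")

check_resolv_conf()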
May 27 02:47:33.975116 containerd[1531]: time="2025-05-27T02:47:33.975030628Z" level=info msg="received exit event container_id:\"0df397b3a4b11d9ff14cf06ca8c6ce9a724add9ea5d3502b98914971e0b0c3ac\" id:\"0df397b3a4b11d9ff14cf06ca8c6ce9a724add9ea5d3502b98914971e0b0c3ac\" pid:4572 exited_at:{seconds:1748314053 nanos:974759625}" May 27 02:47:33.975116 containerd[1531]: time="2025-05-27T02:47:33.975066028Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0df397b3a4b11d9ff14cf06ca8c6ce9a724add9ea5d3502b98914971e0b0c3ac\" id:\"0df397b3a4b11d9ff14cf06ca8c6ce9a724add9ea5d3502b98914971e0b0c3ac\" pid:4572 exited_at:{seconds:1748314053 nanos:974759625}" May 27 02:47:33.975614 containerd[1531]: time="2025-05-27T02:47:33.975568634Z" level=info msg="StartContainer for \"0df397b3a4b11d9ff14cf06ca8c6ce9a724add9ea5d3502b98914971e0b0c3ac\" returns successfully" May 27 02:47:33.994883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0df397b3a4b11d9ff14cf06ca8c6ce9a724add9ea5d3502b98914971e0b0c3ac-rootfs.mount: Deactivated successfully. May 27 02:47:34.731961 kubelet[2619]: E0527 02:47:34.731904 2619 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 27 02:47:34.899807 kubelet[2619]: E0527 02:47:34.899777 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:47:34.904406 containerd[1531]: time="2025-05-27T02:47:34.904369880Z" level=info msg="CreateContainer within sandbox \"35f5be25077af1a35c3ab96a71d9a43a8e163469944f6f21c34d4cd83c58326d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 27 02:47:34.914430 containerd[1531]: time="2025-05-27T02:47:34.914384860Z" level=info msg="Container 4a17dadde4a574945b873588544f5f7b920f5db746f98fc92c1deab23e1ae768: CDI devices from CRI Config.CDIDevices: []" May 27 02:47:34.922501 containerd[1531]: time="2025-05-27T02:47:34.922458981Z" level=info msg="CreateContainer within sandbox \"35f5be25077af1a35c3ab96a71d9a43a8e163469944f6f21c34d4cd83c58326d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4a17dadde4a574945b873588544f5f7b920f5db746f98fc92c1deab23e1ae768\"" May 27 02:47:34.922959 containerd[1531]: time="2025-05-27T02:47:34.922930306Z" level=info msg="StartContainer for \"4a17dadde4a574945b873588544f5f7b920f5db746f98fc92c1deab23e1ae768\"" May 27 02:47:34.923778 containerd[1531]: time="2025-05-27T02:47:34.923756594Z" level=info msg="connecting to shim 4a17dadde4a574945b873588544f5f7b920f5db746f98fc92c1deab23e1ae768" address="unix:///run/containerd/s/3b6fd103ecf3af9f7dc922be91d35c7d12e9615f3177f4008813816f968d48b8" protocol=ttrpc version=3 May 27 02:47:34.945121 systemd[1]: Started cri-containerd-4a17dadde4a574945b873588544f5f7b920f5db746f98fc92c1deab23e1ae768.scope - libcontainer container 4a17dadde4a574945b873588544f5f7b920f5db746f98fc92c1deab23e1ae768. May 27 02:47:34.965450 systemd[1]: cri-containerd-4a17dadde4a574945b873588544f5f7b920f5db746f98fc92c1deab23e1ae768.scope: Deactivated successfully. 
May 27 02:47:34.965901 containerd[1531]: time="2025-05-27T02:47:34.965816414Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a17dadde4a574945b873588544f5f7b920f5db746f98fc92c1deab23e1ae768\" id:\"4a17dadde4a574945b873588544f5f7b920f5db746f98fc92c1deab23e1ae768\" pid:4613 exited_at:{seconds:1748314054 nanos:965654533}" May 27 02:47:34.966208 containerd[1531]: time="2025-05-27T02:47:34.966178338Z" level=info msg="received exit event container_id:\"4a17dadde4a574945b873588544f5f7b920f5db746f98fc92c1deab23e1ae768\" id:\"4a17dadde4a574945b873588544f5f7b920f5db746f98fc92c1deab23e1ae768\" pid:4613 exited_at:{seconds:1748314054 nanos:965654533}" May 27 02:47:34.969177 containerd[1531]: time="2025-05-27T02:47:34.969084727Z" level=info msg="StartContainer for \"4a17dadde4a574945b873588544f5f7b920f5db746f98fc92c1deab23e1ae768\" returns successfully" May 27 02:47:34.984504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a17dadde4a574945b873588544f5f7b920f5db746f98fc92c1deab23e1ae768-rootfs.mount: Deactivated successfully. May 27 02:47:35.904436 kubelet[2619]: E0527 02:47:35.904305 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:47:35.916872 containerd[1531]: time="2025-05-27T02:47:35.915175071Z" level=info msg="CreateContainer within sandbox \"35f5be25077af1a35c3ab96a71d9a43a8e163469944f6f21c34d4cd83c58326d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 27 02:47:35.925114 containerd[1531]: time="2025-05-27T02:47:35.925071088Z" level=info msg="Container 70f2e544eeff467f59cb8817468f4f82216ade2fef1176344b0b685b64e79b0f: CDI devices from CRI Config.CDIDevices: []" May 27 02:47:35.937473 containerd[1531]: time="2025-05-27T02:47:35.937423929Z" level=info msg="CreateContainer within sandbox \"35f5be25077af1a35c3ab96a71d9a43a8e163469944f6f21c34d4cd83c58326d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"70f2e544eeff467f59cb8817468f4f82216ade2fef1176344b0b685b64e79b0f\"" May 27 02:47:35.942151 containerd[1531]: time="2025-05-27T02:47:35.939076105Z" level=info msg="StartContainer for \"70f2e544eeff467f59cb8817468f4f82216ade2fef1176344b0b685b64e79b0f\"" May 27 02:47:35.942351 containerd[1531]: time="2025-05-27T02:47:35.942327377Z" level=info msg="connecting to shim 70f2e544eeff467f59cb8817468f4f82216ade2fef1176344b0b685b64e79b0f" address="unix:///run/containerd/s/3b6fd103ecf3af9f7dc922be91d35c7d12e9615f3177f4008813816f968d48b8" protocol=ttrpc version=3 May 27 02:47:35.975146 systemd[1]: Started cri-containerd-70f2e544eeff467f59cb8817468f4f82216ade2fef1176344b0b685b64e79b0f.scope - libcontainer container 70f2e544eeff467f59cb8817468f4f82216ade2fef1176344b0b685b64e79b0f. 
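The records above start the long-running cilium-agent container 70f2e544... inside the transient cri-containerd-<id>.scope unit, mirroring the scope names systemd has been reporting for every container in this section. A minimal operator-side sketch that asks systemd whether that scope is still active; it is not part of the logged tooling:

import subprocess

def scope_active(container_id: str) -> bool:
    """Return True if the cri-containerd-<id>.scope transient unit for the
    given container is active (the 'Started cri-containerd-....scope' lines
    above show containerd placing each container in such a scope)."""
    unit = f"cri-containerd-{container_id}.scope"
    return subprocess.run(["systemctl", "is-active", "--quiet", unit]).returncode == 0

agent = "70f2e544eeff467f59cb8817468f4f82216ade2fef1176344b0b685b64e79b0f"
print("cilium-agent container running:", scope_active(agent))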
May 27 02:47:36.003947 containerd[1531]: time="2025-05-27T02:47:36.003907579Z" level=info msg="StartContainer for \"70f2e544eeff467f59cb8817468f4f82216ade2fef1176344b0b685b64e79b0f\" returns successfully" May 27 02:47:36.060724 containerd[1531]: time="2025-05-27T02:47:36.060686203Z" level=info msg="TaskExit event in podsandbox handler container_id:\"70f2e544eeff467f59cb8817468f4f82216ade2fef1176344b0b685b64e79b0f\" id:\"17579a630974903e495ab62dc532d6e18abf2d0f694d3261f4c804e97e651b55\" pid:4680 exited_at:{seconds:1748314056 nanos:60241478}" May 27 02:47:36.279971 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 27 02:47:36.550419 kubelet[2619]: I0527 02:47:36.550171 2619 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-27T02:47:36Z","lastTransitionTime":"2025-05-27T02:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 27 02:47:36.910630 kubelet[2619]: E0527 02:47:36.910512 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:47:36.927033 kubelet[2619]: I0527 02:47:36.926875 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fbfdx" podStartSLOduration=4.926859424 podStartE2EDuration="4.926859424s" podCreationTimestamp="2025-05-27 02:47:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:47:36.926531461 +0000 UTC m=+82.327031934" watchObservedRunningTime="2025-05-27 02:47:36.926859424 +0000 UTC m=+82.327359897" May 27 02:47:38.474813 kubelet[2619]: E0527 02:47:38.474758 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:47:38.936530 containerd[1531]: time="2025-05-27T02:47:38.936431809Z" level=info msg="TaskExit event in podsandbox handler container_id:\"70f2e544eeff467f59cb8817468f4f82216ade2fef1176344b0b685b64e79b0f\" id:\"e006d50fa2b4f1b5f36e2efecc8b1172411bb5aec2f317d0edb888246a0a097f\" pid:5113 exit_status:1 exited_at:{seconds:1748314058 nanos:928365375}" May 27 02:47:39.089363 systemd-networkd[1437]: lxc_health: Link UP May 27 02:47:39.090330 systemd-networkd[1437]: lxc_health: Gained carrier May 27 02:47:40.476351 kubelet[2619]: E0527 02:47:40.476131 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:47:40.684874 kubelet[2619]: E0527 02:47:40.684839 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:47:40.919717 kubelet[2619]: E0527 02:47:40.919686 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:47:41.002116 systemd-networkd[1437]: lxc_health: Gained IPv6LL May 27 02:47:41.053212 containerd[1531]: time="2025-05-27T02:47:41.053155672Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"70f2e544eeff467f59cb8817468f4f82216ade2fef1176344b0b685b64e79b0f\" id:\"a5603684436162a93739ddf7ce9615129e93b0e5f1ac700252b41d09bdb99717\" pid:5227 exited_at:{seconds:1748314061 nanos:51188335}" May 27 02:47:41.055176 kubelet[2619]: E0527 02:47:41.055121 2619 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:37328->127.0.0.1:36989: write tcp 10.0.0.44:10250->10.0.0.44:37416: write: broken pipe May 27 02:47:41.055267 kubelet[2619]: E0527 02:47:41.055248 2619 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:37328->127.0.0.1:36989: write tcp 127.0.0.1:37328->127.0.0.1:36989: write: broken pipe May 27 02:47:41.921263 kubelet[2619]: E0527 02:47:41.921221 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:47:43.186900 containerd[1531]: time="2025-05-27T02:47:43.186861460Z" level=info msg="TaskExit event in podsandbox handler container_id:\"70f2e544eeff467f59cb8817468f4f82216ade2fef1176344b0b685b64e79b0f\" id:\"10f0be1bffee0b8dd4de5907c251e06f912701b44ecb9c89b3a98271f9725f27\" pid:5261 exited_at:{seconds:1748314063 nanos:186472937}" May 27 02:47:45.301105 containerd[1531]: time="2025-05-27T02:47:45.300998606Z" level=info msg="TaskExit event in podsandbox handler container_id:\"70f2e544eeff467f59cb8817468f4f82216ade2fef1176344b0b685b64e79b0f\" id:\"03be97263c8175cc5f753418a8593d51c185975f190add2d811065475e78c5a8\" pid:5285 exited_at:{seconds:1748314065 nanos:300679323}" May 27 02:47:45.305984 sshd[4415]: Connection closed by 10.0.0.1 port 32824 May 27 02:47:45.306752 sshd-session[4413]: pam_unix(sshd:session): session closed for user core May 27 02:47:45.311189 systemd[1]: sshd@25-10.0.0.44:22-10.0.0.1:32824.service: Deactivated successfully. May 27 02:47:45.316689 systemd[1]: session-26.scope: Deactivated successfully. May 27 02:47:45.317656 systemd-logind[1506]: Session 26 logged out. Waiting for processes to exit. May 27 02:47:45.322631 systemd-logind[1506]: Removed session 26.