May 7 23:59:11.904687 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 7 23:59:11.904722 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed May 7 22:21:35 -00 2025
May 7 23:59:11.904732 kernel: KASLR enabled
May 7 23:59:11.904738 kernel: efi: EFI v2.7 by EDK II
May 7 23:59:11.904743 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
May 7 23:59:11.904749 kernel: random: crng init done
May 7 23:59:11.904756 kernel: secureboot: Secure boot disabled
May 7 23:59:11.904762 kernel: ACPI: Early table checksum verification disabled
May 7 23:59:11.904768 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
May 7 23:59:11.904775 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 7 23:59:11.904781 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 7 23:59:11.904787 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 7 23:59:11.904793 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 7 23:59:11.904799 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 7 23:59:11.904806 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 7 23:59:11.904814 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 7 23:59:11.904821 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 7 23:59:11.904827 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 7 23:59:11.904833 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 7 23:59:11.904839 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 7 23:59:11.904845 kernel: NUMA: Failed to initialise from firmware
May 7 23:59:11.904851 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 7 23:59:11.904857 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
May 7 23:59:11.904863 kernel: Zone ranges:
May 7 23:59:11.904869 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 7 23:59:11.904876 kernel: DMA32 empty
May 7 23:59:11.904882 kernel: Normal empty
May 7 23:59:11.904888 kernel: Movable zone start for each node
May 7 23:59:11.904895 kernel: Early memory node ranges
May 7 23:59:11.904901 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
May 7 23:59:11.904907 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
May 7 23:59:11.904913 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
May 7 23:59:11.904919 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 7 23:59:11.904925 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 7 23:59:11.904931 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 7 23:59:11.904937 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 7 23:59:11.904943 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 7 23:59:11.904951 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 7 23:59:11.904957 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 7 23:59:11.904963 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 7 23:59:11.904972 kernel: psci: probing for conduit method from ACPI.
May 7 23:59:11.904978 kernel: psci: PSCIv1.1 detected in firmware.
May 7 23:59:11.904985 kernel: psci: Using standard PSCI v0.2 function IDs
May 7 23:59:11.904993 kernel: psci: Trusted OS migration not required
May 7 23:59:11.904999 kernel: psci: SMC Calling Convention v1.1
May 7 23:59:11.905006 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 7 23:59:11.905012 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
May 7 23:59:11.905019 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
May 7 23:59:11.905025 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 7 23:59:11.905032 kernel: Detected PIPT I-cache on CPU0
May 7 23:59:11.905038 kernel: CPU features: detected: GIC system register CPU interface
May 7 23:59:11.905045 kernel: CPU features: detected: Hardware dirty bit management
May 7 23:59:11.905051 kernel: CPU features: detected: Spectre-v4
May 7 23:59:11.905059 kernel: CPU features: detected: Spectre-BHB
May 7 23:59:11.905065 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 7 23:59:11.905072 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 7 23:59:11.905078 kernel: CPU features: detected: ARM erratum 1418040
May 7 23:59:11.905085 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 7 23:59:11.905091 kernel: alternatives: applying boot alternatives
May 7 23:59:11.905098 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=82f9441f083668f7b43f8fe99c3dc9ee441b8a3ef2f63ecd1e548de4dde5b207
May 7 23:59:11.905105 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 7 23:59:11.905112 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 7 23:59:11.905119 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 7 23:59:11.905125 kernel: Fallback order for Node 0: 0
May 7 23:59:11.905133 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 7 23:59:11.905139 kernel: Policy zone: DMA
May 7 23:59:11.905146 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 7 23:59:11.905152 kernel: software IO TLB: area num 4.
May 7 23:59:11.905159 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 7 23:59:11.905165 kernel: Memory: 2387472K/2572288K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 184816K reserved, 0K cma-reserved)
May 7 23:59:11.905172 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 7 23:59:11.905179 kernel: rcu: Preemptible hierarchical RCU implementation.
May 7 23:59:11.905185 kernel: rcu: RCU event tracing is enabled.
May 7 23:59:11.905192 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 7 23:59:11.905199 kernel: Trampoline variant of Tasks RCU enabled.
May 7 23:59:11.905206 kernel: Tracing variant of Tasks RCU enabled.
May 7 23:59:11.905213 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 7 23:59:11.905220 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 7 23:59:11.905227 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 7 23:59:11.905233 kernel: GICv3: 256 SPIs implemented
May 7 23:59:11.905239 kernel: GICv3: 0 Extended SPIs implemented
May 7 23:59:11.905246 kernel: Root IRQ handler: gic_handle_irq
May 7 23:59:11.905252 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 7 23:59:11.905259 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 7 23:59:11.905265 kernel: ITS [mem 0x08080000-0x0809ffff]
May 7 23:59:11.905280 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 7 23:59:11.905287 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 7 23:59:11.905295 kernel: GICv3: using LPI property table @0x00000000400f0000
May 7 23:59:11.905302 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 7 23:59:11.905308 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 7 23:59:11.905321 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 7 23:59:11.905327 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 7 23:59:11.905334 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 7 23:59:11.905340 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 7 23:59:11.905347 kernel: arm-pv: using stolen time PV
May 7 23:59:11.905354 kernel: Console: colour dummy device 80x25
May 7 23:59:11.905360 kernel: ACPI: Core revision 20230628
May 7 23:59:11.905367 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 7 23:59:11.905375 kernel: pid_max: default: 32768 minimum: 301
May 7 23:59:11.905382 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 7 23:59:11.905389 kernel: landlock: Up and running.
May 7 23:59:11.905395 kernel: SELinux: Initializing.
May 7 23:59:11.905402 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 7 23:59:11.905409 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 7 23:59:11.905421 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 7 23:59:11.905428 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 7 23:59:11.905434 kernel: rcu: Hierarchical SRCU implementation.
May 7 23:59:11.905443 kernel: rcu: Max phase no-delay instances is 400.
May 7 23:59:11.905450 kernel: Platform MSI: ITS@0x8080000 domain created
May 7 23:59:11.905457 kernel: PCI/MSI: ITS@0x8080000 domain created
May 7 23:59:11.905463 kernel: Remapping and enabling EFI services.
May 7 23:59:11.905470 kernel: smp: Bringing up secondary CPUs ...
May 7 23:59:11.905477 kernel: Detected PIPT I-cache on CPU1
May 7 23:59:11.905483 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 7 23:59:11.905490 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 7 23:59:11.905497 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 7 23:59:11.905505 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 7 23:59:11.905512 kernel: Detected PIPT I-cache on CPU2
May 7 23:59:11.905523 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 7 23:59:11.905531 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 7 23:59:11.905538 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 7 23:59:11.905545 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 7 23:59:11.905552 kernel: Detected PIPT I-cache on CPU3
May 7 23:59:11.905559 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 7 23:59:11.905567 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 7 23:59:11.905575 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 7 23:59:11.905582 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 7 23:59:11.905589 kernel: smp: Brought up 1 node, 4 CPUs
May 7 23:59:11.905596 kernel: SMP: Total of 4 processors activated.
May 7 23:59:11.905603 kernel: CPU features: detected: 32-bit EL0 Support
May 7 23:59:11.905610 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 7 23:59:11.905617 kernel: CPU features: detected: Common not Private translations
May 7 23:59:11.905624 kernel: CPU features: detected: CRC32 instructions
May 7 23:59:11.905632 kernel: CPU features: detected: Enhanced Virtualization Traps
May 7 23:59:11.905639 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 7 23:59:11.905646 kernel: CPU features: detected: LSE atomic instructions
May 7 23:59:11.905653 kernel: CPU features: detected: Privileged Access Never
May 7 23:59:11.905660 kernel: CPU features: detected: RAS Extension Support
May 7 23:59:11.905667 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 7 23:59:11.905674 kernel: CPU: All CPU(s) started at EL1
May 7 23:59:11.905681 kernel: alternatives: applying system-wide alternatives
May 7 23:59:11.905688 kernel: devtmpfs: initialized
May 7 23:59:11.905695 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 7 23:59:11.905703 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 7 23:59:11.905710 kernel: pinctrl core: initialized pinctrl subsystem
May 7 23:59:11.905717 kernel: SMBIOS 3.0.0 present.
May 7 23:59:11.905724 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 7 23:59:11.905731 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 7 23:59:11.905738 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 7 23:59:11.905745 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 7 23:59:11.905753 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 7 23:59:11.905761 kernel: audit: initializing netlink subsys (disabled)
May 7 23:59:11.905768 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
May 7 23:59:11.905775 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 7 23:59:11.905782 kernel: cpuidle: using governor menu
May 7 23:59:11.905789 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 7 23:59:11.905796 kernel: ASID allocator initialised with 32768 entries
May 7 23:59:11.905803 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 7 23:59:11.905809 kernel: Serial: AMBA PL011 UART driver
May 7 23:59:11.905816 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 7 23:59:11.905825 kernel: Modules: 0 pages in range for non-PLT usage
May 7 23:59:11.905832 kernel: Modules: 509264 pages in range for PLT usage
May 7 23:59:11.905839 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 7 23:59:11.905846 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 7 23:59:11.905853 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 7 23:59:11.905861 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 7 23:59:11.905868 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 7 23:59:11.905875 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 7 23:59:11.905882 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 7 23:59:11.905890 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 7 23:59:11.905897 kernel: ACPI: Added _OSI(Module Device)
May 7 23:59:11.905904 kernel: ACPI: Added _OSI(Processor Device)
May 7 23:59:11.905911 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 7 23:59:11.905917 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 7 23:59:11.905924 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 7 23:59:11.905931 kernel: ACPI: Interpreter enabled
May 7 23:59:11.905938 kernel: ACPI: Using GIC for interrupt routing
May 7 23:59:11.905945 kernel: ACPI: MCFG table detected, 1 entries
May 7 23:59:11.905952 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 7 23:59:11.905960 kernel: printk: console [ttyAMA0] enabled
May 7 23:59:11.905967 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 7 23:59:11.906087 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 7 23:59:11.906159 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 7 23:59:11.906223 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 7 23:59:11.906313 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 7 23:59:11.906376 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 7 23:59:11.906390 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 7 23:59:11.906397 kernel: PCI host bridge to bus 0000:00
May 7 23:59:11.906470 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 7 23:59:11.906528 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 7 23:59:11.906584 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 7 23:59:11.906639 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 7 23:59:11.906714 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 7 23:59:11.906796 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 7 23:59:11.906862 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 7 23:59:11.906925 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 7 23:59:11.906988 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 7 23:59:11.907051 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 7 23:59:11.907114 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 7 23:59:11.907180 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 7 23:59:11.907237 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 7 23:59:11.907307 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 7 23:59:11.907365 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 7 23:59:11.907374 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 7 23:59:11.907382 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 7 23:59:11.907389 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 7 23:59:11.907396 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 7 23:59:11.907406 kernel: iommu: Default domain type: Translated
May 7 23:59:11.907413 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 7 23:59:11.907426 kernel: efivars: Registered efivars operations
May 7 23:59:11.907433 kernel: vgaarb: loaded
May 7 23:59:11.907440 kernel: clocksource: Switched to clocksource arch_sys_counter
May 7 23:59:11.907447 kernel: VFS: Disk quotas dquot_6.6.0
May 7 23:59:11.907455 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 7 23:59:11.907462 kernel: pnp: PnP ACPI init
May 7 23:59:11.907540 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 7 23:59:11.907552 kernel: pnp: PnP ACPI: found 1 devices
May 7 23:59:11.907560 kernel: NET: Registered PF_INET protocol family
May 7 23:59:11.907567 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 7 23:59:11.907574 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 7 23:59:11.907581 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 7 23:59:11.907588 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 7 23:59:11.907596 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 7 23:59:11.907603 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 7 23:59:11.907611 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 7 23:59:11.907619 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 7 23:59:11.907626 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 7 23:59:11.907632 kernel: PCI: CLS 0 bytes, default 64
May 7 23:59:11.907639 kernel: kvm [1]: HYP mode not available
May 7 23:59:11.907646 kernel: Initialise system trusted keyrings
May 7 23:59:11.907653 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 7 23:59:11.907660 kernel: Key type asymmetric registered
May 7 23:59:11.907667 kernel: Asymmetric key parser 'x509' registered
May 7 23:59:11.907676 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 7 23:59:11.907683 kernel: io scheduler mq-deadline registered
May 7 23:59:11.907690 kernel: io scheduler kyber registered
May 7 23:59:11.907696 kernel: io scheduler bfq registered
May 7 23:59:11.907704 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 7 23:59:11.907711 kernel: ACPI: button: Power Button [PWRB]
May 7 23:59:11.907718 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 7 23:59:11.907782 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 7 23:59:11.907792 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 7 23:59:11.907801 kernel: thunder_xcv, ver 1.0
May 7 23:59:11.907808 kernel: thunder_bgx, ver 1.0
May 7 23:59:11.907815 kernel: nicpf, ver 1.0
May 7 23:59:11.907822 kernel: nicvf, ver 1.0
May 7 23:59:11.907891 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 7 23:59:11.907950 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-07T23:59:11 UTC (1746662351)
May 7 23:59:11.907960 kernel: hid: raw HID events driver (C) Jiri Kosina
May 7 23:59:11.907967 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 7 23:59:11.907974 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 7 23:59:11.907983 kernel: watchdog: Hard watchdog permanently disabled
May 7 23:59:11.907990 kernel: NET: Registered PF_INET6 protocol family
May 7 23:59:11.907997 kernel: Segment Routing with IPv6
May 7 23:59:11.908004 kernel: In-situ OAM (IOAM) with IPv6
May 7 23:59:11.908011 kernel: NET: Registered PF_PACKET protocol family
May 7 23:59:11.908018 kernel: Key type dns_resolver registered
May 7 23:59:11.908025 kernel: registered taskstats version 1
May 7 23:59:11.908032 kernel: Loading compiled-in X.509 certificates
May 7 23:59:11.908039 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: f45666b1b2057b901dda15e57012558a26abdeb0'
May 7 23:59:11.908048 kernel: Key type .fscrypt registered
May 7 23:59:11.908055 kernel: Key type fscrypt-provisioning registered
May 7 23:59:11.908062 kernel: ima: No TPM chip found, activating TPM-bypass!
May 7 23:59:11.908069 kernel: ima: Allocated hash algorithm: sha1
May 7 23:59:11.908076 kernel: ima: No architecture policies found
May 7 23:59:11.908083 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 7 23:59:11.908090 kernel: clk: Disabling unused clocks
May 7 23:59:11.908097 kernel: Freeing unused kernel memory: 38336K
May 7 23:59:11.908105 kernel: Run /init as init process
May 7 23:59:11.908112 kernel: with arguments:
May 7 23:59:11.908119 kernel: /init
May 7 23:59:11.908126 kernel: with environment:
May 7 23:59:11.908133 kernel: HOME=/
May 7 23:59:11.908140 kernel: TERM=linux
May 7 23:59:11.908146 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 7 23:59:11.908154 systemd[1]: Successfully made /usr/ read-only.
May 7 23:59:11.908164 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 7 23:59:11.908173 systemd[1]: Detected virtualization kvm.
May 7 23:59:11.908181 systemd[1]: Detected architecture arm64.
May 7 23:59:11.908188 systemd[1]: Running in initrd.
May 7 23:59:11.908195 systemd[1]: No hostname configured, using default hostname.
May 7 23:59:11.908203 systemd[1]: Hostname set to .
May 7 23:59:11.908211 systemd[1]: Initializing machine ID from VM UUID.
May 7 23:59:11.908218 systemd[1]: Queued start job for default target initrd.target.
May 7 23:59:11.908227 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 7 23:59:11.908235 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 7 23:59:11.908243 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 7 23:59:11.908250 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 7 23:59:11.908258 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 7 23:59:11.908266 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 7 23:59:11.908289 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 7 23:59:11.908299 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 7 23:59:11.908307 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 7 23:59:11.908314 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 7 23:59:11.908322 systemd[1]: Reached target paths.target - Path Units.
May 7 23:59:11.908329 systemd[1]: Reached target slices.target - Slice Units.
May 7 23:59:11.908337 systemd[1]: Reached target swap.target - Swaps.
May 7 23:59:11.908344 systemd[1]: Reached target timers.target - Timer Units.
May 7 23:59:11.908352 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 7 23:59:11.908360 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 7 23:59:11.908369 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 7 23:59:11.908377 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 7 23:59:11.908384 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 7 23:59:11.908392 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 7 23:59:11.908400 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 7 23:59:11.908407 systemd[1]: Reached target sockets.target - Socket Units.
May 7 23:59:11.908421 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 7 23:59:11.908430 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 7 23:59:11.908439 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 7 23:59:11.908447 systemd[1]: Starting systemd-fsck-usr.service...
May 7 23:59:11.908455 systemd[1]: Starting systemd-journald.service - Journal Service...
May 7 23:59:11.908462 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 7 23:59:11.908470 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 7 23:59:11.908477 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 7 23:59:11.908485 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 7 23:59:11.908494 systemd[1]: Finished systemd-fsck-usr.service.
May 7 23:59:11.908519 systemd-journald[238]: Collecting audit messages is disabled.
May 7 23:59:11.908539 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 7 23:59:11.908547 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 7 23:59:11.908556 systemd-journald[238]: Journal started
May 7 23:59:11.908574 systemd-journald[238]: Runtime Journal (/run/log/journal/e09888859fe247bd9c66a3f2494ba112) is 5.9M, max 47.3M, 41.4M free.
May 7 23:59:11.900304 systemd-modules-load[239]: Inserted module 'overlay'
May 7 23:59:11.911801 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 7 23:59:11.913301 systemd[1]: Started systemd-journald.service - Journal Service.
May 7 23:59:11.913320 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 7 23:59:11.916347 systemd-modules-load[239]: Inserted module 'br_netfilter'
May 7 23:59:11.917304 kernel: Bridge firewalling registered
May 7 23:59:11.918293 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 7 23:59:11.931512 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 7 23:59:11.933130 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 7 23:59:11.935098 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 7 23:59:11.938436 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 7 23:59:11.943264 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 7 23:59:11.945871 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 7 23:59:11.947935 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 7 23:59:11.965475 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 7 23:59:11.966616 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 7 23:59:11.971524 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 7 23:59:11.983328 dracut-cmdline[279]: dracut-dracut-053
May 7 23:59:11.985790 dracut-cmdline[279]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=82f9441f083668f7b43f8fe99c3dc9ee441b8a3ef2f63ecd1e548de4dde5b207
May 7 23:59:11.994485 systemd-resolved[276]: Positive Trust Anchors:
May 7 23:59:11.994500 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 7 23:59:11.994535 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 7 23:59:11.999148 systemd-resolved[276]: Defaulting to hostname 'linux'.
May 7 23:59:12.000305 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 7 23:59:12.003953 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 7 23:59:12.058298 kernel: SCSI subsystem initialized
May 7 23:59:12.063287 kernel: Loading iSCSI transport class v2.0-870.
May 7 23:59:12.070299 kernel: iscsi: registered transport (tcp)
May 7 23:59:12.082615 kernel: iscsi: registered transport (qla4xxx)
May 7 23:59:12.082628 kernel: QLogic iSCSI HBA Driver
May 7 23:59:12.123206 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 7 23:59:12.143422 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 7 23:59:12.161374 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 7 23:59:12.161407 kernel: device-mapper: uevent: version 1.0.3
May 7 23:59:12.162979 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 7 23:59:12.207307 kernel: raid6: neonx8 gen() 15764 MB/s
May 7 23:59:12.224292 kernel: raid6: neonx4 gen() 15767 MB/s
May 7 23:59:12.241291 kernel: raid6: neonx2 gen() 13171 MB/s
May 7 23:59:12.258292 kernel: raid6: neonx1 gen() 10451 MB/s
May 7 23:59:12.275291 kernel: raid6: int64x8 gen() 6761 MB/s
May 7 23:59:12.292290 kernel: raid6: int64x4 gen() 7325 MB/s
May 7 23:59:12.309289 kernel: raid6: int64x2 gen() 6092 MB/s
May 7 23:59:12.326372 kernel: raid6: int64x1 gen() 5041 MB/s
May 7 23:59:12.326397 kernel: raid6: using algorithm neonx4 gen() 15767 MB/s
May 7 23:59:12.344354 kernel: raid6: .... xor() 12436 MB/s, rmw enabled
May 7 23:59:12.344388 kernel: raid6: using neon recovery algorithm
May 7 23:59:12.349502 kernel: xor: measuring software checksum speed
May 7 23:59:12.349515 kernel: 8regs : 21080 MB/sec
May 7 23:59:12.350786 kernel: 32regs : 21265 MB/sec
May 7 23:59:12.350809 kernel: arm64_neon : 27804 MB/sec
May 7 23:59:12.350826 kernel: xor: using function: arm64_neon (27804 MB/sec)
May 7 23:59:12.402305 kernel: Btrfs loaded, zoned=no, fsverity=no
May 7 23:59:12.412152 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 7 23:59:12.427464 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 7 23:59:12.440622 systemd-udevd[460]: Using default interface naming scheme 'v255'.
May 7 23:59:12.444239 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 7 23:59:12.450403 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 7 23:59:12.461434 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
May 7 23:59:12.485912 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 7 23:59:12.494426 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 7 23:59:12.534445 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 7 23:59:12.541397 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 7 23:59:12.550597 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 7 23:59:12.553011 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 7 23:59:12.556248 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 7 23:59:12.557509 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 7 23:59:12.564406 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 7 23:59:12.574610 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 7 23:59:12.588292 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 7 23:59:12.599642 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 7 23:59:12.599738 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 7 23:59:12.599755 kernel: GPT:9289727 != 19775487 May 7 23:59:12.599766 kernel: GPT:Alternate GPT header not at the end of the disk. May 7 23:59:12.599782 kernel: GPT:9289727 != 19775487 May 7 23:59:12.599794 kernel: GPT: Use GNU Parted to correct GPT errors. May 7 23:59:12.599807 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 7 23:59:12.596855 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 7 23:59:12.596966 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 7 23:59:12.599489 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 7 23:59:12.601085 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
May 7 23:59:12.601208 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 7 23:59:12.606315 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 7 23:59:12.615639 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 7 23:59:12.624324 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (519) May 7 23:59:12.624358 kernel: BTRFS: device fsid a4d66dad-2d34-4ed0-87a7-f6519531b08f devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (524) May 7 23:59:12.633668 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 7 23:59:12.635037 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 7 23:59:12.652358 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 7 23:59:12.659884 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 7 23:59:12.666094 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 7 23:59:12.667299 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 7 23:59:12.677389 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 7 23:59:12.682447 disk-uuid[554]: Primary Header is updated. May 7 23:59:12.682447 disk-uuid[554]: Secondary Entries is updated. May 7 23:59:12.682447 disk-uuid[554]: Secondary Header is updated. May 7 23:59:12.682554 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 7 23:59:12.689551 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 7 23:59:12.696295 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 7 23:59:12.701505 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 7 23:59:13.697297 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 7 23:59:13.697960 disk-uuid[555]: The operation has completed successfully. May 7 23:59:13.721521 systemd[1]: disk-uuid.service: Deactivated successfully. May 7 23:59:13.721613 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 7 23:59:13.755403 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 7 23:59:13.758289 sh[575]: Success May 7 23:59:13.774921 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 7 23:59:13.803751 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 7 23:59:13.817599 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 7 23:59:13.820912 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 7 23:59:13.829008 kernel: BTRFS info (device dm-0): first mount of filesystem a4d66dad-2d34-4ed0-87a7-f6519531b08f May 7 23:59:13.829037 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 7 23:59:13.830169 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 7 23:59:13.830185 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 7 23:59:13.831312 kernel: BTRFS info (device dm-0): using free space tree May 7 23:59:13.836222 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 7 23:59:13.837320 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 7 23:59:13.838082 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 7 23:59:13.841188 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 7 23:59:13.857034 kernel: BTRFS info (device vda6): first mount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb May 7 23:59:13.857072 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 7 23:59:13.857083 kernel: BTRFS info (device vda6): using free space tree May 7 23:59:13.861312 kernel: BTRFS info (device vda6): auto enabling async discard May 7 23:59:13.865292 kernel: BTRFS info (device vda6): last unmount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb May 7 23:59:13.868111 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 7 23:59:13.875425 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 7 23:59:13.935318 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 7 23:59:13.944503 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 7 23:59:13.978752 systemd-networkd[759]: lo: Link UP May 7 23:59:13.978763 systemd-networkd[759]: lo: Gained carrier May 7 23:59:13.979079 ignition[671]: Ignition 2.20.0 May 7 23:59:13.979660 systemd-networkd[759]: Enumeration completed May 7 23:59:13.979086 ignition[671]: Stage: fetch-offline May 7 23:59:13.979772 systemd[1]: Started systemd-networkd.service - Network Configuration. May 7 23:59:13.979117 ignition[671]: no configs at "/usr/lib/ignition/base.d" May 7 23:59:13.980266 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 7 23:59:13.979125 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 7 23:59:13.980281 systemd-networkd[759]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 7 23:59:13.979337 ignition[671]: parsed url from cmdline: "" May 7 23:59:13.980925 systemd-networkd[759]: eth0: Link UP May 7 23:59:13.979340 ignition[671]: no config URL provided May 7 23:59:13.980928 systemd-networkd[759]: eth0: Gained carrier May 7 23:59:13.979345 ignition[671]: reading system config file "/usr/lib/ignition/user.ign" May 7 23:59:13.980934 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 7 23:59:13.979352 ignition[671]: no config at "/usr/lib/ignition/user.ign" May 7 23:59:13.982337 systemd[1]: Reached target network.target - Network. May 7 23:59:13.979373 ignition[671]: op(1): [started] loading QEMU firmware config module May 7 23:59:13.979378 ignition[671]: op(1): executing: "modprobe" "qemu_fw_cfg" May 7 23:59:13.993624 ignition[671]: op(1): [finished] loading QEMU firmware config module May 7 23:59:14.003321 systemd-networkd[759]: eth0: DHCPv4 address 10.0.0.121/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 7 23:59:14.037095 ignition[671]: parsing config with SHA512: 256407307fe3a632ab4f1028b8cb627e9c2fe6a01ad71c0a98ea07625c7e674b912963afb123f5b5096456e4b0a76aef8dba13d1d240be28d0453e05d86ddc18 May 7 23:59:14.042861 unknown[671]: fetched base config from "system" May 7 23:59:14.042877 unknown[671]: fetched user config from "qemu" May 7 23:59:14.043817 ignition[671]: fetch-offline: fetch-offline passed May 7 23:59:14.043923 ignition[671]: Ignition finished successfully May 7 23:59:14.047111 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 7 23:59:14.049553 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 7 23:59:14.061461 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 7 23:59:14.074386 ignition[770]: Ignition 2.20.0 May 7 23:59:14.074396 ignition[770]: Stage: kargs May 7 23:59:14.074570 ignition[770]: no configs at "/usr/lib/ignition/base.d" May 7 23:59:14.077166 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 7 23:59:14.074580 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 7 23:59:14.075462 ignition[770]: kargs: kargs passed May 7 23:59:14.075509 ignition[770]: Ignition finished successfully May 7 23:59:14.090496 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 7 23:59:14.099957 ignition[778]: Ignition 2.20.0 May 7 23:59:14.099968 ignition[778]: Stage: disks May 7 23:59:14.100129 ignition[778]: no configs at "/usr/lib/ignition/base.d" May 7 23:59:14.100139 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 7 23:59:14.102118 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 7 23:59:14.101034 ignition[778]: disks: disks passed May 7 23:59:14.103606 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 7 23:59:14.101077 ignition[778]: Ignition finished successfully May 7 23:59:14.105138 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 7 23:59:14.107059 systemd[1]: Reached target local-fs.target - Local File Systems. May 7 23:59:14.108510 systemd[1]: Reached target sysinit.target - System Initialization. May 7 23:59:14.110291 systemd[1]: Reached target basic.target - Basic System. May 7 23:59:14.124481 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 7 23:59:14.136703 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 7 23:59:14.141215 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 7 23:59:14.145061 systemd[1]: Mounting sysroot.mount - /sysroot... 
May 7 23:59:14.191391 kernel: EXT4-fs (vda9): mounted filesystem f291ddc8-664e-45dc-bbf9-8344dca1a297 r/w with ordered data mode. Quota mode: none. May 7 23:59:14.191943 systemd[1]: Mounted sysroot.mount - /sysroot. May 7 23:59:14.193349 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 7 23:59:14.206358 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 7 23:59:14.208646 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 7 23:59:14.209663 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 7 23:59:14.209704 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 7 23:59:14.209726 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 7 23:59:14.214416 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 7 23:59:14.217706 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 7 23:59:14.225218 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (797) May 7 23:59:14.225803 kernel: BTRFS info (device vda6): first mount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb May 7 23:59:14.225826 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 7 23:59:14.227043 kernel: BTRFS info (device vda6): using free space tree May 7 23:59:14.229369 kernel: BTRFS info (device vda6): auto enabling async discard May 7 23:59:14.230238 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 7 23:59:14.260469 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory May 7 23:59:14.264490 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory May 7 23:59:14.267582 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory May 7 23:59:14.271738 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory May 7 23:59:14.341641 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 7 23:59:14.355398 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 7 23:59:14.357886 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 7 23:59:14.363299 kernel: BTRFS info (device vda6): last unmount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb May 7 23:59:14.379465 ignition[911]: INFO : Ignition 2.20.0 May 7 23:59:14.379465 ignition[911]: INFO : Stage: mount May 7 23:59:14.381045 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d" May 7 23:59:14.381045 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 7 23:59:14.381045 ignition[911]: INFO : mount: mount passed May 7 23:59:14.381045 ignition[911]: INFO : Ignition finished successfully May 7 23:59:14.380822 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 7 23:59:14.382145 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 7 23:59:14.392420 systemd[1]: Starting ignition-files.service - Ignition (files)... May 7 23:59:14.947069 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 7 23:59:14.964441 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 7 23:59:14.970429 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (925) May 7 23:59:14.970460 kernel: BTRFS info (device vda6): first mount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb May 7 23:59:14.972838 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 7 23:59:14.972870 kernel: BTRFS info (device vda6): using free space tree May 7 23:59:14.975291 kernel: BTRFS info (device vda6): auto enabling async discard May 7 23:59:14.976219 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 7 23:59:14.991522 ignition[942]: INFO : Ignition 2.20.0 May 7 23:59:14.992429 ignition[942]: INFO : Stage: files May 7 23:59:14.992429 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" May 7 23:59:14.992429 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 7 23:59:14.995403 ignition[942]: DEBUG : files: compiled without relabeling support, skipping May 7 23:59:14.995403 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 7 23:59:14.995403 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 7 23:59:14.999215 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 7 23:59:14.999215 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 7 23:59:14.999215 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 7 23:59:14.999215 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 7 23:59:14.999215 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 May 7 23:59:14.996215 unknown[942]: wrote ssh authorized keys file for user: core
May 7 23:59:15.041460 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 7 23:59:15.325959 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 7 23:59:15.325959 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 7 23:59:15.329643 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 7 23:59:15.620561 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 7 23:59:15.686791 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 7 23:59:15.688733 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 7 23:59:15.688733 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 7 23:59:15.688733 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 7 23:59:15.688733 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 7 23:59:15.688733 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 7 23:59:15.688733 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 7 23:59:15.688733 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 7 23:59:15.688733 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 7 23:59:15.688733 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 7 23:59:15.688733 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 7 23:59:15.688733 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 7 23:59:15.688733 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 7 23:59:15.688733 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 7 23:59:15.688733 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 May 7 23:59:15.855626 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 7 23:59:15.955398 systemd-networkd[759]: eth0: Gained IPv6LL May 7 23:59:16.244688 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 7 23:59:16.244688 ignition[942]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 7 23:59:16.248879 ignition[942]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 7 23:59:16.248879 ignition[942]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 7 23:59:16.248879 ignition[942]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 7 23:59:16.248879 ignition[942]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 7 23:59:16.248879 ignition[942]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 7 23:59:16.248879 ignition[942]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 7 23:59:16.248879 ignition[942]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 7 23:59:16.248879 ignition[942]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 7 23:59:16.266454 ignition[942]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 7 23:59:16.269598 ignition[942]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 7 23:59:16.272023 ignition[942]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 7 23:59:16.272023 ignition[942]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 7 23:59:16.272023 ignition[942]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 7 23:59:16.272023 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 7 23:59:16.272023 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 7 23:59:16.272023 ignition[942]: INFO : files: files passed May 7 23:59:16.272023 ignition[942]: INFO : Ignition finished successfully May 7 23:59:16.272440 systemd[1]: Finished ignition-files.service - Ignition (files).
May 7 23:59:16.288425 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 7 23:59:16.291065 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 7 23:59:16.292795 systemd[1]: ignition-quench.service: Deactivated successfully. May 7 23:59:16.292874 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 7 23:59:16.298289 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory May 7 23:59:16.300569 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 7 23:59:16.300569 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 7 23:59:16.303457 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 7 23:59:16.302952 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 7 23:59:16.305563 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 7 23:59:16.315413 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 7 23:59:16.332330 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 7 23:59:16.332440 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 7 23:59:16.334562 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 7 23:59:16.336356 systemd[1]: Reached target initrd.target - Initrd Default Target. May 7 23:59:16.338126 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 7 23:59:16.338895 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 7 23:59:16.353499 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
May 7 23:59:16.373436 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 7 23:59:16.380842 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 7 23:59:16.382042 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 7 23:59:16.384044 systemd[1]: Stopped target timers.target - Timer Units. May 7 23:59:16.385780 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 7 23:59:16.385892 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 7 23:59:16.388287 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 7 23:59:16.390306 systemd[1]: Stopped target basic.target - Basic System. May 7 23:59:16.391955 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 7 23:59:16.393702 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 7 23:59:16.395591 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 7 23:59:16.397489 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 7 23:59:16.399266 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 7 23:59:16.401196 systemd[1]: Stopped target sysinit.target - System Initialization. May 7 23:59:16.403137 systemd[1]: Stopped target local-fs.target - Local File Systems. May 7 23:59:16.404856 systemd[1]: Stopped target swap.target - Swaps. May 7 23:59:16.406338 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 7 23:59:16.406472 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 7 23:59:16.408758 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 7 23:59:16.410655 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 7 23:59:16.412524 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
May 7 23:59:16.413344 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 7 23:59:16.414597 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 7 23:59:16.414728 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 7 23:59:16.417531 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 7 23:59:16.417660 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 7 23:59:16.419630 systemd[1]: Stopped target paths.target - Path Units. May 7 23:59:16.421157 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 7 23:59:16.421308 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 7 23:59:16.423191 systemd[1]: Stopped target slices.target - Slice Units. May 7 23:59:16.425008 systemd[1]: Stopped target sockets.target - Socket Units. May 7 23:59:16.426533 systemd[1]: iscsid.socket: Deactivated successfully. May 7 23:59:16.426619 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 7 23:59:16.428267 systemd[1]: iscsiuio.socket: Deactivated successfully. May 7 23:59:16.428360 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 7 23:59:16.430444 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 7 23:59:16.430559 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 7 23:59:16.432288 systemd[1]: ignition-files.service: Deactivated successfully. May 7 23:59:16.432395 systemd[1]: Stopped ignition-files.service - Ignition (files). May 7 23:59:16.446466 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 7 23:59:16.447370 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 7 23:59:16.447518 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
May 7 23:59:16.450657 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 7 23:59:16.452072 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 7 23:59:16.452199 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 7 23:59:16.454238 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 7 23:59:16.454415 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 7 23:59:16.459608 ignition[997]: INFO : Ignition 2.20.0 May 7 23:59:16.459608 ignition[997]: INFO : Stage: umount May 7 23:59:16.459608 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" May 7 23:59:16.459608 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 7 23:59:16.465286 ignition[997]: INFO : umount: umount passed May 7 23:59:16.465286 ignition[997]: INFO : Ignition finished successfully May 7 23:59:16.461917 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 7 23:59:16.462389 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 7 23:59:16.462481 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 7 23:59:16.464474 systemd[1]: ignition-mount.service: Deactivated successfully. May 7 23:59:16.464557 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 7 23:59:16.467104 systemd[1]: Stopped target network.target - Network. May 7 23:59:16.469137 systemd[1]: ignition-disks.service: Deactivated successfully. May 7 23:59:16.469207 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 7 23:59:16.471090 systemd[1]: ignition-kargs.service: Deactivated successfully. May 7 23:59:16.471140 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 7 23:59:16.472855 systemd[1]: ignition-setup.service: Deactivated successfully. May 7 23:59:16.472904 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
May 7 23:59:16.474753 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 7 23:59:16.474797 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 7 23:59:16.476688 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 7 23:59:16.478371 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 7 23:59:16.485129 systemd[1]: systemd-resolved.service: Deactivated successfully. May 7 23:59:16.485252 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 7 23:59:16.488189 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 7 23:59:16.488458 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 7 23:59:16.488501 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 7 23:59:16.492112 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 7 23:59:16.493056 systemd[1]: systemd-networkd.service: Deactivated successfully. May 7 23:59:16.493164 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 7 23:59:16.496096 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 7 23:59:16.496240 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 7 23:59:16.496267 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 7 23:59:16.512446 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 7 23:59:16.513388 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 7 23:59:16.513463 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 7 23:59:16.515680 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 7 23:59:16.515787 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
May 7 23:59:16.518803 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 7 23:59:16.518851 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 7 23:59:16.520775 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 7 23:59:16.524102 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 7 23:59:16.536477 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 7 23:59:16.536587 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 7 23:59:16.537777 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 7 23:59:16.537896 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 7 23:59:16.539843 systemd[1]: network-cleanup.service: Deactivated successfully.
May 7 23:59:16.539932 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 7 23:59:16.541971 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 7 23:59:16.542025 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 7 23:59:16.543171 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 7 23:59:16.543204 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 7 23:59:16.545376 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 7 23:59:16.545433 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 7 23:59:16.547947 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 7 23:59:16.547993 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 7 23:59:16.550700 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 7 23:59:16.550744 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 7 23:59:16.553525 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 7 23:59:16.553568 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 7 23:59:16.565415 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 7 23:59:16.566452 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 7 23:59:16.566511 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 7 23:59:16.570006 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 7 23:59:16.570052 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 7 23:59:16.572934 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 7 23:59:16.573017 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 7 23:59:16.575095 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 7 23:59:16.577299 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 7 23:59:16.585949 systemd[1]: Switching root.
May 7 23:59:16.612326 systemd-journald[238]: Journal stopped
May 7 23:59:17.360070 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
May 7 23:59:17.360123 kernel: SELinux: policy capability network_peer_controls=1
May 7 23:59:17.360135 kernel: SELinux: policy capability open_perms=1
May 7 23:59:17.360145 kernel: SELinux: policy capability extended_socket_class=1
May 7 23:59:17.360154 kernel: SELinux: policy capability always_check_network=0
May 7 23:59:17.360168 kernel: SELinux: policy capability cgroup_seclabel=1
May 7 23:59:17.360177 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 7 23:59:17.360187 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 7 23:59:17.360203 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 7 23:59:17.360212 kernel: audit: type=1403 audit(1746662356.777:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 7 23:59:17.360222 systemd[1]: Successfully loaded SELinux policy in 29.960ms.
May 7 23:59:17.360239 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.676ms.
May 7 23:59:17.360252 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 7 23:59:17.360262 systemd[1]: Detected virtualization kvm.
May 7 23:59:17.360289 systemd[1]: Detected architecture arm64.
May 7 23:59:17.360300 systemd[1]: Detected first boot.
May 7 23:59:17.360324 systemd[1]: Initializing machine ID from VM UUID.
May 7 23:59:17.360337 zram_generator::config[1044]: No configuration found.
May 7 23:59:17.360349 kernel: NET: Registered PF_VSOCK protocol family
May 7 23:59:17.360359 systemd[1]: Populated /etc with preset unit settings.
May 7 23:59:17.360369 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 7 23:59:17.360380 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 7 23:59:17.360416 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 7 23:59:17.360429 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 7 23:59:17.360440 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 7 23:59:17.360452 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 7 23:59:17.360463 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 7 23:59:17.360473 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 7 23:59:17.360483 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 7 23:59:17.360493 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 7 23:59:17.360504 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 7 23:59:17.360514 systemd[1]: Created slice user.slice - User and Session Slice.
May 7 23:59:17.360524 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 7 23:59:17.360536 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 7 23:59:17.360548 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 7 23:59:17.360558 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 7 23:59:17.360568 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 7 23:59:17.360578 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 7 23:59:17.360588 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 7 23:59:17.360598 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 7 23:59:17.360609 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 7 23:59:17.360626 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 7 23:59:17.360636 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 7 23:59:17.360646 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 7 23:59:17.360657 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 7 23:59:17.360667 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 7 23:59:17.360678 systemd[1]: Reached target slices.target - Slice Units.
May 7 23:59:17.360688 systemd[1]: Reached target swap.target - Swaps.
May 7 23:59:17.360698 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 7 23:59:17.360709 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 7 23:59:17.360719 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 7 23:59:17.360730 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 7 23:59:17.360741 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 7 23:59:17.360751 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 7 23:59:17.360766 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 7 23:59:17.360776 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 7 23:59:17.360787 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 7 23:59:17.360797 systemd[1]: Mounting media.mount - External Media Directory...
May 7 23:59:17.360807 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 7 23:59:17.360818 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 7 23:59:17.360830 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 7 23:59:17.360840 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 7 23:59:17.360851 systemd[1]: Reached target machines.target - Containers.
May 7 23:59:17.360862 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 7 23:59:17.360872 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 7 23:59:17.360884 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 7 23:59:17.360895 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 7 23:59:17.360905 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 7 23:59:17.360917 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 7 23:59:17.360928 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 7 23:59:17.360939 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 7 23:59:17.360949 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 7 23:59:17.360960 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 7 23:59:17.360970 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 7 23:59:17.360981 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 7 23:59:17.360992 kernel: fuse: init (API version 7.39)
May 7 23:59:17.361004 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 7 23:59:17.361014 systemd[1]: Stopped systemd-fsck-usr.service.
May 7 23:59:17.361024 kernel: loop: module loaded
May 7 23:59:17.361034 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 7 23:59:17.361044 kernel: ACPI: bus type drm_connector registered
May 7 23:59:17.361054 systemd[1]: Starting systemd-journald.service - Journal Service...
May 7 23:59:17.361064 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 7 23:59:17.361074 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 7 23:59:17.361084 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 7 23:59:17.361096 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 7 23:59:17.361124 systemd-journald[1123]: Collecting audit messages is disabled.
May 7 23:59:17.361147 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 7 23:59:17.361158 systemd-journald[1123]: Journal started
May 7 23:59:17.361178 systemd-journald[1123]: Runtime Journal (/run/log/journal/e09888859fe247bd9c66a3f2494ba112) is 5.9M, max 47.3M, 41.4M free.
May 7 23:59:17.165027 systemd[1]: Queued start job for default target multi-user.target.
May 7 23:59:17.175097 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 7 23:59:17.175502 systemd[1]: systemd-journald.service: Deactivated successfully.
May 7 23:59:17.364298 systemd[1]: verity-setup.service: Deactivated successfully.
May 7 23:59:17.364330 systemd[1]: Stopped verity-setup.service.
May 7 23:59:17.369854 systemd[1]: Started systemd-journald.service - Journal Service.
May 7 23:59:17.370507 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 7 23:59:17.371642 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 7 23:59:17.372875 systemd[1]: Mounted media.mount - External Media Directory.
May 7 23:59:17.374043 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 7 23:59:17.375246 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 7 23:59:17.376492 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 7 23:59:17.377727 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 7 23:59:17.379143 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 7 23:59:17.380695 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 7 23:59:17.380857 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 7 23:59:17.382358 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 7 23:59:17.382527 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 7 23:59:17.383844 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 7 23:59:17.384009 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 7 23:59:17.385330 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 7 23:59:17.385495 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 7 23:59:17.387063 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 7 23:59:17.387214 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 7 23:59:17.388553 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 7 23:59:17.388707 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 7 23:59:17.390142 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 7 23:59:17.391547 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 7 23:59:17.393060 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 7 23:59:17.394598 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 7 23:59:17.406835 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 7 23:59:17.414352 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 7 23:59:17.416316 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 7 23:59:17.417373 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 7 23:59:17.417422 systemd[1]: Reached target local-fs.target - Local File Systems.
May 7 23:59:17.419230 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 7 23:59:17.421395 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 7 23:59:17.423440 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 7 23:59:17.424580 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 7 23:59:17.426019 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 7 23:59:17.427995 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 7 23:59:17.429177 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 7 23:59:17.433435 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 7 23:59:17.434476 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 7 23:59:17.435472 systemd-journald[1123]: Time spent on flushing to /var/log/journal/e09888859fe247bd9c66a3f2494ba112 is 11.048ms for 868 entries.
May 7 23:59:17.435472 systemd-journald[1123]: System Journal (/var/log/journal/e09888859fe247bd9c66a3f2494ba112) is 8M, max 195.6M, 187.6M free.
May 7 23:59:17.462158 systemd-journald[1123]: Received client request to flush runtime journal.
May 7 23:59:17.437168 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 7 23:59:17.464348 kernel: loop0: detected capacity change from 0 to 123192
May 7 23:59:17.445797 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 7 23:59:17.447945 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 7 23:59:17.452247 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 7 23:59:17.454869 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 7 23:59:17.456517 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 7 23:59:17.458083 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 7 23:59:17.473576 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 7 23:59:17.479857 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 7 23:59:17.481511 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 7 23:59:17.484738 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 7 23:59:17.486263 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 7 23:59:17.491379 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 7 23:59:17.492650 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 7 23:59:17.500603 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 7 23:59:17.502942 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 7 23:59:17.505458 udevadm[1171]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 7 23:59:17.517672 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 7 23:59:17.521299 kernel: loop1: detected capacity change from 0 to 113512
May 7 23:59:17.524185 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
May 7 23:59:17.524204 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
May 7 23:59:17.530219 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 7 23:59:17.560288 kernel: loop2: detected capacity change from 0 to 201592
May 7 23:59:17.612302 kernel: loop3: detected capacity change from 0 to 123192
May 7 23:59:17.619292 kernel: loop4: detected capacity change from 0 to 113512
May 7 23:59:17.625291 kernel: loop5: detected capacity change from 0 to 201592
May 7 23:59:17.631913 (sd-merge)[1189]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 7 23:59:17.632381 (sd-merge)[1189]: Merged extensions into '/usr'.
May 7 23:59:17.635787 systemd[1]: Reload requested from client PID 1161 ('systemd-sysext') (unit systemd-sysext.service)...
May 7 23:59:17.635806 systemd[1]: Reloading...
May 7 23:59:17.705289 zram_generator::config[1223]: No configuration found.
May 7 23:59:17.711961 ldconfig[1156]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 7 23:59:17.784783 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 7 23:59:17.834141 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 7 23:59:17.834551 systemd[1]: Reloading finished in 198 ms.
May 7 23:59:17.855991 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 7 23:59:17.857574 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 7 23:59:17.870548 systemd[1]: Starting ensure-sysext.service...
May 7 23:59:17.872265 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 7 23:59:17.885979 systemd[1]: Reload requested from client PID 1251 ('systemctl') (unit ensure-sysext.service)...
May 7 23:59:17.885996 systemd[1]: Reloading...
May 7 23:59:17.890096 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 7 23:59:17.890356 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 7 23:59:17.891002 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 7 23:59:17.891218 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
May 7 23:59:17.891362 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
May 7 23:59:17.893857 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot.
May 7 23:59:17.893871 systemd-tmpfiles[1252]: Skipping /boot
May 7 23:59:17.903028 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot.
May 7 23:59:17.903046 systemd-tmpfiles[1252]: Skipping /boot
May 7 23:59:17.928297 zram_generator::config[1277]: No configuration found.
May 7 23:59:18.006458 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 7 23:59:18.055974 systemd[1]: Reloading finished in 169 ms.
May 7 23:59:18.065765 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 7 23:59:18.067317 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 7 23:59:18.092939 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 7 23:59:18.095437 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 7 23:59:18.097749 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 7 23:59:18.103564 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 7 23:59:18.107454 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 7 23:59:18.113764 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 7 23:59:18.117335 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 7 23:59:18.118695 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 7 23:59:18.120779 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 7 23:59:18.124552 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 7 23:59:18.125579 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 7 23:59:18.125698 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 7 23:59:18.126615 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 7 23:59:18.128306 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 7 23:59:18.130110 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 7 23:59:18.130252 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 7 23:59:18.131998 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 7 23:59:18.132141 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 7 23:59:18.143748 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 7 23:59:18.149897 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 7 23:59:18.152416 systemd-udevd[1327]: Using default interface naming scheme 'v255'.
May 7 23:59:18.155136 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 7 23:59:18.166678 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 7 23:59:18.168897 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 7 23:59:18.174904 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 7 23:59:18.179524 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 7 23:59:18.180768 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 7 23:59:18.180911 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 7 23:59:18.183900 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 7 23:59:18.184967 augenrules[1373]: No rules
May 7 23:59:18.188539 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 7 23:59:18.193539 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 7 23:59:18.195871 systemd[1]: audit-rules.service: Deactivated successfully.
May 7 23:59:18.197317 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 7 23:59:18.199227 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 7 23:59:18.201797 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 7 23:59:18.201965 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 7 23:59:18.204231 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 7 23:59:18.205070 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 7 23:59:18.207012 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 7 23:59:18.207244 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 7 23:59:18.208966 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 7 23:59:18.209118 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 7 23:59:18.213740 systemd[1]: Finished ensure-sysext.service.
May 7 23:59:18.219473 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 7 23:59:18.229757 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 7 23:59:18.237472 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 7 23:59:18.238728 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 7 23:59:18.238793 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 7 23:59:18.242533 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 7 23:59:18.244115 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 7 23:59:18.244325 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 7 23:59:18.254302 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1369)
May 7 23:59:18.293153 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 7 23:59:18.309340 systemd-resolved[1321]: Positive Trust Anchors:
May 7 23:59:18.311475 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 7 23:59:18.313735 systemd-resolved[1321]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 7 23:59:18.313769 systemd-resolved[1321]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 7 23:59:18.322640 systemd-resolved[1321]: Defaulting to hostname 'linux'.
May 7 23:59:18.324356 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 7 23:59:18.325898 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 7 23:59:18.334678 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 7 23:59:18.335950 systemd[1]: Reached target time-set.target - System Time Set.
May 7 23:59:18.343217 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 7 23:59:18.343405 systemd-networkd[1389]: lo: Link UP
May 7 23:59:18.343635 systemd-networkd[1389]: lo: Gained carrier
May 7 23:59:18.344531 systemd-networkd[1389]: Enumeration completed
May 7 23:59:18.344806 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 7 23:59:18.346435 systemd[1]: Reached target network.target - Network.
May 7 23:59:18.352309 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 7 23:59:18.352413 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 7 23:59:18.352473 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 7 23:59:18.353064 systemd-networkd[1389]: eth0: Link UP
May 7 23:59:18.353133 systemd-networkd[1389]: eth0: Gained carrier
May 7 23:59:18.353196 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 7 23:59:18.355155 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 7 23:59:18.369362 systemd-networkd[1389]: eth0: DHCPv4 address 10.0.0.121/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 7 23:59:18.369988 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection.
May 7 23:59:18.370705 systemd-timesyncd[1394]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 7 23:59:18.370758 systemd-timesyncd[1394]: Initial clock synchronization to Wed 2025-05-07 23:59:18.219751 UTC.
May 7 23:59:18.372803 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 7 23:59:18.398494 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 7 23:59:18.405029 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 7 23:59:18.408023 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 7 23:59:18.424035 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 7 23:59:18.434120 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 7 23:59:18.454690 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 7 23:59:18.456102 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 7 23:59:18.457227 systemd[1]: Reached target sysinit.target - System Initialization.
May 7 23:59:18.458378 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 7 23:59:18.459602 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 7 23:59:18.460959 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 7 23:59:18.462216 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 7 23:59:18.463453 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 7 23:59:18.464636 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 7 23:59:18.464671 systemd[1]: Reached target paths.target - Path Units.
May 7 23:59:18.465547 systemd[1]: Reached target timers.target - Timer Units.
May 7 23:59:18.467259 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 7 23:59:18.469581 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 7 23:59:18.472637 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 7 23:59:18.474052 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 7 23:59:18.475316 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 7 23:59:18.478392 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 7 23:59:18.479732 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 7 23:59:18.481901 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 7 23:59:18.483490 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 7 23:59:18.484606 systemd[1]: Reached target sockets.target - Socket Units.
May 7 23:59:18.485561 systemd[1]: Reached target basic.target - Basic System.
May 7 23:59:18.486532 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 7 23:59:18.486564 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 7 23:59:18.487406 systemd[1]: Starting containerd.service - containerd container runtime...
May 7 23:59:18.489100 lvm[1424]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 7 23:59:18.491010 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 7 23:59:18.492806 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 7 23:59:18.497478 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 7 23:59:18.499101 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 7 23:59:18.501564 jq[1427]: false
May 7 23:59:18.500235 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 7 23:59:18.503913 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 7 23:59:18.505965 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 7 23:59:18.511184 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 7 23:59:18.516177 systemd[1]: Starting systemd-logind.service - User Login Management...
May 7 23:59:18.519060 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 7 23:59:18.519157 extend-filesystems[1428]: Found loop3
May 7 23:59:18.519157 extend-filesystems[1428]: Found loop4
May 7 23:59:18.519157 extend-filesystems[1428]: Found loop5
May 7 23:59:18.521546 extend-filesystems[1428]: Found vda
May 7 23:59:18.521546 extend-filesystems[1428]: Found vda1
May 7 23:59:18.521546 extend-filesystems[1428]: Found vda2
May 7 23:59:18.521546 extend-filesystems[1428]: Found vda3
May 7 23:59:18.521546 extend-filesystems[1428]: Found usr
May 7 23:59:18.521546 extend-filesystems[1428]: Found vda4
May 7 23:59:18.521546 extend-filesystems[1428]: Found vda6
May 7 23:59:18.521546 extend-filesystems[1428]: Found vda7
May 7 23:59:18.521546 extend-filesystems[1428]: Found vda9
May 7 23:59:18.521546 extend-filesystems[1428]: Checking size of /dev/vda9
May 7 23:59:18.519821 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 7 23:59:18.528564 dbus-daemon[1426]: [system] SELinux support is enabled
May 7 23:59:18.542795 extend-filesystems[1428]: Resized partition /dev/vda9
May 7 23:59:18.521203 systemd[1]: Starting update-engine.service - Update Engine...
May 7 23:59:18.544636 extend-filesystems[1449]: resize2fs 1.47.1 (20-May-2024)
May 7 23:59:18.550672 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 7 23:59:18.524227 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 7 23:59:18.550866 jq[1444]: true
May 7 23:59:18.530333 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 7 23:59:18.532063 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 7 23:59:18.560343 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1366)
May 7 23:59:18.540423 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 7 23:59:18.540613 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 7 23:59:18.540870 systemd[1]: motdgen.service: Deactivated successfully.
May 7 23:59:18.541019 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 7 23:59:18.547786 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 7 23:59:18.547949 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 7 23:59:18.573317 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 7 23:59:18.567569 (ntainerd)[1453]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 7 23:59:18.582626 tar[1451]: linux-arm64/LICENSE
May 7 23:59:18.582842 update_engine[1443]: I20250507 23:59:18.574231 1443 main.cc:92] Flatcar Update Engine starting
May 7 23:59:18.580516 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 7 23:59:18.583040 jq[1452]: true
May 7 23:59:18.580538 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 7 23:59:18.583230 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 7 23:59:18.583266 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 7 23:59:18.586174 extend-filesystems[1449]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 7 23:59:18.586174 extend-filesystems[1449]: old_desc_blocks = 1, new_desc_blocks = 1
May 7 23:59:18.586174 extend-filesystems[1449]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 7 23:59:18.590465 extend-filesystems[1428]: Resized filesystem in /dev/vda9
May 7 23:59:18.591806 tar[1451]: linux-arm64/helm
May 7 23:59:18.587193 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 7 23:59:18.587403 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 7 23:59:18.594226 update_engine[1443]: I20250507 23:59:18.594154 1443 update_check_scheduler.cc:74] Next update check in 5m57s
May 7 23:59:18.596606 systemd[1]: Started update-engine.service - Update Engine.
May 7 23:59:18.599111 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 7 23:59:18.617230 systemd-logind[1439]: Watching system buttons on /dev/input/event0 (Power Button)
May 7 23:59:18.617452 systemd-logind[1439]: New seat seat0.
May 7 23:59:18.618078 systemd[1]: Started systemd-logind.service - User Login Management.
May 7 23:59:18.653325 locksmithd[1469]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 7 23:59:18.656973 bash[1482]: Updated "/home/core/.ssh/authorized_keys"
May 7 23:59:18.658684 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 7 23:59:18.661873 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 7 23:59:18.777831 containerd[1453]: time="2025-05-07T23:59:18.777729440Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 7 23:59:18.807419 containerd[1453]: time="2025-05-07T23:59:18.807200280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 7 23:59:18.808978 containerd[1453]: time="2025-05-07T23:59:18.808946160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 7 23:59:18.809119 containerd[1453]: time="2025-05-07T23:59:18.809103080Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 7 23:59:18.809179 containerd[1453]: time="2025-05-07T23:59:18.809166360Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 7 23:59:18.809492 containerd[1453]: time="2025-05-07T23:59:18.809471320Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 7 23:59:18.809632 containerd[1453]: time="2025-05-07T23:59:18.809615960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 7 23:59:18.809808 containerd[1453]: time="2025-05-07T23:59:18.809789000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 7 23:59:18.809918 containerd[1453]: time="2025-05-07T23:59:18.809854360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 7 23:59:18.810335 containerd[1453]: time="2025-05-07T23:59:18.810225360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 7 23:59:18.810418 containerd[1453]: time="2025-05-07T23:59:18.810401920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 7 23:59:18.810526 containerd[1453]: time="2025-05-07T23:59:18.810508920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 7 23:59:18.810577 containerd[1453]: time="2025-05-07T23:59:18.810564000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 7 23:59:18.810759 containerd[1453]: time="2025-05-07T23:59:18.810741680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 7 23:59:18.811202 containerd[1453]: time="2025-05-07T23:59:18.811182520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 7 23:59:18.811477 containerd[1453]: time="2025-05-07T23:59:18.811455080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 7 23:59:18.811594 containerd[1453]: time="2025-05-07T23:59:18.811578760Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 7 23:59:18.811800 containerd[1453]: time="2025-05-07T23:59:18.811782200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 7 23:59:18.811972 containerd[1453]: time="2025-05-07T23:59:18.811954480Z" level=info msg="metadata content store policy set" policy=shared
May 7 23:59:18.817354 containerd[1453]: time="2025-05-07T23:59:18.817331200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 7 23:59:18.817535 containerd[1453]: time="2025-05-07T23:59:18.817518320Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 7 23:59:18.817664 containerd[1453]: time="2025-05-07T23:59:18.817651080Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 7 23:59:18.817735 containerd[1453]: time="2025-05-07T23:59:18.817724000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 7 23:59:18.817899 containerd[1453]: time="2025-05-07T23:59:18.817882760Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 7 23:59:18.818126 containerd[1453]: time="2025-05-07T23:59:18.818106960Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 7 23:59:18.818562 containerd[1453]: time="2025-05-07T23:59:18.818532400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 7 23:59:18.818735 containerd[1453]: time="2025-05-07T23:59:18.818672400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 7 23:59:18.818735 containerd[1453]: time="2025-05-07T23:59:18.818694240Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 7 23:59:18.818735 containerd[1453]: time="2025-05-07T23:59:18.818709600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 7 23:59:18.818735 containerd[1453]: time="2025-05-07T23:59:18.818723560Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 7 23:59:18.818735 containerd[1453]: time="2025-05-07T23:59:18.818735640Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 7 23:59:18.818907 containerd[1453]: time="2025-05-07T23:59:18.818747840Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 7 23:59:18.818907 containerd[1453]: time="2025-05-07T23:59:18.818760840Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 7 23:59:18.818907 containerd[1453]: time="2025-05-07T23:59:18.818774480Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 7 23:59:18.818907 containerd[1453]: time="2025-05-07T23:59:18.818786600Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 7 23:59:18.818907 containerd[1453]: time="2025-05-07T23:59:18.818799480Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 7 23:59:18.818907 containerd[1453]: time="2025-05-07T23:59:18.818809640Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 7 23:59:18.818907 containerd[1453]: time="2025-05-07T23:59:18.818828280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 7 23:59:18.818907 containerd[1453]: time="2025-05-07T23:59:18.818842400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 7 23:59:18.818907 containerd[1453]: time="2025-05-07T23:59:18.818853720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 7 23:59:18.818907 containerd[1453]: time="2025-05-07T23:59:18.818865960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 7 23:59:18.818907 containerd[1453]: time="2025-05-07T23:59:18.818877240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 7 23:59:18.818907 containerd[1453]: time="2025-05-07T23:59:18.818894920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 7 23:59:18.818907 containerd[1453]: time="2025-05-07T23:59:18.818906680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 7 23:59:18.819485 containerd[1453]: time="2025-05-07T23:59:18.818919480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 7 23:59:18.819485 containerd[1453]: time="2025-05-07T23:59:18.818932120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 7 23:59:18.819485 containerd[1453]: time="2025-05-07T23:59:18.818946400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 7 23:59:18.819485 containerd[1453]: time="2025-05-07T23:59:18.818957320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 7 23:59:18.819485 containerd[1453]: time="2025-05-07T23:59:18.818968760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 7 23:59:18.819485 containerd[1453]: time="2025-05-07T23:59:18.818981280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 7 23:59:18.819485 containerd[1453]: time="2025-05-07T23:59:18.818994960Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 7 23:59:18.819485 containerd[1453]: time="2025-05-07T23:59:18.819014520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 7 23:59:18.819485 containerd[1453]: time="2025-05-07T23:59:18.819026880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 7 23:59:18.819485 containerd[1453]: time="2025-05-07T23:59:18.819037120Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 7 23:59:18.819485 containerd[1453]: time="2025-05-07T23:59:18.819212720Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 7 23:59:18.819485 containerd[1453]: time="2025-05-07T23:59:18.819228960Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 7 23:59:18.819485 containerd[1453]: time="2025-05-07T23:59:18.819239720Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 7 23:59:18.819910 containerd[1453]: time="2025-05-07T23:59:18.819250760Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 7 23:59:18.819910 containerd[1453]: time="2025-05-07T23:59:18.819259040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 7 23:59:18.819910 containerd[1453]: time="2025-05-07T23:59:18.819303760Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 7 23:59:18.819910 containerd[1453]: time="2025-05-07T23:59:18.819315720Z" level=info msg="NRI interface is disabled by configuration."
May 7 23:59:18.819910 containerd[1453]: time="2025-05-07T23:59:18.819326280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 7 23:59:18.820026 containerd[1453]: time="2025-05-07T23:59:18.819654760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 7 23:59:18.820026 containerd[1453]: time="2025-05-07T23:59:18.819700240Z" level=info msg="Connect containerd service"
May 7 23:59:18.820026 containerd[1453]: time="2025-05-07T23:59:18.819729360Z" level=info msg="using legacy CRI server"
May 7 23:59:18.820026 containerd[1453]: time="2025-05-07T23:59:18.819736720Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 7 23:59:18.820026 containerd[1453]: time="2025-05-07T23:59:18.819954920Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 7 23:59:18.820615 containerd[1453]: time="2025-05-07T23:59:18.820551920Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 7 23:59:18.822755 containerd[1453]: time="2025-05-07T23:59:18.820808080Z" level=info msg="Start subscribing containerd event"
May 7 23:59:18.822755 containerd[1453]: time="2025-05-07T23:59:18.820856400Z" level=info msg="Start recovering state"
May 7 23:59:18.822755 containerd[1453]: time="2025-05-07T23:59:18.820914240Z" level=info msg="Start event monitor"
May 7 23:59:18.822755 containerd[1453]: time="2025-05-07T23:59:18.821047000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 7 23:59:18.822755 containerd[1453]: time="2025-05-07T23:59:18.821085760Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 7 23:59:18.823458 containerd[1453]: time="2025-05-07T23:59:18.823366800Z" level=info msg="Start snapshots syncer"
May 7 23:59:18.823546 containerd[1453]: time="2025-05-07T23:59:18.823532680Z" level=info msg="Start cni network conf syncer for default"
May 7 23:59:18.823671 containerd[1453]: time="2025-05-07T23:59:18.823650480Z" level=info msg="Start streaming server"
May 7 23:59:18.824006 containerd[1453]: time="2025-05-07T23:59:18.823982480Z" level=info msg="containerd successfully booted in 0.047962s"
May 7 23:59:18.824113 systemd[1]: Started containerd.service - containerd container runtime.
May 7 23:59:18.979113 tar[1451]: linux-arm64/README.md
May 7 23:59:18.991737 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 7 23:59:19.091189 sshd_keygen[1446]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 7 23:59:19.108893 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 7 23:59:19.120706 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 7 23:59:19.125490 systemd[1]: issuegen.service: Deactivated successfully.
May 7 23:59:19.125677 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 7 23:59:19.128531 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 7 23:59:19.139949 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 7 23:59:19.151831 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 7 23:59:19.154265 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 7 23:59:19.155904 systemd[1]: Reached target getty.target - Login Prompts.
May 7 23:59:20.179384 systemd-networkd[1389]: eth0: Gained IPv6LL
May 7 23:59:20.181426 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 7 23:59:20.183100 systemd[1]: Reached target network-online.target - Network is Online.
May 7 23:59:20.201576 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 7 23:59:20.203800 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 7 23:59:20.205780 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 7 23:59:20.218997 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 7 23:59:20.219217 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 7 23:59:20.220983 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 7 23:59:20.226027 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 7 23:59:20.708067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:59:20.709651 systemd[1]: Reached target multi-user.target - Multi-User System.
May 7 23:59:20.712046 (kubelet)[1540]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 7 23:59:20.714405 systemd[1]: Startup finished in 536ms (kernel) + 5.083s (initrd) + 3.970s (userspace) = 9.590s.
May 7 23:59:21.116022 kubelet[1540]: E0507 23:59:21.115911 1540 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 7 23:59:21.118101 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 7 23:59:21.118242 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 7 23:59:21.118568 systemd[1]: kubelet.service: Consumed 787ms CPU time, 250.4M memory peak.
May 7 23:59:24.513660 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 7 23:59:24.514790 systemd[1]: Started sshd@0-10.0.0.121:22-10.0.0.1:48538.service - OpenSSH per-connection server daemon (10.0.0.1:48538).
May 7 23:59:24.573153 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 48538 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 7 23:59:24.574781 sshd-session[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:59:24.582213 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 7 23:59:24.591494 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 7 23:59:24.596731 systemd-logind[1439]: New session 1 of user core.
May 7 23:59:24.601299 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 7 23:59:24.603640 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 7 23:59:24.609378 (systemd)[1557]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 7 23:59:24.611326 systemd-logind[1439]: New session c1 of user core.
May 7 23:59:24.705673 systemd[1557]: Queued start job for default target default.target.
May 7 23:59:24.717218 systemd[1557]: Created slice app.slice - User Application Slice.
May 7 23:59:24.717244 systemd[1557]: Reached target paths.target - Paths.
May 7 23:59:24.717302 systemd[1557]: Reached target timers.target - Timers.
May 7 23:59:24.718378 systemd[1557]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 7 23:59:24.726169 systemd[1557]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 7 23:59:24.726227 systemd[1557]: Reached target sockets.target - Sockets.
May 7 23:59:24.726298 systemd[1557]: Reached target basic.target - Basic System.
May 7 23:59:24.726336 systemd[1557]: Reached target default.target - Main User Target.
May 7 23:59:24.726359 systemd[1557]: Startup finished in 110ms.
May 7 23:59:24.726459 systemd[1]: Started user@500.service - User Manager for UID 500.
May 7 23:59:24.727832 systemd[1]: Started session-1.scope - Session 1 of User core.
May 7 23:59:24.786692 systemd[1]: Started sshd@1-10.0.0.121:22-10.0.0.1:48550.service - OpenSSH per-connection server daemon (10.0.0.1:48550).
May 7 23:59:24.830320 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 48550 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 7 23:59:24.831350 sshd-session[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:59:24.835331 systemd-logind[1439]: New session 2 of user core.
May 7 23:59:24.850468 systemd[1]: Started session-2.scope - Session 2 of User core.
May 7 23:59:24.899846 sshd[1570]: Connection closed by 10.0.0.1 port 48550
May 7 23:59:24.900234 sshd-session[1568]: pam_unix(sshd:session): session closed for user core
May 7 23:59:24.915009 systemd[1]: sshd@1-10.0.0.121:22-10.0.0.1:48550.service: Deactivated successfully.
May 7 23:59:24.916257 systemd[1]: session-2.scope: Deactivated successfully.
May 7 23:59:24.918655 systemd-logind[1439]: Session 2 logged out. Waiting for processes to exit.
May 7 23:59:24.927562 systemd[1]: Started sshd@2-10.0.0.121:22-10.0.0.1:48554.service - OpenSSH per-connection server daemon (10.0.0.1:48554).
May 7 23:59:24.928424 systemd-logind[1439]: Removed session 2.
May 7 23:59:24.967969 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 48554 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 7 23:59:24.968987 sshd-session[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:59:24.972954 systemd-logind[1439]: New session 3 of user core.
May 7 23:59:24.983394 systemd[1]: Started session-3.scope - Session 3 of User core.
May 7 23:59:25.029833 sshd[1578]: Connection closed by 10.0.0.1 port 48554
May 7 23:59:25.030175 sshd-session[1575]: pam_unix(sshd:session): session closed for user core
May 7 23:59:25.041447 systemd[1]: sshd@2-10.0.0.121:22-10.0.0.1:48554.service: Deactivated successfully.
May 7 23:59:25.043492 systemd[1]: session-3.scope: Deactivated successfully.
May 7 23:59:25.044063 systemd-logind[1439]: Session 3 logged out. Waiting for processes to exit.
May 7 23:59:25.052557 systemd[1]: Started sshd@3-10.0.0.121:22-10.0.0.1:48562.service - OpenSSH per-connection server daemon (10.0.0.1:48562).
May 7 23:59:25.053448 systemd-logind[1439]: Removed session 3.
May 7 23:59:25.092524 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 48562 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 7 23:59:25.093555 sshd-session[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:59:25.097331 systemd-logind[1439]: New session 4 of user core.
May 7 23:59:25.111389 systemd[1]: Started session-4.scope - Session 4 of User core.
May 7 23:59:25.160341 sshd[1586]: Connection closed by 10.0.0.1 port 48562
May 7 23:59:25.160709 sshd-session[1583]: pam_unix(sshd:session): session closed for user core
May 7 23:59:25.176248 systemd[1]: sshd@3-10.0.0.121:22-10.0.0.1:48562.service: Deactivated successfully.
May 7 23:59:25.179463 systemd[1]: session-4.scope: Deactivated successfully.
May 7 23:59:25.180056 systemd-logind[1439]: Session 4 logged out. Waiting for processes to exit.
May 7 23:59:25.181706 systemd[1]: Started sshd@4-10.0.0.121:22-10.0.0.1:48578.service - OpenSSH per-connection server daemon (10.0.0.1:48578).
May 7 23:59:25.182430 systemd-logind[1439]: Removed session 4.
May 7 23:59:25.224918 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 48578 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 7 23:59:25.225936 sshd-session[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:59:25.229686 systemd-logind[1439]: New session 5 of user core.
May 7 23:59:25.238466 systemd[1]: Started session-5.scope - Session 5 of User core.
May 7 23:59:25.299412 sudo[1595]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 7 23:59:25.300255 sudo[1595]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 7 23:59:25.321020 sudo[1595]: pam_unix(sudo:session): session closed for user root
May 7 23:59:25.322317 sshd[1594]: Connection closed by 10.0.0.1 port 48578
May 7 23:59:25.322812 sshd-session[1591]: pam_unix(sshd:session): session closed for user core
May 7 23:59:25.337089 systemd[1]: sshd@4-10.0.0.121:22-10.0.0.1:48578.service: Deactivated successfully.
May 7 23:59:25.338433 systemd[1]: session-5.scope: Deactivated successfully.
May 7 23:59:25.339966 systemd-logind[1439]: Session 5 logged out. Waiting for processes to exit.
May 7 23:59:25.340844 systemd[1]: Started sshd@5-10.0.0.121:22-10.0.0.1:48594.service - OpenSSH per-connection server daemon (10.0.0.1:48594).
May 7 23:59:25.341541 systemd-logind[1439]: Removed session 5.
May 7 23:59:25.384286 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 48594 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 7 23:59:25.385329 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:59:25.389320 systemd-logind[1439]: New session 6 of user core.
May 7 23:59:25.407456 systemd[1]: Started session-6.scope - Session 6 of User core.
May 7 23:59:25.457626 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 7 23:59:25.457900 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 7 23:59:25.460925 sudo[1605]: pam_unix(sudo:session): session closed for user root
May 7 23:59:25.465602 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 7 23:59:25.465867 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 7 23:59:25.485650 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 7 23:59:25.506551 augenrules[1627]: No rules
May 7 23:59:25.507123 systemd[1]: audit-rules.service: Deactivated successfully.
May 7 23:59:25.507316 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 7 23:59:25.508088 sudo[1604]: pam_unix(sudo:session): session closed for user root
May 7 23:59:25.509255 sshd[1603]: Connection closed by 10.0.0.1 port 48594
May 7 23:59:25.509671 sshd-session[1600]: pam_unix(sshd:session): session closed for user core
May 7 23:59:25.520538 systemd[1]: sshd@5-10.0.0.121:22-10.0.0.1:48594.service: Deactivated successfully.
May 7 23:59:25.521827 systemd[1]: session-6.scope: Deactivated successfully.
May 7 23:59:25.522481 systemd-logind[1439]: Session 6 logged out. Waiting for processes to exit.
May 7 23:59:25.533620 systemd[1]: Started sshd@6-10.0.0.121:22-10.0.0.1:48602.service - OpenSSH per-connection server daemon (10.0.0.1:48602).
May 7 23:59:25.534535 systemd-logind[1439]: Removed session 6.
May 7 23:59:25.574634 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 48602 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 7 23:59:25.576703 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:59:25.580414 systemd-logind[1439]: New session 7 of user core.
May 7 23:59:25.593409 systemd[1]: Started session-7.scope - Session 7 of User core.
May 7 23:59:25.642814 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 7 23:59:25.643498 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 7 23:59:25.997568 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 7 23:59:25.997652 (dockerd)[1661]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 7 23:59:26.244284 dockerd[1661]: time="2025-05-07T23:59:26.244216485Z" level=info msg="Starting up"
May 7 23:59:26.391786 dockerd[1661]: time="2025-05-07T23:59:26.391643458Z" level=info msg="Loading containers: start."
May 7 23:59:26.528323 kernel: Initializing XFRM netlink socket
May 7 23:59:26.596928 systemd-networkd[1389]: docker0: Link UP
May 7 23:59:26.628511 dockerd[1661]: time="2025-05-07T23:59:26.628451332Z" level=info msg="Loading containers: done."
May 7 23:59:26.640778 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3923052485-merged.mount: Deactivated successfully.
May 7 23:59:26.642813 dockerd[1661]: time="2025-05-07T23:59:26.642708706Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 7 23:59:26.642813 dockerd[1661]: time="2025-05-07T23:59:26.642808283Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
May 7 23:59:26.643013 dockerd[1661]: time="2025-05-07T23:59:26.642984012Z" level=info msg="Daemon has completed initialization"
May 7 23:59:26.670673 dockerd[1661]: time="2025-05-07T23:59:26.670583034Z" level=info msg="API listen on /run/docker.sock"
May 7 23:59:26.670798 systemd[1]: Started docker.service - Docker Application Container Engine.
May 7 23:59:27.348693 containerd[1453]: time="2025-05-07T23:59:27.348643180Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
May 7 23:59:28.004911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount734794630.mount: Deactivated successfully.
May 7 23:59:29.394981 containerd[1453]: time="2025-05-07T23:59:29.394873703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:59:29.395830 containerd[1453]: time="2025-05-07T23:59:29.395555265Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233120"
May 7 23:59:29.396619 containerd[1453]: time="2025-05-07T23:59:29.396578960Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:59:29.399541 containerd[1453]: time="2025-05-07T23:59:29.399510502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:59:29.400772 containerd[1453]: time="2025-05-07T23:59:29.400744349Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 2.052057324s"
May 7 23:59:29.400846 containerd[1453]: time="2025-05-07T23:59:29.400775474Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\""
May 7 23:59:29.401524 containerd[1453]: time="2025-05-07T23:59:29.401330029Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
May 7 23:59:31.083383 containerd[1453]: time="2025-05-07T23:59:31.083328394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:59:31.084029 containerd[1453]: time="2025-05-07T23:59:31.083975088Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529573"
May 7 23:59:31.084769 containerd[1453]: time="2025-05-07T23:59:31.084740051Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:59:31.088302 containerd[1453]: time="2025-05-07T23:59:31.087702594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:59:31.088921 containerd[1453]: time="2025-05-07T23:59:31.088887913Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.687530611s"
May 7 23:59:31.088921 containerd[1453]: time="2025-05-07T23:59:31.088919553Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\""
May 7 23:59:31.089534 containerd[1453]: time="2025-05-07T23:59:31.089338236Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
May 7 23:59:31.368672 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 7 23:59:31.377446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 7 23:59:31.475550 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:59:31.478985 (kubelet)[1922]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 7 23:59:31.513842 kubelet[1922]: E0507 23:59:31.513772 1922 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 7 23:59:31.516536 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 7 23:59:31.516681 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 7 23:59:31.517161 systemd[1]: kubelet.service: Consumed 129ms CPU time, 105.1M memory peak.
May 7 23:59:32.563571 containerd[1453]: time="2025-05-07T23:59:32.563504787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:59:32.567085 containerd[1453]: time="2025-05-07T23:59:32.566832808Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482175"
May 7 23:59:32.567979 containerd[1453]: time="2025-05-07T23:59:32.567915278Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:59:32.570674 containerd[1453]: time="2025-05-07T23:59:32.570632775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:59:32.571777 containerd[1453]: time="2025-05-07T23:59:32.571740560Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.48237335s"
May 7 23:59:32.571826 containerd[1453]: time="2025-05-07T23:59:32.571776361Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\""
May 7 23:59:32.572331 containerd[1453]: time="2025-05-07T23:59:32.572265928Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
May 7 23:59:33.818684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1137271666.mount: Deactivated successfully.
May 7 23:59:34.025236 containerd[1453]: time="2025-05-07T23:59:34.025186748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:59:34.025714 containerd[1453]: time="2025-05-07T23:59:34.025676259Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370353"
May 7 23:59:34.026221 containerd[1453]: time="2025-05-07T23:59:34.026193580Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:59:34.028300 containerd[1453]: time="2025-05-07T23:59:34.028251968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:59:34.028949 containerd[1453]: time="2025-05-07T23:59:34.028819759Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.456511646s"
May 7 23:59:34.028949 containerd[1453]: time="2025-05-07T23:59:34.028850999Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\""
May 7 23:59:34.029584 containerd[1453]: time="2025-05-07T23:59:34.029385276Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 7 23:59:34.597488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1232087611.mount: Deactivated successfully.
May 7 23:59:35.982183 containerd[1453]: time="2025-05-07T23:59:35.981912314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:59:35.983064 containerd[1453]: time="2025-05-07T23:59:35.982778661Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
May 7 23:59:35.983986 containerd[1453]: time="2025-05-07T23:59:35.983931767Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:59:35.987058 containerd[1453]: time="2025-05-07T23:59:35.986999960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:59:35.988397 containerd[1453]: time="2025-05-07T23:59:35.988309198Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.958889526s"
May 7 23:59:35.988397 containerd[1453]: time="2025-05-07T23:59:35.988342843Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
May 7 23:59:35.988781 containerd[1453]: time="2025-05-07T23:59:35.988751411Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 7 23:59:36.423180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3509017620.mount: Deactivated successfully.
May 7 23:59:36.427201 containerd[1453]: time="2025-05-07T23:59:36.427162212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:59:36.428065 containerd[1453]: time="2025-05-07T23:59:36.427901808Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
May 7 23:59:36.428832 containerd[1453]: time="2025-05-07T23:59:36.428814706Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:59:36.431082 containerd[1453]: time="2025-05-07T23:59:36.431034532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:59:36.431786 containerd[1453]: time="2025-05-07T23:59:36.431764627Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 442.984997ms"
May 7 23:59:36.431837 containerd[1453]: time="2025-05-07T23:59:36.431788780Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 7 23:59:36.432258 containerd[1453]: time="2025-05-07T23:59:36.432228960Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 7 23:59:36.963473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3018711240.mount: Deactivated successfully.
May 7 23:59:40.537033 containerd[1453]: time="2025-05-07T23:59:40.536981210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:59:40.542341 containerd[1453]: time="2025-05-07T23:59:40.542058405Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471"
May 7 23:59:40.547188 containerd[1453]: time="2025-05-07T23:59:40.547146826Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:59:40.554808 containerd[1453]: time="2025-05-07T23:59:40.554767512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:59:40.555947 containerd[1453]: time="2025-05-07T23:59:40.555909366Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 4.123652694s"
May 7 23:59:40.555983 containerd[1453]: time="2025-05-07T23:59:40.555943727Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
May 7 23:59:41.521538 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 7 23:59:41.531417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 7 23:59:41.626381 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:59:41.629802 (kubelet)[2086]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 7 23:59:41.662100 kubelet[2086]: E0507 23:59:41.662059 2086 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 7 23:59:41.664633 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 7 23:59:41.664779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 7 23:59:41.666394 systemd[1]: kubelet.service: Consumed 117ms CPU time, 104.3M memory peak.
May 7 23:59:45.524758 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:59:45.525036 systemd[1]: kubelet.service: Consumed 117ms CPU time, 104.3M memory peak.
May 7 23:59:45.535513 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 7 23:59:45.556223 systemd[1]: Reload requested from client PID 2102 ('systemctl') (unit session-7.scope)...
May 7 23:59:45.556238 systemd[1]: Reloading...
May 7 23:59:45.621304 zram_generator::config[2149]: No configuration found.
May 7 23:59:45.767191 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 7 23:59:45.840304 systemd[1]: Reloading finished in 283 ms.
May 7 23:59:45.874419 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:59:45.877695 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 7 23:59:45.878099 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 7 23:59:45.878553 systemd[1]: kubelet.service: Deactivated successfully.
May 7 23:59:45.878737 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:59:45.878784 systemd[1]: kubelet.service: Consumed 81ms CPU time, 90.1M memory peak.
May 7 23:59:45.881659 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 7 23:59:45.973885 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:59:45.977737 (kubelet)[2194]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 7 23:59:46.009346 kubelet[2194]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 7 23:59:46.009346 kubelet[2194]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 7 23:59:46.009346 kubelet[2194]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 7 23:59:46.009632 kubelet[2194]: I0507 23:59:46.009415 2194 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 7 23:59:46.658196 kubelet[2194]: I0507 23:59:46.658162 2194 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 7 23:59:46.658791 kubelet[2194]: I0507 23:59:46.658421 2194 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 7 23:59:46.659110 kubelet[2194]: I0507 23:59:46.659090 2194 server.go:954] "Client rotation is on, will bootstrap in background"
May 7 23:59:46.702440 kubelet[2194]: E0507 23:59:46.702409 2194 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.121:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.121:6443: connect: connection refused" logger="UnhandledError"
May 7 23:59:46.703888 kubelet[2194]: I0507 23:59:46.703859 2194 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 7 23:59:46.711454 kubelet[2194]: E0507 23:59:46.711418 2194 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 7 23:59:46.711454 kubelet[2194]: I0507 23:59:46.711452 2194 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 7 23:59:46.714133 kubelet[2194]: I0507 23:59:46.714103 2194 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 7 23:59:46.714814 kubelet[2194]: I0507 23:59:46.714773 2194 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 7 23:59:46.715002 kubelet[2194]: I0507 23:59:46.714811 2194 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 7 23:59:46.715092 kubelet[2194]: I0507 23:59:46.715071 2194 topology_manager.go:138] "Creating topology manager with none policy"
May 7 23:59:46.715092 kubelet[2194]: I0507 23:59:46.715080 2194 container_manager_linux.go:304] "Creating device plugin manager"
May 7 23:59:46.715288 kubelet[2194]: I0507 23:59:46.715252 2194 state_mem.go:36] "Initialized new in-memory state store"
May 7 23:59:46.721246 kubelet[2194]: I0507 23:59:46.721223 2194 kubelet.go:446] "Attempting to sync node with API server"
May 7 23:59:46.721284 kubelet[2194]: I0507 23:59:46.721248 2194 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 7 23:59:46.721284 kubelet[2194]: I0507 23:59:46.721265 2194 kubelet.go:352] "Adding apiserver pod source"
May 7 23:59:46.721284 kubelet[2194]: I0507 23:59:46.721282 2194 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 7 23:59:46.724209 kubelet[2194]: W0507 23:59:46.724162 2194 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused
May 7 23:59:46.724294 kubelet[2194]: E0507 23:59:46.724217 2194 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.121:6443: connect: connection refused" logger="UnhandledError"
May 7 23:59:46.724659 kubelet[2194]: W0507 23:59:46.724622 2194 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused
May 7 23:59:46.724693 kubelet[2194]: E0507 23:59:46.724663 2194 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.121:6443: connect: connection refused" logger="UnhandledError"
May 7 23:59:46.726462 kubelet[2194]: I0507 23:59:46.726414 2194 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 7 23:59:46.730331 kubelet[2194]: I0507 23:59:46.728822 2194 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 7 23:59:46.730331 kubelet[2194]: W0507 23:59:46.728944 2194 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 7 23:59:46.730331 kubelet[2194]: I0507 23:59:46.729770 2194 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 7 23:59:46.730331 kubelet[2194]: I0507 23:59:46.729796 2194 server.go:1287] "Started kubelet"
May 7 23:59:46.730331 kubelet[2194]: I0507 23:59:46.730220 2194 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 7 23:59:46.736296 kubelet[2194]: I0507 23:59:46.733538 2194 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 7 23:59:46.736296 kubelet[2194]: I0507 23:59:46.734071 2194 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 7 23:59:46.736296 kubelet[2194]: I0507 23:59:46.734124 2194 server.go:490] "Adding debug handlers to kubelet server"
May 7 23:59:46.736296 kubelet[2194]: I0507 23:59:46.734328 2194 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 7 23:59:46.736296 kubelet[2194]: I0507 23:59:46.734550 2194 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 7 23:59:46.736485 kubelet[2194]: E0507 23:59:46.736260 2194 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 7 23:59:46.736554 kubelet[2194]: I0507 23:59:46.736544 2194 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 7 23:59:46.736616 kubelet[2194]: E0507 23:59:46.734165 2194 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.121:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.121:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d642fbf907dfe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-07 23:59:46.729778686 +0000 UTC m=+0.749226635,LastTimestamp:2025-05-07 23:59:46.729778686 +0000 UTC m=+0.749226635,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 7 23:59:46.736846 kubelet[2194]: I0507 23:59:46.736818 2194 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 7 23:59:46.736925 kubelet[2194]: E0507 23:59:46.736889 2194 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.121:6443: connect: connection refused" interval="200ms"
May 7 23:59:46.737015 kubelet[2194]: E0507 23:59:46.736996 2194 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 7 23:59:46.737325 kubelet[2194]: W0507 23:59:46.737083 2194 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused
May 7 23:59:46.737325 kubelet[2194]: E0507 23:59:46.737116 2194 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.121:6443: connect: connection refused" logger="UnhandledError"
May 7 23:59:46.737378 kubelet[2194]: I0507 23:59:46.737365 2194 factory.go:221] Registration of the systemd container factory successfully
May 7 23:59:46.737436 kubelet[2194]: I0507 23:59:46.737424 2194 reconciler.go:26] "Reconciler: start to sync state"
May 7 23:59:46.737486 kubelet[2194]: I0507 23:59:46.737446 2194 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 7 23:59:46.739149 kubelet[2194]: I0507 23:59:46.739123 2194 factory.go:221] Registration of the containerd container factory successfully
May 7 23:59:46.747193 kubelet[2194]: I0507 23:59:46.747133 2194 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 7 23:59:46.748252 kubelet[2194]: I0507 23:59:46.748225 2194 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 7 23:59:46.748252 kubelet[2194]: I0507 23:59:46.748247 2194 status_manager.go:227] "Starting to sync pod status with apiserver"
May 7 23:59:46.748368 kubelet[2194]: I0507 23:59:46.748263 2194 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 7 23:59:46.748368 kubelet[2194]: I0507 23:59:46.748278 2194 kubelet.go:2388] "Starting kubelet main sync loop"
May 7 23:59:46.748368 kubelet[2194]: E0507 23:59:46.748319 2194 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 7 23:59:46.750776 kubelet[2194]: W0507 23:59:46.750675 2194 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused
May 7 23:59:46.750776 kubelet[2194]: E0507 23:59:46.750717 2194 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.121:6443: connect: connection refused" logger="UnhandledError"
May 7 23:59:46.750867 kubelet[2194]: I0507 23:59:46.750783 2194 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 7 23:59:46.750867 kubelet[2194]: I0507 23:59:46.750792 2194 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 7 23:59:46.750867 kubelet[2194]: I0507 23:59:46.750806 2194 state_mem.go:36] "Initialized new in-memory state store"
May 7 23:59:46.836922 kubelet[2194]: E0507 23:59:46.836865 2194 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 7 23:59:46.849226 kubelet[2194]: E0507 23:59:46.849199 2194 kubelet.go:2412]
"Skipping pod synchronization" err="container runtime status check may not have completed yet" May 7 23:59:46.866200 kubelet[2194]: I0507 23:59:46.866165 2194 policy_none.go:49] "None policy: Start" May 7 23:59:46.866261 kubelet[2194]: I0507 23:59:46.866206 2194 memory_manager.go:186] "Starting memorymanager" policy="None" May 7 23:59:46.866261 kubelet[2194]: I0507 23:59:46.866219 2194 state_mem.go:35] "Initializing new in-memory state store" May 7 23:59:46.871283 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 7 23:59:46.886156 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 7 23:59:46.889112 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 7 23:59:46.900483 kubelet[2194]: I0507 23:59:46.900365 2194 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 7 23:59:46.900595 kubelet[2194]: I0507 23:59:46.900565 2194 eviction_manager.go:189] "Eviction manager: starting control loop" May 7 23:59:46.900636 kubelet[2194]: I0507 23:59:46.900579 2194 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 7 23:59:46.900940 kubelet[2194]: I0507 23:59:46.900910 2194 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 7 23:59:46.901913 kubelet[2194]: E0507 23:59:46.901894 2194 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 7 23:59:46.902350 kubelet[2194]: E0507 23:59:46.902330 2194 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 7 23:59:46.937550 kubelet[2194]: E0507 23:59:46.937458 2194 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.121:6443: connect: connection refused" interval="400ms" May 7 23:59:47.002027 kubelet[2194]: I0507 23:59:47.001996 2194 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 7 23:59:47.002441 kubelet[2194]: E0507 23:59:47.002407 2194 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.121:6443/api/v1/nodes\": dial tcp 10.0.0.121:6443: connect: connection refused" node="localhost" May 7 23:59:47.058560 systemd[1]: Created slice kubepods-burstable-pod0c324094b8a020c96c5f6c5061a5b8f7.slice - libcontainer container kubepods-burstable-pod0c324094b8a020c96c5f6c5061a5b8f7.slice. May 7 23:59:47.079024 kubelet[2194]: E0507 23:59:47.078990 2194 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 7 23:59:47.081471 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 7 23:59:47.082836 kubelet[2194]: E0507 23:59:47.082808 2194 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 7 23:59:47.084187 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. 
May 7 23:59:47.085402 kubelet[2194]: E0507 23:59:47.085380 2194 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 7 23:59:47.139802 kubelet[2194]: I0507 23:59:47.139751 2194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 7 23:59:47.139802 kubelet[2194]: I0507 23:59:47.139787 2194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0c324094b8a020c96c5f6c5061a5b8f7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0c324094b8a020c96c5f6c5061a5b8f7\") " pod="kube-system/kube-apiserver-localhost" May 7 23:59:47.139802 kubelet[2194]: I0507 23:59:47.139810 2194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:59:47.139955 kubelet[2194]: I0507 23:59:47.139827 2194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:59:47.139955 kubelet[2194]: I0507 23:59:47.139845 2194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:59:47.139955 kubelet[2194]: I0507 23:59:47.139863 2194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:59:47.139955 kubelet[2194]: I0507 23:59:47.139880 2194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0c324094b8a020c96c5f6c5061a5b8f7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0c324094b8a020c96c5f6c5061a5b8f7\") " pod="kube-system/kube-apiserver-localhost" May 7 23:59:47.139955 kubelet[2194]: I0507 23:59:47.139895 2194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0c324094b8a020c96c5f6c5061a5b8f7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0c324094b8a020c96c5f6c5061a5b8f7\") " pod="kube-system/kube-apiserver-localhost" May 7 23:59:47.140061 kubelet[2194]: I0507 23:59:47.139931 2194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:59:47.203988 kubelet[2194]: I0507 23:59:47.203850 2194 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 7 
23:59:47.204352 kubelet[2194]: E0507 23:59:47.204236 2194 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.121:6443/api/v1/nodes\": dial tcp 10.0.0.121:6443: connect: connection refused" node="localhost" May 7 23:59:47.337934 kubelet[2194]: E0507 23:59:47.337872 2194 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.121:6443: connect: connection refused" interval="800ms" May 7 23:59:47.380980 containerd[1453]: time="2025-05-07T23:59:47.380916813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0c324094b8a020c96c5f6c5061a5b8f7,Namespace:kube-system,Attempt:0,}" May 7 23:59:47.383553 containerd[1453]: time="2025-05-07T23:59:47.383515047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 7 23:59:47.386102 containerd[1453]: time="2025-05-07T23:59:47.386067981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 7 23:59:47.597258 kubelet[2194]: W0507 23:59:47.597196 2194 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused May 7 23:59:47.597258 kubelet[2194]: E0507 23:59:47.597237 2194 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.121:6443: connect: connection refused" logger="UnhandledError" May 7 23:59:47.605256 
kubelet[2194]: I0507 23:59:47.605220 2194 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 7 23:59:47.605519 kubelet[2194]: E0507 23:59:47.605485 2194 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.121:6443/api/v1/nodes\": dial tcp 10.0.0.121:6443: connect: connection refused" node="localhost" May 7 23:59:47.855628 kubelet[2194]: W0507 23:59:47.855498 2194 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused May 7 23:59:47.855628 kubelet[2194]: E0507 23:59:47.855561 2194 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.121:6443: connect: connection refused" logger="UnhandledError" May 7 23:59:47.920195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3189220243.mount: Deactivated successfully. 
May 7 23:59:47.923808 containerd[1453]: time="2025-05-07T23:59:47.923761586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 7 23:59:47.926293 containerd[1453]: time="2025-05-07T23:59:47.926228000Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 7 23:59:47.928662 containerd[1453]: time="2025-05-07T23:59:47.928625564Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 7 23:59:47.930288 containerd[1453]: time="2025-05-07T23:59:47.930067197Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 7 23:59:47.931091 containerd[1453]: time="2025-05-07T23:59:47.931061631Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 7 23:59:47.932073 containerd[1453]: time="2025-05-07T23:59:47.931981058Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 7 23:59:47.932849 containerd[1453]: time="2025-05-07T23:59:47.932777141Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 7 23:59:47.933622 containerd[1453]: time="2025-05-07T23:59:47.933537480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 7 23:59:47.935437 
containerd[1453]: time="2025-05-07T23:59:47.935332634Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 549.207559ms" May 7 23:59:47.938832 containerd[1453]: time="2025-05-07T23:59:47.938594930Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 557.599552ms" May 7 23:59:47.939426 containerd[1453]: time="2025-05-07T23:59:47.939401608Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 555.824429ms" May 7 23:59:48.088681 containerd[1453]: time="2025-05-07T23:59:48.088445501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:59:48.088681 containerd[1453]: time="2025-05-07T23:59:48.088520631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:59:48.088681 containerd[1453]: time="2025-05-07T23:59:48.088536825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:59:48.088681 containerd[1453]: time="2025-05-07T23:59:48.088619392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:59:48.091253 containerd[1453]: time="2025-05-07T23:59:48.091051557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:59:48.091253 containerd[1453]: time="2025-05-07T23:59:48.091104577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:59:48.091253 containerd[1453]: time="2025-05-07T23:59:48.091120410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:59:48.091253 containerd[1453]: time="2025-05-07T23:59:48.091190863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:59:48.091914 containerd[1453]: time="2025-05-07T23:59:48.091829372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:59:48.091914 containerd[1453]: time="2025-05-07T23:59:48.091898505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:59:48.091914 containerd[1453]: time="2025-05-07T23:59:48.091910420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:59:48.092016 containerd[1453]: time="2025-05-07T23:59:48.091978753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:59:48.112755 systemd[1]: Started cri-containerd-4292421b2f4ff9a534aa84d23c0d5f7d68d5671c699868d4fe3d7762c01dbc02.scope - libcontainer container 4292421b2f4ff9a534aa84d23c0d5f7d68d5671c699868d4fe3d7762c01dbc02. 
May 7 23:59:48.117209 systemd[1]: Started cri-containerd-3426ab4d29d77c7c6f03b5c8cb639b81e2277179c4ca31d09d3253acd561e0d0.scope - libcontainer container 3426ab4d29d77c7c6f03b5c8cb639b81e2277179c4ca31d09d3253acd561e0d0. May 7 23:59:48.118494 systemd[1]: Started cri-containerd-c47799c18c346a674b9a953d60fa96dfd889689cd77e058f7ed120c23e59cb99.scope - libcontainer container c47799c18c346a674b9a953d60fa96dfd889689cd77e058f7ed120c23e59cb99. May 7 23:59:48.138695 kubelet[2194]: E0507 23:59:48.138575 2194 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.121:6443: connect: connection refused" interval="1.6s" May 7 23:59:48.145375 containerd[1453]: time="2025-05-07T23:59:48.144955634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"4292421b2f4ff9a534aa84d23c0d5f7d68d5671c699868d4fe3d7762c01dbc02\"" May 7 23:59:48.148133 containerd[1453]: time="2025-05-07T23:59:48.148102039Z" level=info msg="CreateContainer within sandbox \"4292421b2f4ff9a534aa84d23c0d5f7d68d5671c699868d4fe3d7762c01dbc02\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 7 23:59:48.153763 containerd[1453]: time="2025-05-07T23:59:48.153722552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"c47799c18c346a674b9a953d60fa96dfd889689cd77e058f7ed120c23e59cb99\"" May 7 23:59:48.153844 containerd[1453]: time="2025-05-07T23:59:48.153820554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0c324094b8a020c96c5f6c5061a5b8f7,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"3426ab4d29d77c7c6f03b5c8cb639b81e2277179c4ca31d09d3253acd561e0d0\"" May 7 23:59:48.156978 containerd[1453]: time="2025-05-07T23:59:48.156579311Z" level=info msg="CreateContainer within sandbox \"3426ab4d29d77c7c6f03b5c8cb639b81e2277179c4ca31d09d3253acd561e0d0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 7 23:59:48.157264 containerd[1453]: time="2025-05-07T23:59:48.157242810Z" level=info msg="CreateContainer within sandbox \"c47799c18c346a674b9a953d60fa96dfd889689cd77e058f7ed120c23e59cb99\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 7 23:59:48.160162 containerd[1453]: time="2025-05-07T23:59:48.160131156Z" level=info msg="CreateContainer within sandbox \"4292421b2f4ff9a534aa84d23c0d5f7d68d5671c699868d4fe3d7762c01dbc02\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2e8aea2c6141ad6844f615ff237a65f525d959ce12e064a1e5168ac8fa6bacf3\"" May 7 23:59:48.160644 containerd[1453]: time="2025-05-07T23:59:48.160619524Z" level=info msg="StartContainer for \"2e8aea2c6141ad6844f615ff237a65f525d959ce12e064a1e5168ac8fa6bacf3\"" May 7 23:59:48.172719 containerd[1453]: time="2025-05-07T23:59:48.172644363Z" level=info msg="CreateContainer within sandbox \"3426ab4d29d77c7c6f03b5c8cb639b81e2277179c4ca31d09d3253acd561e0d0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b5db0123ffa350b045c7f1cb1c3cd9ea57498112f0bac1593e2c5b0b2d6a6650\"" May 7 23:59:48.173572 containerd[1453]: time="2025-05-07T23:59:48.173111940Z" level=info msg="StartContainer for \"b5db0123ffa350b045c7f1cb1c3cd9ea57498112f0bac1593e2c5b0b2d6a6650\"" May 7 23:59:48.173572 containerd[1453]: time="2025-05-07T23:59:48.173222336Z" level=info msg="CreateContainer within sandbox \"c47799c18c346a674b9a953d60fa96dfd889689cd77e058f7ed120c23e59cb99\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"58874090ea857f153dd8250d44e1fe5000c34f54ffc585e9c250f0db28cda75f\"" May 7 
23:59:48.174292 containerd[1453]: time="2025-05-07T23:59:48.173820422Z" level=info msg="StartContainer for \"58874090ea857f153dd8250d44e1fe5000c34f54ffc585e9c250f0db28cda75f\"" May 7 23:59:48.185420 systemd[1]: Started cri-containerd-2e8aea2c6141ad6844f615ff237a65f525d959ce12e064a1e5168ac8fa6bacf3.scope - libcontainer container 2e8aea2c6141ad6844f615ff237a65f525d959ce12e064a1e5168ac8fa6bacf3. May 7 23:59:48.190819 kubelet[2194]: W0507 23:59:48.190678 2194 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused May 7 23:59:48.190819 kubelet[2194]: E0507 23:59:48.190766 2194 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.121:6443: connect: connection refused" logger="UnhandledError" May 7 23:59:48.206485 systemd[1]: Started cri-containerd-58874090ea857f153dd8250d44e1fe5000c34f54ffc585e9c250f0db28cda75f.scope - libcontainer container 58874090ea857f153dd8250d44e1fe5000c34f54ffc585e9c250f0db28cda75f. May 7 23:59:48.207718 systemd[1]: Started cri-containerd-b5db0123ffa350b045c7f1cb1c3cd9ea57498112f0bac1593e2c5b0b2d6a6650.scope - libcontainer container b5db0123ffa350b045c7f1cb1c3cd9ea57498112f0bac1593e2c5b0b2d6a6650. 
May 7 23:59:48.237593 containerd[1453]: time="2025-05-07T23:59:48.237426969Z" level=info msg="StartContainer for \"2e8aea2c6141ad6844f615ff237a65f525d959ce12e064a1e5168ac8fa6bacf3\" returns successfully" May 7 23:59:48.257315 containerd[1453]: time="2025-05-07T23:59:48.257146707Z" level=info msg="StartContainer for \"58874090ea857f153dd8250d44e1fe5000c34f54ffc585e9c250f0db28cda75f\" returns successfully" May 7 23:59:48.257315 containerd[1453]: time="2025-05-07T23:59:48.257213241Z" level=info msg="StartContainer for \"b5db0123ffa350b045c7f1cb1c3cd9ea57498112f0bac1593e2c5b0b2d6a6650\" returns successfully" May 7 23:59:48.274096 kubelet[2194]: W0507 23:59:48.272312 2194 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused May 7 23:59:48.274096 kubelet[2194]: E0507 23:59:48.272389 2194 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.121:6443: connect: connection refused" logger="UnhandledError" May 7 23:59:48.406995 kubelet[2194]: I0507 23:59:48.406883 2194 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 7 23:59:48.407253 kubelet[2194]: E0507 23:59:48.407218 2194 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.121:6443/api/v1/nodes\": dial tcp 10.0.0.121:6443: connect: connection refused" node="localhost" May 7 23:59:48.760021 kubelet[2194]: E0507 23:59:48.759915 2194 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 7 23:59:48.763834 kubelet[2194]: E0507 23:59:48.763805 2194 kubelet.go:3196] 
"No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 7 23:59:48.766448 kubelet[2194]: E0507 23:59:48.766142 2194 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 7 23:59:49.742176 kubelet[2194]: E0507 23:59:49.742124 2194 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 7 23:59:49.769283 kubelet[2194]: E0507 23:59:49.768857 2194 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 7 23:59:49.769283 kubelet[2194]: E0507 23:59:49.769222 2194 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 7 23:59:50.009311 kubelet[2194]: I0507 23:59:50.009174 2194 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 7 23:59:50.018241 kubelet[2194]: I0507 23:59:50.018211 2194 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 7 23:59:50.018241 kubelet[2194]: E0507 23:59:50.018242 2194 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 7 23:59:50.036902 kubelet[2194]: I0507 23:59:50.036875 2194 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 7 23:59:50.042753 kubelet[2194]: E0507 23:59:50.042715 2194 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 7 23:59:50.042850 kubelet[2194]: I0507 23:59:50.042838 2194 kubelet.go:3200] "Creating a mirror pod for 
static pod" pod="kube-system/kube-controller-manager-localhost" May 7 23:59:50.044700 kubelet[2194]: E0507 23:59:50.044523 2194 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 7 23:59:50.044700 kubelet[2194]: I0507 23:59:50.044543 2194 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 7 23:59:50.046048 kubelet[2194]: E0507 23:59:50.046026 2194 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 7 23:59:50.730967 kubelet[2194]: I0507 23:59:50.730900 2194 apiserver.go:52] "Watching apiserver" May 7 23:59:50.737867 kubelet[2194]: I0507 23:59:50.737827 2194 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 7 23:59:51.945951 systemd[1]: Reload requested from client PID 2464 ('systemctl') (unit session-7.scope)... May 7 23:59:51.945967 systemd[1]: Reloading... May 7 23:59:52.008306 zram_generator::config[2508]: No configuration found. May 7 23:59:52.096392 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 7 23:59:52.182162 systemd[1]: Reloading finished in 235 ms. May 7 23:59:52.200817 kubelet[2194]: I0507 23:59:52.200721 2194 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 7 23:59:52.200905 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 7 23:59:52.221068 systemd[1]: kubelet.service: Deactivated successfully. 
May 7 23:59:52.221330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 7 23:59:52.221385 systemd[1]: kubelet.service: Consumed 1.124s CPU time, 127.6M memory peak. May 7 23:59:52.230521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 7 23:59:52.329710 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 7 23:59:52.334389 (kubelet)[2550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 7 23:59:52.371742 kubelet[2550]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 7 23:59:52.371742 kubelet[2550]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 7 23:59:52.371742 kubelet[2550]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 7 23:59:52.372056 kubelet[2550]: I0507 23:59:52.371793 2550 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 7 23:59:52.377941 kubelet[2550]: I0507 23:59:52.377907 2550 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 7 23:59:52.377941 kubelet[2550]: I0507 23:59:52.377933 2550 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 7 23:59:52.378175 kubelet[2550]: I0507 23:59:52.378151 2550 server.go:954] "Client rotation is on, will bootstrap in background" May 7 23:59:52.379344 kubelet[2550]: I0507 23:59:52.379323 2550 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 7 23:59:52.381326 kubelet[2550]: I0507 23:59:52.381300 2550 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 7 23:59:52.384014 kubelet[2550]: E0507 23:59:52.383982 2550 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 7 23:59:52.384014 kubelet[2550]: I0507 23:59:52.384009 2550 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 7 23:59:52.387859 kubelet[2550]: I0507 23:59:52.386683 2550 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 7 23:59:52.387859 kubelet[2550]: I0507 23:59:52.386883 2550 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 7 23:59:52.387859 kubelet[2550]: I0507 23:59:52.386906 2550 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 7 23:59:52.387859 kubelet[2550]: I0507 23:59:52.387164 2550 topology_manager.go:138] "Creating topology manager with none policy" May 
7 23:59:52.388056 kubelet[2550]: I0507 23:59:52.387174 2550 container_manager_linux.go:304] "Creating device plugin manager" May 7 23:59:52.388056 kubelet[2550]: I0507 23:59:52.387220 2550 state_mem.go:36] "Initialized new in-memory state store" May 7 23:59:52.388056 kubelet[2550]: I0507 23:59:52.387372 2550 kubelet.go:446] "Attempting to sync node with API server" May 7 23:59:52.388056 kubelet[2550]: I0507 23:59:52.387383 2550 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 7 23:59:52.388056 kubelet[2550]: I0507 23:59:52.387403 2550 kubelet.go:352] "Adding apiserver pod source" May 7 23:59:52.388056 kubelet[2550]: I0507 23:59:52.387412 2550 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 7 23:59:52.388516 kubelet[2550]: I0507 23:59:52.388486 2550 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 7 23:59:52.392376 kubelet[2550]: I0507 23:59:52.392319 2550 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 7 23:59:52.392975 kubelet[2550]: I0507 23:59:52.392728 2550 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 7 23:59:52.392975 kubelet[2550]: I0507 23:59:52.392759 2550 server.go:1287] "Started kubelet" May 7 23:59:52.392975 kubelet[2550]: I0507 23:59:52.392839 2550 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 7 23:59:52.393048 kubelet[2550]: I0507 23:59:52.393013 2550 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 7 23:59:52.393496 kubelet[2550]: I0507 23:59:52.393229 2550 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 7 23:59:52.393831 kubelet[2550]: I0507 23:59:52.393707 2550 server.go:490] "Adding debug handlers to kubelet server" May 7 23:59:52.396409 kubelet[2550]: I0507 23:59:52.396333 2550 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 7 23:59:52.397382 kubelet[2550]: I0507 23:59:52.397357 2550 volume_manager.go:297] "Starting Kubelet Volume Manager" May 7 23:59:52.397522 kubelet[2550]: I0507 23:59:52.397502 2550 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 7 23:59:52.398513 kubelet[2550]: I0507 23:59:52.398494 2550 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 7 23:59:52.398633 kubelet[2550]: I0507 23:59:52.398612 2550 reconciler.go:26] "Reconciler: start to sync state" May 7 23:59:52.402631 kubelet[2550]: E0507 23:59:52.399690 2550 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 7 23:59:52.406977 kubelet[2550]: I0507 23:59:52.406387 2550 factory.go:221] Registration of the systemd container factory successfully May 7 23:59:52.406977 kubelet[2550]: I0507 23:59:52.406492 2550 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 7 23:59:52.410656 kubelet[2550]: E0507 23:59:52.410623 2550 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 7 23:59:52.412041 kubelet[2550]: I0507 23:59:52.412015 2550 factory.go:221] Registration of the containerd container factory successfully May 7 23:59:52.415417 kubelet[2550]: I0507 23:59:52.415366 2550 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 7 23:59:52.417320 kubelet[2550]: I0507 23:59:52.417010 2550 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 7 23:59:52.417320 kubelet[2550]: I0507 23:59:52.417033 2550 status_manager.go:227] "Starting to sync pod status with apiserver" May 7 23:59:52.417320 kubelet[2550]: I0507 23:59:52.417058 2550 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 7 23:59:52.417320 kubelet[2550]: I0507 23:59:52.417066 2550 kubelet.go:2388] "Starting kubelet main sync loop" May 7 23:59:52.417320 kubelet[2550]: E0507 23:59:52.417103 2550 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 7 23:59:52.443793 kubelet[2550]: I0507 23:59:52.443770 2550 cpu_manager.go:221] "Starting CPU manager" policy="none" May 7 23:59:52.443793 kubelet[2550]: I0507 23:59:52.443784 2550 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 7 23:59:52.443793 kubelet[2550]: I0507 23:59:52.443802 2550 state_mem.go:36] "Initialized new in-memory state store" May 7 23:59:52.443956 kubelet[2550]: I0507 23:59:52.443940 2550 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 7 23:59:52.443987 kubelet[2550]: I0507 23:59:52.443955 2550 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 7 23:59:52.443987 kubelet[2550]: I0507 23:59:52.443972 2550 policy_none.go:49] "None policy: Start" May 7 23:59:52.443987 kubelet[2550]: I0507 23:59:52.443979 2550 memory_manager.go:186] "Starting memorymanager" policy="None" May 7 23:59:52.443987 kubelet[2550]: I0507 23:59:52.443987 2550 state_mem.go:35] "Initializing new in-memory state store" May 7 23:59:52.444085 kubelet[2550]: I0507 23:59:52.444074 2550 state_mem.go:75] "Updated machine memory state" May 7 23:59:52.447952 kubelet[2550]: I0507 23:59:52.447623 2550 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 7 23:59:52.447952 kubelet[2550]: I0507 23:59:52.447784 
2550 eviction_manager.go:189] "Eviction manager: starting control loop" May 7 23:59:52.447952 kubelet[2550]: I0507 23:59:52.447797 2550 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 7 23:59:52.448603 kubelet[2550]: I0507 23:59:52.448587 2550 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 7 23:59:52.448979 kubelet[2550]: E0507 23:59:52.448955 2550 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 7 23:59:52.517766 kubelet[2550]: I0507 23:59:52.517669 2550 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 7 23:59:52.517766 kubelet[2550]: I0507 23:59:52.517669 2550 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 7 23:59:52.518436 kubelet[2550]: I0507 23:59:52.517680 2550 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 7 23:59:52.549596 kubelet[2550]: I0507 23:59:52.549527 2550 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 7 23:59:52.556140 kubelet[2550]: I0507 23:59:52.556116 2550 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 7 23:59:52.556221 kubelet[2550]: I0507 23:59:52.556180 2550 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 7 23:59:52.699965 kubelet[2550]: I0507 23:59:52.699914 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:59:52.699965 kubelet[2550]: I0507 23:59:52.699954 2550 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0c324094b8a020c96c5f6c5061a5b8f7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0c324094b8a020c96c5f6c5061a5b8f7\") " pod="kube-system/kube-apiserver-localhost" May 7 23:59:52.699965 kubelet[2550]: I0507 23:59:52.699972 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0c324094b8a020c96c5f6c5061a5b8f7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0c324094b8a020c96c5f6c5061a5b8f7\") " pod="kube-system/kube-apiserver-localhost" May 7 23:59:52.700180 kubelet[2550]: I0507 23:59:52.699988 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:59:52.700180 kubelet[2550]: I0507 23:59:52.700010 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:59:52.700180 kubelet[2550]: I0507 23:59:52.700027 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:59:52.700180 kubelet[2550]: I0507 23:59:52.700066 2550 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0c324094b8a020c96c5f6c5061a5b8f7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0c324094b8a020c96c5f6c5061a5b8f7\") " pod="kube-system/kube-apiserver-localhost" May 7 23:59:52.700180 kubelet[2550]: I0507 23:59:52.700103 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:59:52.700316 kubelet[2550]: I0507 23:59:52.700123 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 7 23:59:52.945817 sudo[2583]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 7 23:59:52.946096 sudo[2583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 7 23:59:53.368766 sudo[2583]: pam_unix(sudo:session): session closed for user root May 7 23:59:53.389423 kubelet[2550]: I0507 23:59:53.389354 2550 apiserver.go:52] "Watching apiserver" May 7 23:59:53.399022 kubelet[2550]: I0507 23:59:53.398990 2550 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 7 23:59:53.429512 kubelet[2550]: I0507 23:59:53.429483 2550 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 7 23:59:53.429879 kubelet[2550]: I0507 23:59:53.429723 2550 kubelet.go:3200] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-localhost" May 7 23:59:53.439834 kubelet[2550]: E0507 23:59:53.439794 2550 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 7 23:59:53.439935 kubelet[2550]: E0507 23:59:53.439863 2550 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 7 23:59:53.453905 kubelet[2550]: I0507 23:59:53.453831 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.4538153440000001 podStartE2EDuration="1.453815344s" podCreationTimestamp="2025-05-07 23:59:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:59:53.447111532 +0000 UTC m=+1.109438861" watchObservedRunningTime="2025-05-07 23:59:53.453815344 +0000 UTC m=+1.116142673" May 7 23:59:53.461978 kubelet[2550]: I0507 23:59:53.461919 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.461905046 podStartE2EDuration="1.461905046s" podCreationTimestamp="2025-05-07 23:59:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:59:53.461534534 +0000 UTC m=+1.123861863" watchObservedRunningTime="2025-05-07 23:59:53.461905046 +0000 UTC m=+1.124232375" May 7 23:59:53.462086 kubelet[2550]: I0507 23:59:53.462005 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.462000364 podStartE2EDuration="1.462000364s" podCreationTimestamp="2025-05-07 23:59:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-05-07 23:59:53.454247295 +0000 UTC m=+1.116574624" watchObservedRunningTime="2025-05-07 23:59:53.462000364 +0000 UTC m=+1.124327733" May 7 23:59:56.220566 sudo[1640]: pam_unix(sudo:session): session closed for user root May 7 23:59:56.221796 sshd[1639]: Connection closed by 10.0.0.1 port 48602 May 7 23:59:56.222327 sshd-session[1635]: pam_unix(sshd:session): session closed for user core May 7 23:59:56.224935 systemd[1]: sshd@6-10.0.0.121:22-10.0.0.1:48602.service: Deactivated successfully. May 7 23:59:56.226607 systemd[1]: session-7.scope: Deactivated successfully. May 7 23:59:56.226767 systemd[1]: session-7.scope: Consumed 8.333s CPU time, 260.8M memory peak. May 7 23:59:56.228181 systemd-logind[1439]: Session 7 logged out. Waiting for processes to exit. May 7 23:59:56.229208 systemd-logind[1439]: Removed session 7. May 7 23:59:58.387308 kubelet[2550]: I0507 23:59:58.387188 2550 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 7 23:59:58.388167 containerd[1453]: time="2025-05-07T23:59:58.388005116Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 7 23:59:58.388440 kubelet[2550]: I0507 23:59:58.388265 2550 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 7 23:59:58.986435 systemd[1]: Created slice kubepods-besteffort-podcf6c64c8_442b_4cdb_aa95_06e3a60a3a7b.slice - libcontainer container kubepods-besteffort-podcf6c64c8_442b_4cdb_aa95_06e3a60a3a7b.slice. May 7 23:59:59.014645 systemd[1]: Created slice kubepods-burstable-pod2a1d1d83_bccf_4511_9e39_3c2e49cae2bd.slice - libcontainer container kubepods-burstable-pod2a1d1d83_bccf_4511_9e39_3c2e49cae2bd.slice. 
May 7 23:59:59.045963 kubelet[2550]: I0507 23:59:59.045784 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tkmz\" (UniqueName: \"kubernetes.io/projected/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-kube-api-access-8tkmz\") pod \"cilium-84lvs\" (UID: \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") " pod="kube-system/cilium-84lvs" May 7 23:59:59.045963 kubelet[2550]: I0507 23:59:59.045831 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-hostproc\") pod \"cilium-84lvs\" (UID: \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") " pod="kube-system/cilium-84lvs" May 7 23:59:59.045963 kubelet[2550]: I0507 23:59:59.045850 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-cilium-cgroup\") pod \"cilium-84lvs\" (UID: \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") " pod="kube-system/cilium-84lvs" May 7 23:59:59.045963 kubelet[2550]: I0507 23:59:59.045865 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-lib-modules\") pod \"cilium-84lvs\" (UID: \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") " pod="kube-system/cilium-84lvs" May 7 23:59:59.045963 kubelet[2550]: I0507 23:59:59.045880 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-xtables-lock\") pod \"cilium-84lvs\" (UID: \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") " pod="kube-system/cilium-84lvs" May 7 23:59:59.045963 kubelet[2550]: I0507 23:59:59.045894 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-clustermesh-secrets\") pod \"cilium-84lvs\" (UID: \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") " pod="kube-system/cilium-84lvs" May 7 23:59:59.046235 kubelet[2550]: I0507 23:59:59.045910 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cf6c64c8-442b-4cdb-aa95-06e3a60a3a7b-kube-proxy\") pod \"kube-proxy-vqvc7\" (UID: \"cf6c64c8-442b-4cdb-aa95-06e3a60a3a7b\") " pod="kube-system/kube-proxy-vqvc7" May 7 23:59:59.046235 kubelet[2550]: I0507 23:59:59.045926 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf6c64c8-442b-4cdb-aa95-06e3a60a3a7b-xtables-lock\") pod \"kube-proxy-vqvc7\" (UID: \"cf6c64c8-442b-4cdb-aa95-06e3a60a3a7b\") " pod="kube-system/kube-proxy-vqvc7" May 7 23:59:59.046235 kubelet[2550]: I0507 23:59:59.045943 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnj6h\" (UniqueName: \"kubernetes.io/projected/cf6c64c8-442b-4cdb-aa95-06e3a60a3a7b-kube-api-access-lnj6h\") pod \"kube-proxy-vqvc7\" (UID: \"cf6c64c8-442b-4cdb-aa95-06e3a60a3a7b\") " pod="kube-system/kube-proxy-vqvc7" May 7 23:59:59.046235 kubelet[2550]: I0507 23:59:59.045985 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-cni-path\") pod \"cilium-84lvs\" (UID: \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") " pod="kube-system/cilium-84lvs" May 7 23:59:59.046235 kubelet[2550]: I0507 23:59:59.046001 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-host-proc-sys-net\") pod \"cilium-84lvs\" (UID: \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") " pod="kube-system/cilium-84lvs" May 7 23:59:59.046640 kubelet[2550]: I0507 23:59:59.046433 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-hubble-tls\") pod \"cilium-84lvs\" (UID: \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") " pod="kube-system/cilium-84lvs" May 7 23:59:59.046640 kubelet[2550]: I0507 23:59:59.046478 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf6c64c8-442b-4cdb-aa95-06e3a60a3a7b-lib-modules\") pod \"kube-proxy-vqvc7\" (UID: \"cf6c64c8-442b-4cdb-aa95-06e3a60a3a7b\") " pod="kube-system/kube-proxy-vqvc7" May 7 23:59:59.046640 kubelet[2550]: I0507 23:59:59.046503 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-cilium-run\") pod \"cilium-84lvs\" (UID: \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") " pod="kube-system/cilium-84lvs" May 7 23:59:59.046640 kubelet[2550]: I0507 23:59:59.046529 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-cilium-config-path\") pod \"cilium-84lvs\" (UID: \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") " pod="kube-system/cilium-84lvs" May 7 23:59:59.046640 kubelet[2550]: I0507 23:59:59.046546 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-host-proc-sys-kernel\") pod \"cilium-84lvs\" (UID: 
\"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") " pod="kube-system/cilium-84lvs" May 7 23:59:59.046640 kubelet[2550]: I0507 23:59:59.046562 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-bpf-maps\") pod \"cilium-84lvs\" (UID: \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") " pod="kube-system/cilium-84lvs" May 7 23:59:59.046796 kubelet[2550]: I0507 23:59:59.046577 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-etc-cni-netd\") pod \"cilium-84lvs\" (UID: \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") " pod="kube-system/cilium-84lvs" May 7 23:59:59.159057 kubelet[2550]: E0507 23:59:59.159018 2550 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 7 23:59:59.159057 kubelet[2550]: E0507 23:59:59.159051 2550 projected.go:194] Error preparing data for projected volume kube-api-access-lnj6h for pod kube-system/kube-proxy-vqvc7: configmap "kube-root-ca.crt" not found May 7 23:59:59.159205 kubelet[2550]: E0507 23:59:59.159064 2550 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 7 23:59:59.159205 kubelet[2550]: E0507 23:59:59.159116 2550 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf6c64c8-442b-4cdb-aa95-06e3a60a3a7b-kube-api-access-lnj6h podName:cf6c64c8-442b-4cdb-aa95-06e3a60a3a7b nodeName:}" failed. No retries permitted until 2025-05-07 23:59:59.659093126 +0000 UTC m=+7.321420455 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lnj6h" (UniqueName: "kubernetes.io/projected/cf6c64c8-442b-4cdb-aa95-06e3a60a3a7b-kube-api-access-lnj6h") pod "kube-proxy-vqvc7" (UID: "cf6c64c8-442b-4cdb-aa95-06e3a60a3a7b") : configmap "kube-root-ca.crt" not found May 7 23:59:59.159354 kubelet[2550]: E0507 23:59:59.159309 2550 projected.go:194] Error preparing data for projected volume kube-api-access-8tkmz for pod kube-system/cilium-84lvs: configmap "kube-root-ca.crt" not found May 7 23:59:59.159587 kubelet[2550]: E0507 23:59:59.159441 2550 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-kube-api-access-8tkmz podName:2a1d1d83-bccf-4511-9e39-3c2e49cae2bd nodeName:}" failed. No retries permitted until 2025-05-07 23:59:59.659339322 +0000 UTC m=+7.321666651 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8tkmz" (UniqueName: "kubernetes.io/projected/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-kube-api-access-8tkmz") pod "cilium-84lvs" (UID: "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd") : configmap "kube-root-ca.crt" not found May 7 23:59:59.442559 systemd[1]: Created slice kubepods-besteffort-pod283c3682_07eb_4a65_b6c1_f0766a0e2485.slice - libcontainer container kubepods-besteffort-pod283c3682_07eb_4a65_b6c1_f0766a0e2485.slice. 
May 7 23:59:59.450190 kubelet[2550]: I0507 23:59:59.450137 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s86s\" (UniqueName: \"kubernetes.io/projected/283c3682-07eb-4a65-b6c1-f0766a0e2485-kube-api-access-9s86s\") pod \"cilium-operator-6c4d7847fc-rr4jn\" (UID: \"283c3682-07eb-4a65-b6c1-f0766a0e2485\") " pod="kube-system/cilium-operator-6c4d7847fc-rr4jn" May 7 23:59:59.450190 kubelet[2550]: I0507 23:59:59.450180 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/283c3682-07eb-4a65-b6c1-f0766a0e2485-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-rr4jn\" (UID: \"283c3682-07eb-4a65-b6c1-f0766a0e2485\") " pod="kube-system/cilium-operator-6c4d7847fc-rr4jn" May 7 23:59:59.751361 containerd[1453]: time="2025-05-07T23:59:59.751223506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rr4jn,Uid:283c3682-07eb-4a65-b6c1-f0766a0e2485,Namespace:kube-system,Attempt:0,}" May 7 23:59:59.773574 containerd[1453]: time="2025-05-07T23:59:59.773458593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:59:59.773574 containerd[1453]: time="2025-05-07T23:59:59.773534312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:59:59.773574 containerd[1453]: time="2025-05-07T23:59:59.773546152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:59:59.774414 containerd[1453]: time="2025-05-07T23:59:59.773639151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:59:59.794589 systemd[1]: Started cri-containerd-88fd4e6d19c8a61c343f2410d38563d4630f0d1441836388a7eb9a748d07e060.scope - libcontainer container 88fd4e6d19c8a61c343f2410d38563d4630f0d1441836388a7eb9a748d07e060. May 7 23:59:59.822187 containerd[1453]: time="2025-05-07T23:59:59.822144782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rr4jn,Uid:283c3682-07eb-4a65-b6c1-f0766a0e2485,Namespace:kube-system,Attempt:0,} returns sandbox id \"88fd4e6d19c8a61c343f2410d38563d4630f0d1441836388a7eb9a748d07e060\"" May 7 23:59:59.824804 containerd[1453]: time="2025-05-07T23:59:59.824630863Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 7 23:59:59.897500 containerd[1453]: time="2025-05-07T23:59:59.897451389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vqvc7,Uid:cf6c64c8-442b-4cdb-aa95-06e3a60a3a7b,Namespace:kube-system,Attempt:0,}" May 7 23:59:59.917009 containerd[1453]: time="2025-05-07T23:59:59.916308611Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:59:59.917159 containerd[1453]: time="2025-05-07T23:59:59.916843482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:59:59.917159 containerd[1453]: time="2025-05-07T23:59:59.916863802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:59:59.917159 containerd[1453]: time="2025-05-07T23:59:59.916966760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:59:59.919596 containerd[1453]: time="2025-05-07T23:59:59.919205125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-84lvs,Uid:2a1d1d83-bccf-4511-9e39-3c2e49cae2bd,Namespace:kube-system,Attempt:0,}" May 7 23:59:59.934495 systemd[1]: Started cri-containerd-028d83fd925dff321f8b8fbe06a404b7169e97697f9b0a75cb0361501ac21dad.scope - libcontainer container 028d83fd925dff321f8b8fbe06a404b7169e97697f9b0a75cb0361501ac21dad. May 7 23:59:59.940981 containerd[1453]: time="2025-05-07T23:59:59.940854542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:59:59.941491 containerd[1453]: time="2025-05-07T23:59:59.941323934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:59:59.941491 containerd[1453]: time="2025-05-07T23:59:59.941344734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:59:59.941491 containerd[1453]: time="2025-05-07T23:59:59.941446372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:59:59.967522 systemd[1]: Started cri-containerd-d266e79968727a00824a45cffd8ca72e67998422514c8a663b0bc00a2c6fb078.scope - libcontainer container d266e79968727a00824a45cffd8ca72e67998422514c8a663b0bc00a2c6fb078. 
May 7 23:59:59.968479 containerd[1453]: time="2025-05-07T23:59:59.968429065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vqvc7,Uid:cf6c64c8-442b-4cdb-aa95-06e3a60a3a7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"028d83fd925dff321f8b8fbe06a404b7169e97697f9b0a75cb0361501ac21dad\"" May 7 23:59:59.971295 containerd[1453]: time="2025-05-07T23:59:59.971000064Z" level=info msg="CreateContainer within sandbox \"028d83fd925dff321f8b8fbe06a404b7169e97697f9b0a75cb0361501ac21dad\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 7 23:59:59.984657 containerd[1453]: time="2025-05-07T23:59:59.984516610Z" level=info msg="CreateContainer within sandbox \"028d83fd925dff321f8b8fbe06a404b7169e97697f9b0a75cb0361501ac21dad\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e1ec43073a7d39042cba7f4892bf829c16f9932aa6002f7c03e1023f601db0e4\"" May 7 23:59:59.988637 containerd[1453]: time="2025-05-07T23:59:59.988526066Z" level=info msg="StartContainer for \"e1ec43073a7d39042cba7f4892bf829c16f9932aa6002f7c03e1023f601db0e4\"" May 7 23:59:59.999562 containerd[1453]: time="2025-05-07T23:59:59.999501413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-84lvs,Uid:2a1d1d83-bccf-4511-9e39-3c2e49cae2bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"d266e79968727a00824a45cffd8ca72e67998422514c8a663b0bc00a2c6fb078\"" May 8 00:00:00.023584 systemd[1]: Started cri-containerd-e1ec43073a7d39042cba7f4892bf829c16f9932aa6002f7c03e1023f601db0e4.scope - libcontainer container e1ec43073a7d39042cba7f4892bf829c16f9932aa6002f7c03e1023f601db0e4. May 8 00:00:00.025383 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. May 8 00:00:00.036723 systemd[1]: logrotate.service: Deactivated successfully. 
May 8 00:00:00.055639 containerd[1453]: time="2025-05-08T00:00:00.055587969Z" level=info msg="StartContainer for \"e1ec43073a7d39042cba7f4892bf829c16f9932aa6002f7c03e1023f601db0e4\" returns successfully" May 8 00:00:00.456419 kubelet[2550]: I0508 00:00:00.455897 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vqvc7" podStartSLOduration=2.455878076 podStartE2EDuration="2.455878076s" podCreationTimestamp="2025-05-07 23:59:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:00:00.45563632 +0000 UTC m=+8.117963649" watchObservedRunningTime="2025-05-08 00:00:00.455878076 +0000 UTC m=+8.118205365" May 8 00:00:00.774374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount810436474.mount: Deactivated successfully. May 8 00:00:01.036483 containerd[1453]: time="2025-05-08T00:00:01.036022788Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:00:01.036483 containerd[1453]: time="2025-05-08T00:00:01.036393383Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 8 00:00:01.037196 containerd[1453]: time="2025-05-08T00:00:01.037165532Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:00:01.038798 containerd[1453]: time="2025-05-08T00:00:01.038756589Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag 
\"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.214085807s" May 8 00:00:01.038983 containerd[1453]: time="2025-05-08T00:00:01.038800508Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 8 00:00:01.042832 containerd[1453]: time="2025-05-08T00:00:01.042662453Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 8 00:00:01.044404 containerd[1453]: time="2025-05-08T00:00:01.043799797Z" level=info msg="CreateContainer within sandbox \"88fd4e6d19c8a61c343f2410d38563d4630f0d1441836388a7eb9a748d07e060\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 8 00:00:01.061129 containerd[1453]: time="2025-05-08T00:00:01.061066351Z" level=info msg="CreateContainer within sandbox \"88fd4e6d19c8a61c343f2410d38563d4630f0d1441836388a7eb9a748d07e060\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0d9dbd4560a8e45e1d9a78377450aa2a47497b0fd951529eb1484efaf615244f\"" May 8 00:00:01.072845 containerd[1453]: time="2025-05-08T00:00:01.072559747Z" level=info msg="StartContainer for \"0d9dbd4560a8e45e1d9a78377450aa2a47497b0fd951529eb1484efaf615244f\"" May 8 00:00:01.096577 systemd[1]: Started cri-containerd-0d9dbd4560a8e45e1d9a78377450aa2a47497b0fd951529eb1484efaf615244f.scope - libcontainer container 0d9dbd4560a8e45e1d9a78377450aa2a47497b0fd951529eb1484efaf615244f. 
May 8 00:00:01.124981 containerd[1453]: time="2025-05-08T00:00:01.124929960Z" level=info msg="StartContainer for \"0d9dbd4560a8e45e1d9a78377450aa2a47497b0fd951529eb1484efaf615244f\" returns successfully" May 8 00:00:01.491593 kubelet[2550]: I0508 00:00:01.491201 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-rr4jn" podStartSLOduration=1.274083776 podStartE2EDuration="2.491183579s" podCreationTimestamp="2025-05-07 23:59:59 +0000 UTC" firstStartedPulling="2025-05-07 23:59:59.823924994 +0000 UTC m=+7.486252323" lastFinishedPulling="2025-05-08 00:00:01.041024797 +0000 UTC m=+8.703352126" observedRunningTime="2025-05-08 00:00:01.49115142 +0000 UTC m=+9.153478709" watchObservedRunningTime="2025-05-08 00:00:01.491183579 +0000 UTC m=+9.153510908" May 8 00:00:04.127941 update_engine[1443]: I20250508 00:00:04.127878 1443 update_attempter.cc:509] Updating boot flags... May 8 00:00:04.166362 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2984) May 8 00:00:04.213306 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2982) May 8 00:00:04.243383 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2982) May 8 00:00:07.131671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2076891255.mount: Deactivated successfully. 
May 8 00:00:08.412776 containerd[1453]: time="2025-05-08T00:00:08.412722226Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:00:08.413744 containerd[1453]: time="2025-05-08T00:00:08.413522738Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 8 00:00:08.414383 containerd[1453]: time="2025-05-08T00:00:08.414345609Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:00:08.416361 containerd[1453]: time="2025-05-08T00:00:08.416321469Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.373497339s" May 8 00:00:08.416449 containerd[1453]: time="2025-05-08T00:00:08.416363229Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 8 00:00:08.425281 containerd[1453]: time="2025-05-08T00:00:08.425116501Z" level=info msg="CreateContainer within sandbox \"d266e79968727a00824a45cffd8ca72e67998422514c8a663b0bc00a2c6fb078\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 00:00:08.454551 containerd[1453]: time="2025-05-08T00:00:08.454502845Z" level=info msg="CreateContainer within sandbox 
\"d266e79968727a00824a45cffd8ca72e67998422514c8a663b0bc00a2c6fb078\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b151f1c39fe74aa1866251c20f52d1ae54e20a6958a10146e36db9973c2fa756\"" May 8 00:00:08.455879 containerd[1453]: time="2025-05-08T00:00:08.455561714Z" level=info msg="StartContainer for \"b151f1c39fe74aa1866251c20f52d1ae54e20a6958a10146e36db9973c2fa756\"" May 8 00:00:08.484746 systemd[1]: Started cri-containerd-b151f1c39fe74aa1866251c20f52d1ae54e20a6958a10146e36db9973c2fa756.scope - libcontainer container b151f1c39fe74aa1866251c20f52d1ae54e20a6958a10146e36db9973c2fa756. May 8 00:00:08.515745 containerd[1453]: time="2025-05-08T00:00:08.513991486Z" level=info msg="StartContainer for \"b151f1c39fe74aa1866251c20f52d1ae54e20a6958a10146e36db9973c2fa756\" returns successfully" May 8 00:00:08.578809 systemd[1]: cri-containerd-b151f1c39fe74aa1866251c20f52d1ae54e20a6958a10146e36db9973c2fa756.scope: Deactivated successfully. May 8 00:00:08.724221 containerd[1453]: time="2025-05-08T00:00:08.714408067Z" level=info msg="shim disconnected" id=b151f1c39fe74aa1866251c20f52d1ae54e20a6958a10146e36db9973c2fa756 namespace=k8s.io May 8 00:00:08.724221 containerd[1453]: time="2025-05-08T00:00:08.724134769Z" level=warning msg="cleaning up after shim disconnected" id=b151f1c39fe74aa1866251c20f52d1ae54e20a6958a10146e36db9973c2fa756 namespace=k8s.io May 8 00:00:08.724221 containerd[1453]: time="2025-05-08T00:00:08.724154409Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:00:09.448130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b151f1c39fe74aa1866251c20f52d1ae54e20a6958a10146e36db9973c2fa756-rootfs.mount: Deactivated successfully. 
May 8 00:00:09.497445 containerd[1453]: time="2025-05-08T00:00:09.497398087Z" level=info msg="CreateContainer within sandbox \"d266e79968727a00824a45cffd8ca72e67998422514c8a663b0bc00a2c6fb078\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 00:00:09.514973 containerd[1453]: time="2025-05-08T00:00:09.514903958Z" level=info msg="CreateContainer within sandbox \"d266e79968727a00824a45cffd8ca72e67998422514c8a663b0bc00a2c6fb078\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3889ad8c358f4ec911e9c0844efa6c31cd5b8c00ee697301fe0e7297893cebf3\"" May 8 00:00:09.515746 containerd[1453]: time="2025-05-08T00:00:09.515705191Z" level=info msg="StartContainer for \"3889ad8c358f4ec911e9c0844efa6c31cd5b8c00ee697301fe0e7297893cebf3\"" May 8 00:00:09.547507 systemd[1]: Started cri-containerd-3889ad8c358f4ec911e9c0844efa6c31cd5b8c00ee697301fe0e7297893cebf3.scope - libcontainer container 3889ad8c358f4ec911e9c0844efa6c31cd5b8c00ee697301fe0e7297893cebf3. May 8 00:00:09.568835 containerd[1453]: time="2025-05-08T00:00:09.568760361Z" level=info msg="StartContainer for \"3889ad8c358f4ec911e9c0844efa6c31cd5b8c00ee697301fe0e7297893cebf3\" returns successfully" May 8 00:00:09.582575 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:00:09.583066 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:00:09.583371 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 8 00:00:09.589670 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:00:09.589865 systemd[1]: cri-containerd-3889ad8c358f4ec911e9c0844efa6c31cd5b8c00ee697301fe0e7297893cebf3.scope: Deactivated successfully. May 8 00:00:09.604354 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 8 00:00:09.614112 containerd[1453]: time="2025-05-08T00:00:09.614049085Z" level=info msg="shim disconnected" id=3889ad8c358f4ec911e9c0844efa6c31cd5b8c00ee697301fe0e7297893cebf3 namespace=k8s.io May 8 00:00:09.614677 containerd[1453]: time="2025-05-08T00:00:09.614476481Z" level=warning msg="cleaning up after shim disconnected" id=3889ad8c358f4ec911e9c0844efa6c31cd5b8c00ee697301fe0e7297893cebf3 namespace=k8s.io May 8 00:00:09.614677 containerd[1453]: time="2025-05-08T00:00:09.614517601Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:00:10.448214 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3889ad8c358f4ec911e9c0844efa6c31cd5b8c00ee697301fe0e7297893cebf3-rootfs.mount: Deactivated successfully. May 8 00:00:10.501101 containerd[1453]: time="2025-05-08T00:00:10.501058573Z" level=info msg="CreateContainer within sandbox \"d266e79968727a00824a45cffd8ca72e67998422514c8a663b0bc00a2c6fb078\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 00:00:10.516309 containerd[1453]: time="2025-05-08T00:00:10.516262434Z" level=info msg="CreateContainer within sandbox \"d266e79968727a00824a45cffd8ca72e67998422514c8a663b0bc00a2c6fb078\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8195e5e4219fd886a05c24735a1c5d1a07dac803c5d923257a5c8810529df597\"" May 8 00:00:10.517712 containerd[1453]: time="2025-05-08T00:00:10.516796829Z" level=info msg="StartContainer for \"8195e5e4219fd886a05c24735a1c5d1a07dac803c5d923257a5c8810529df597\"" May 8 00:00:10.551441 systemd[1]: Started cri-containerd-8195e5e4219fd886a05c24735a1c5d1a07dac803c5d923257a5c8810529df597.scope - libcontainer container 8195e5e4219fd886a05c24735a1c5d1a07dac803c5d923257a5c8810529df597. 
May 8 00:00:10.582080 containerd[1453]: time="2025-05-08T00:00:10.581468115Z" level=info msg="StartContainer for \"8195e5e4219fd886a05c24735a1c5d1a07dac803c5d923257a5c8810529df597\" returns successfully" May 8 00:00:10.596977 systemd[1]: cri-containerd-8195e5e4219fd886a05c24735a1c5d1a07dac803c5d923257a5c8810529df597.scope: Deactivated successfully. May 8 00:00:10.617015 containerd[1453]: time="2025-05-08T00:00:10.616817190Z" level=info msg="shim disconnected" id=8195e5e4219fd886a05c24735a1c5d1a07dac803c5d923257a5c8810529df597 namespace=k8s.io May 8 00:00:10.617015 containerd[1453]: time="2025-05-08T00:00:10.616874830Z" level=warning msg="cleaning up after shim disconnected" id=8195e5e4219fd886a05c24735a1c5d1a07dac803c5d923257a5c8810529df597 namespace=k8s.io May 8 00:00:10.617015 containerd[1453]: time="2025-05-08T00:00:10.616882790Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:00:11.448226 systemd[1]: run-containerd-runc-k8s.io-8195e5e4219fd886a05c24735a1c5d1a07dac803c5d923257a5c8810529df597-runc.0Bv1Cw.mount: Deactivated successfully. May 8 00:00:11.448356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8195e5e4219fd886a05c24735a1c5d1a07dac803c5d923257a5c8810529df597-rootfs.mount: Deactivated successfully. 
May 8 00:00:11.505621 containerd[1453]: time="2025-05-08T00:00:11.505566832Z" level=info msg="CreateContainer within sandbox \"d266e79968727a00824a45cffd8ca72e67998422514c8a663b0bc00a2c6fb078\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 00:00:11.523410 containerd[1453]: time="2025-05-08T00:00:11.523292236Z" level=info msg="CreateContainer within sandbox \"d266e79968727a00824a45cffd8ca72e67998422514c8a663b0bc00a2c6fb078\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c3260e6350401ee5a4601d75230ea605ae62d1e8b2660dea20baa3e43422d89f\"" May 8 00:00:11.524297 containerd[1453]: time="2025-05-08T00:00:11.524048669Z" level=info msg="StartContainer for \"c3260e6350401ee5a4601d75230ea605ae62d1e8b2660dea20baa3e43422d89f\"" May 8 00:00:11.524451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3137336643.mount: Deactivated successfully. May 8 00:00:11.551437 systemd[1]: Started cri-containerd-c3260e6350401ee5a4601d75230ea605ae62d1e8b2660dea20baa3e43422d89f.scope - libcontainer container c3260e6350401ee5a4601d75230ea605ae62d1e8b2660dea20baa3e43422d89f. May 8 00:00:11.569781 systemd[1]: cri-containerd-c3260e6350401ee5a4601d75230ea605ae62d1e8b2660dea20baa3e43422d89f.scope: Deactivated successfully. 
May 8 00:00:11.572125 containerd[1453]: time="2025-05-08T00:00:11.571961529Z" level=info msg="StartContainer for \"c3260e6350401ee5a4601d75230ea605ae62d1e8b2660dea20baa3e43422d89f\" returns successfully" May 8 00:00:11.589662 containerd[1453]: time="2025-05-08T00:00:11.589605974Z" level=info msg="shim disconnected" id=c3260e6350401ee5a4601d75230ea605ae62d1e8b2660dea20baa3e43422d89f namespace=k8s.io May 8 00:00:11.589662 containerd[1453]: time="2025-05-08T00:00:11.589658973Z" level=warning msg="cleaning up after shim disconnected" id=c3260e6350401ee5a4601d75230ea605ae62d1e8b2660dea20baa3e43422d89f namespace=k8s.io May 8 00:00:11.589662 containerd[1453]: time="2025-05-08T00:00:11.589667533Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:00:12.448365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3260e6350401ee5a4601d75230ea605ae62d1e8b2660dea20baa3e43422d89f-rootfs.mount: Deactivated successfully. May 8 00:00:12.517417 containerd[1453]: time="2025-05-08T00:00:12.517379299Z" level=info msg="CreateContainer within sandbox \"d266e79968727a00824a45cffd8ca72e67998422514c8a663b0bc00a2c6fb078\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 00:00:12.533139 containerd[1453]: time="2025-05-08T00:00:12.533082407Z" level=info msg="CreateContainer within sandbox \"d266e79968727a00824a45cffd8ca72e67998422514c8a663b0bc00a2c6fb078\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"12fb5df3978f26ad537d0efb9d22797deeac3886a8dae4bd62d624e166bac8d5\"" May 8 00:00:12.533918 containerd[1453]: time="2025-05-08T00:00:12.533649443Z" level=info msg="StartContainer for \"12fb5df3978f26ad537d0efb9d22797deeac3886a8dae4bd62d624e166bac8d5\"" May 8 00:00:12.568473 systemd[1]: Started cri-containerd-12fb5df3978f26ad537d0efb9d22797deeac3886a8dae4bd62d624e166bac8d5.scope - libcontainer container 12fb5df3978f26ad537d0efb9d22797deeac3886a8dae4bd62d624e166bac8d5. 
May 8 00:00:12.593462 containerd[1453]: time="2025-05-08T00:00:12.593411860Z" level=info msg="StartContainer for \"12fb5df3978f26ad537d0efb9d22797deeac3886a8dae4bd62d624e166bac8d5\" returns successfully" May 8 00:00:12.697573 kubelet[2550]: I0508 00:00:12.697543 2550 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 8 00:00:12.774336 systemd[1]: Created slice kubepods-burstable-podc07954f9_727f_46f5_90a0_e0861d57b199.slice - libcontainer container kubepods-burstable-podc07954f9_727f_46f5_90a0_e0861d57b199.slice. May 8 00:00:12.779651 systemd[1]: Created slice kubepods-burstable-podee270671_e312_430e_830c_47b6a16ab1b4.slice - libcontainer container kubepods-burstable-podee270671_e312_430e_830c_47b6a16ab1b4.slice. May 8 00:00:12.844010 kubelet[2550]: I0508 00:00:12.843858 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee270671-e312-430e-830c-47b6a16ab1b4-config-volume\") pod \"coredns-668d6bf9bc-tlpxc\" (UID: \"ee270671-e312-430e-830c-47b6a16ab1b4\") " pod="kube-system/coredns-668d6bf9bc-tlpxc" May 8 00:00:12.844010 kubelet[2550]: I0508 00:00:12.843905 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2ckn\" (UniqueName: \"kubernetes.io/projected/ee270671-e312-430e-830c-47b6a16ab1b4-kube-api-access-c2ckn\") pod \"coredns-668d6bf9bc-tlpxc\" (UID: \"ee270671-e312-430e-830c-47b6a16ab1b4\") " pod="kube-system/coredns-668d6bf9bc-tlpxc" May 8 00:00:12.844010 kubelet[2550]: I0508 00:00:12.843927 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52v5x\" (UniqueName: \"kubernetes.io/projected/c07954f9-727f-46f5-90a0-e0861d57b199-kube-api-access-52v5x\") pod \"coredns-668d6bf9bc-8tf2g\" (UID: \"c07954f9-727f-46f5-90a0-e0861d57b199\") " pod="kube-system/coredns-668d6bf9bc-8tf2g" May 8 00:00:12.844010 
kubelet[2550]: I0508 00:00:12.843947 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c07954f9-727f-46f5-90a0-e0861d57b199-config-volume\") pod \"coredns-668d6bf9bc-8tf2g\" (UID: \"c07954f9-727f-46f5-90a0-e0861d57b199\") " pod="kube-system/coredns-668d6bf9bc-8tf2g" May 8 00:00:13.079416 containerd[1453]: time="2025-05-08T00:00:13.079211526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8tf2g,Uid:c07954f9-727f-46f5-90a0-e0861d57b199,Namespace:kube-system,Attempt:0,}" May 8 00:00:13.082959 containerd[1453]: time="2025-05-08T00:00:13.082921896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tlpxc,Uid:ee270671-e312-430e-830c-47b6a16ab1b4,Namespace:kube-system,Attempt:0,}" May 8 00:00:13.533390 kubelet[2550]: I0508 00:00:13.533195 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-84lvs" podStartSLOduration=7.113128718 podStartE2EDuration="15.533178271s" podCreationTimestamp="2025-05-07 23:59:58 +0000 UTC" firstStartedPulling="2025-05-08 00:00:00.001067508 +0000 UTC m=+7.663394837" lastFinishedPulling="2025-05-08 00:00:08.421117061 +0000 UTC m=+16.083444390" observedRunningTime="2025-05-08 00:00:13.532713595 +0000 UTC m=+21.195040964" watchObservedRunningTime="2025-05-08 00:00:13.533178271 +0000 UTC m=+21.195505560" May 8 00:00:14.771626 systemd-networkd[1389]: cilium_host: Link UP May 8 00:00:14.771747 systemd-networkd[1389]: cilium_net: Link UP May 8 00:00:14.771878 systemd-networkd[1389]: cilium_net: Gained carrier May 8 00:00:14.771999 systemd-networkd[1389]: cilium_host: Gained carrier May 8 00:00:14.853486 systemd-networkd[1389]: cilium_vxlan: Link UP May 8 00:00:14.853493 systemd-networkd[1389]: cilium_vxlan: Gained carrier May 8 00:00:15.140402 systemd-networkd[1389]: cilium_net: Gained IPv6LL May 8 00:00:15.154433 kernel: NET: Registered PF_ALG 
protocol family May 8 00:00:15.475444 systemd-networkd[1389]: cilium_host: Gained IPv6LL May 8 00:00:15.713059 systemd-networkd[1389]: lxc_health: Link UP May 8 00:00:15.718138 systemd-networkd[1389]: lxc_health: Gained carrier May 8 00:00:16.051387 systemd-networkd[1389]: cilium_vxlan: Gained IPv6LL May 8 00:00:16.203348 kernel: eth0: renamed from tmp8db30 May 8 00:00:16.212002 systemd-networkd[1389]: lxca995e5d96d17: Link UP May 8 00:00:16.212661 systemd-networkd[1389]: lxc930d0d9a2fde: Link UP May 8 00:00:16.218644 systemd-networkd[1389]: lxca995e5d96d17: Gained carrier May 8 00:00:16.219291 kernel: eth0: renamed from tmpc73a6 May 8 00:00:16.224830 systemd-networkd[1389]: lxc930d0d9a2fde: Gained carrier May 8 00:00:17.651522 systemd-networkd[1389]: lxc_health: Gained IPv6LL May 8 00:00:17.971447 systemd-networkd[1389]: lxca995e5d96d17: Gained IPv6LL May 8 00:00:18.227440 systemd-networkd[1389]: lxc930d0d9a2fde: Gained IPv6LL May 8 00:00:19.649114 containerd[1453]: time="2025-05-08T00:00:19.648985162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:00:19.649114 containerd[1453]: time="2025-05-08T00:00:19.649093042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:00:19.649632 containerd[1453]: time="2025-05-08T00:00:19.649485759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:00:19.649632 containerd[1453]: time="2025-05-08T00:00:19.649593118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:00:19.667431 systemd[1]: Started cri-containerd-c73a6005aeedc6762f728ed92cb1f775cf5799cf127812a78653b86f0a51b8de.scope - libcontainer container c73a6005aeedc6762f728ed92cb1f775cf5799cf127812a78653b86f0a51b8de. 
May 8 00:00:19.669742 containerd[1453]: time="2025-05-08T00:00:19.669670671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:00:19.669742 containerd[1453]: time="2025-05-08T00:00:19.669721271Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:00:19.670031 containerd[1453]: time="2025-05-08T00:00:19.669736151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:00:19.670164 containerd[1453]: time="2025-05-08T00:00:19.670094348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:00:19.692491 systemd-resolved[1321]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:00:19.697449 systemd[1]: Started cri-containerd-8db30a71a1a457a956268984b02498c92b9fd8768c230f0bdd28fe4c6f78aca4.scope - libcontainer container 8db30a71a1a457a956268984b02498c92b9fd8768c230f0bdd28fe4c6f78aca4. 
May 8 00:00:19.714198 systemd-resolved[1321]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:00:19.719602 containerd[1453]: time="2025-05-08T00:00:19.719565034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8tf2g,Uid:c07954f9-727f-46f5-90a0-e0861d57b199,Namespace:kube-system,Attempt:0,} returns sandbox id \"c73a6005aeedc6762f728ed92cb1f775cf5799cf127812a78653b86f0a51b8de\"" May 8 00:00:19.724468 containerd[1453]: time="2025-05-08T00:00:19.724433844Z" level=info msg="CreateContainer within sandbox \"c73a6005aeedc6762f728ed92cb1f775cf5799cf127812a78653b86f0a51b8de\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:00:19.737987 containerd[1453]: time="2025-05-08T00:00:19.737937678Z" level=info msg="CreateContainer within sandbox \"c73a6005aeedc6762f728ed92cb1f775cf5799cf127812a78653b86f0a51b8de\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c010149355e0d9f268309f0fd1cf7db236ede70604a3ceae259c4fd07d54e465\"" May 8 00:00:19.738204 containerd[1453]: time="2025-05-08T00:00:19.738171436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tlpxc,Uid:ee270671-e312-430e-830c-47b6a16ab1b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"8db30a71a1a457a956268984b02498c92b9fd8768c230f0bdd28fe4c6f78aca4\"" May 8 00:00:19.738625 containerd[1453]: time="2025-05-08T00:00:19.738598794Z" level=info msg="StartContainer for \"c010149355e0d9f268309f0fd1cf7db236ede70604a3ceae259c4fd07d54e465\"" May 8 00:00:19.740684 containerd[1453]: time="2025-05-08T00:00:19.740580341Z" level=info msg="CreateContainer within sandbox \"8db30a71a1a457a956268984b02498c92b9fd8768c230f0bdd28fe4c6f78aca4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:00:19.750310 containerd[1453]: time="2025-05-08T00:00:19.750254400Z" level=info msg="CreateContainer within sandbox 
\"8db30a71a1a457a956268984b02498c92b9fd8768c230f0bdd28fe4c6f78aca4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d9c95fc20ce0419b5c96d1baf1fb3e07face0c011cc733b067ac545b9113625f\"" May 8 00:00:19.750854 containerd[1453]: time="2025-05-08T00:00:19.750830116Z" level=info msg="StartContainer for \"d9c95fc20ce0419b5c96d1baf1fb3e07face0c011cc733b067ac545b9113625f\"" May 8 00:00:19.763427 systemd[1]: Started cri-containerd-c010149355e0d9f268309f0fd1cf7db236ede70604a3ceae259c4fd07d54e465.scope - libcontainer container c010149355e0d9f268309f0fd1cf7db236ede70604a3ceae259c4fd07d54e465. May 8 00:00:19.782469 systemd[1]: Started cri-containerd-d9c95fc20ce0419b5c96d1baf1fb3e07face0c011cc733b067ac545b9113625f.scope - libcontainer container d9c95fc20ce0419b5c96d1baf1fb3e07face0c011cc733b067ac545b9113625f. May 8 00:00:19.806191 containerd[1453]: time="2025-05-08T00:00:19.806152085Z" level=info msg="StartContainer for \"c010149355e0d9f268309f0fd1cf7db236ede70604a3ceae259c4fd07d54e465\" returns successfully" May 8 00:00:19.815478 containerd[1453]: time="2025-05-08T00:00:19.815436506Z" level=info msg="StartContainer for \"d9c95fc20ce0419b5c96d1baf1fb3e07face0c011cc733b067ac545b9113625f\" returns successfully" May 8 00:00:20.111893 systemd[1]: Started sshd@7-10.0.0.121:22-10.0.0.1:41818.service - OpenSSH per-connection server daemon (10.0.0.1:41818). May 8 00:00:20.161793 sshd[3954]: Accepted publickey for core from 10.0.0.1 port 41818 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 8 00:00:20.163086 sshd-session[3954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:00:20.166726 systemd-logind[1439]: New session 8 of user core. May 8 00:00:20.176407 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 8 00:00:20.297921 sshd[3956]: Connection closed by 10.0.0.1 port 41818 May 8 00:00:20.298226 sshd-session[3954]: pam_unix(sshd:session): session closed for user core May 8 00:00:20.301609 systemd[1]: sshd@7-10.0.0.121:22-10.0.0.1:41818.service: Deactivated successfully. May 8 00:00:20.303385 systemd[1]: session-8.scope: Deactivated successfully. May 8 00:00:20.304846 systemd-logind[1439]: Session 8 logged out. Waiting for processes to exit. May 8 00:00:20.305680 systemd-logind[1439]: Removed session 8. May 8 00:00:20.540824 kubelet[2550]: I0508 00:00:20.540761 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tlpxc" podStartSLOduration=21.540744905 podStartE2EDuration="21.540744905s" podCreationTimestamp="2025-05-07 23:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:00:20.540382747 +0000 UTC m=+28.202710076" watchObservedRunningTime="2025-05-08 00:00:20.540744905 +0000 UTC m=+28.203072234" May 8 00:00:20.552555 kubelet[2550]: I0508 00:00:20.552477 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8tf2g" podStartSLOduration=21.552452673 podStartE2EDuration="21.552452673s" podCreationTimestamp="2025-05-07 23:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:00:20.550534925 +0000 UTC m=+28.212862294" watchObservedRunningTime="2025-05-08 00:00:20.552452673 +0000 UTC m=+28.214780002" May 8 00:00:25.312839 systemd[1]: Started sshd@8-10.0.0.121:22-10.0.0.1:43806.service - OpenSSH per-connection server daemon (10.0.0.1:43806). 
May 8 00:00:25.360126 sshd[3979]: Accepted publickey for core from 10.0.0.1 port 43806 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 8 00:00:25.361372 sshd-session[3979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:00:25.364998 systemd-logind[1439]: New session 9 of user core. May 8 00:00:25.374410 systemd[1]: Started session-9.scope - Session 9 of User core. May 8 00:00:25.497418 sshd[3981]: Connection closed by 10.0.0.1 port 43806 May 8 00:00:25.497785 sshd-session[3979]: pam_unix(sshd:session): session closed for user core May 8 00:00:25.501048 systemd[1]: sshd@8-10.0.0.121:22-10.0.0.1:43806.service: Deactivated successfully. May 8 00:00:25.502770 systemd[1]: session-9.scope: Deactivated successfully. May 8 00:00:25.503437 systemd-logind[1439]: Session 9 logged out. Waiting for processes to exit. May 8 00:00:25.504304 systemd-logind[1439]: Removed session 9. May 8 00:00:30.508937 systemd[1]: Started sshd@9-10.0.0.121:22-10.0.0.1:43818.service - OpenSSH per-connection server daemon (10.0.0.1:43818). May 8 00:00:30.557625 sshd[4000]: Accepted publickey for core from 10.0.0.1 port 43818 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 8 00:00:30.558941 sshd-session[4000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:00:30.562925 systemd-logind[1439]: New session 10 of user core. May 8 00:00:30.573474 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 00:00:30.707454 sshd[4002]: Connection closed by 10.0.0.1 port 43818 May 8 00:00:30.708071 sshd-session[4000]: pam_unix(sshd:session): session closed for user core May 8 00:00:30.712057 systemd[1]: sshd@9-10.0.0.121:22-10.0.0.1:43818.service: Deactivated successfully. May 8 00:00:30.714838 systemd[1]: session-10.scope: Deactivated successfully. May 8 00:00:30.715451 systemd-logind[1439]: Session 10 logged out. Waiting for processes to exit. 
May 8 00:00:30.716194 systemd-logind[1439]: Removed session 10.
May 8 00:00:35.720645 systemd[1]: Started sshd@10-10.0.0.121:22-10.0.0.1:57578.service - OpenSSH per-connection server daemon (10.0.0.1:57578).
May 8 00:00:35.771066 sshd[4016]: Accepted publickey for core from 10.0.0.1 port 57578 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 8 00:00:35.772175 sshd-session[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:00:35.775815 systemd-logind[1439]: New session 11 of user core.
May 8 00:00:35.782478 systemd[1]: Started session-11.scope - Session 11 of User core.
May 8 00:00:35.903858 sshd[4018]: Connection closed by 10.0.0.1 port 57578
May 8 00:00:35.903708 sshd-session[4016]: pam_unix(sshd:session): session closed for user core
May 8 00:00:35.918355 systemd[1]: sshd@10-10.0.0.121:22-10.0.0.1:57578.service: Deactivated successfully.
May 8 00:00:35.920977 systemd[1]: session-11.scope: Deactivated successfully.
May 8 00:00:35.922308 systemd-logind[1439]: Session 11 logged out. Waiting for processes to exit.
May 8 00:00:35.935531 systemd[1]: Started sshd@11-10.0.0.121:22-10.0.0.1:57584.service - OpenSSH per-connection server daemon (10.0.0.1:57584).
May 8 00:00:35.938378 systemd-logind[1439]: Removed session 11.
May 8 00:00:35.978005 sshd[4032]: Accepted publickey for core from 10.0.0.1 port 57584 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 8 00:00:35.978004 sshd-session[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:00:35.982452 systemd-logind[1439]: New session 12 of user core.
May 8 00:00:35.993422 systemd[1]: Started session-12.scope - Session 12 of User core.
May 8 00:00:36.152696 sshd[4035]: Connection closed by 10.0.0.1 port 57584
May 8 00:00:36.155565 sshd-session[4032]: pam_unix(sshd:session): session closed for user core
May 8 00:00:36.167394 systemd-logind[1439]: Session 12 logged out. Waiting for processes to exit.
May 8 00:00:36.168150 systemd[1]: sshd@11-10.0.0.121:22-10.0.0.1:57584.service: Deactivated successfully.
May 8 00:00:36.169647 systemd[1]: session-12.scope: Deactivated successfully.
May 8 00:00:36.172393 systemd[1]: Started sshd@12-10.0.0.121:22-10.0.0.1:57590.service - OpenSSH per-connection server daemon (10.0.0.1:57590).
May 8 00:00:36.173737 systemd-logind[1439]: Removed session 12.
May 8 00:00:36.228240 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 57590 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 8 00:00:36.229428 sshd-session[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:00:36.233427 systemd-logind[1439]: New session 13 of user core.
May 8 00:00:36.245510 systemd[1]: Started session-13.scope - Session 13 of User core.
May 8 00:00:36.360237 sshd[4048]: Connection closed by 10.0.0.1 port 57590
May 8 00:00:36.361133 sshd-session[4045]: pam_unix(sshd:session): session closed for user core
May 8 00:00:36.365667 systemd[1]: sshd@12-10.0.0.121:22-10.0.0.1:57590.service: Deactivated successfully.
May 8 00:00:36.367485 systemd[1]: session-13.scope: Deactivated successfully.
May 8 00:00:36.368820 systemd-logind[1439]: Session 13 logged out. Waiting for processes to exit.
May 8 00:00:36.369608 systemd-logind[1439]: Removed session 13.
May 8 00:00:41.371770 systemd[1]: Started sshd@13-10.0.0.121:22-10.0.0.1:57598.service - OpenSSH per-connection server daemon (10.0.0.1:57598).
May 8 00:00:41.417450 sshd[4062]: Accepted publickey for core from 10.0.0.1 port 57598 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 8 00:00:41.418806 sshd-session[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:00:41.422461 systemd-logind[1439]: New session 14 of user core.
May 8 00:00:41.432450 systemd[1]: Started session-14.scope - Session 14 of User core.
May 8 00:00:41.544186 sshd[4064]: Connection closed by 10.0.0.1 port 57598
May 8 00:00:41.544539 sshd-session[4062]: pam_unix(sshd:session): session closed for user core
May 8 00:00:41.547810 systemd[1]: sshd@13-10.0.0.121:22-10.0.0.1:57598.service: Deactivated successfully.
May 8 00:00:41.550635 systemd[1]: session-14.scope: Deactivated successfully.
May 8 00:00:41.551644 systemd-logind[1439]: Session 14 logged out. Waiting for processes to exit.
May 8 00:00:41.552678 systemd-logind[1439]: Removed session 14.
May 8 00:00:46.556667 systemd[1]: Started sshd@14-10.0.0.121:22-10.0.0.1:48290.service - OpenSSH per-connection server daemon (10.0.0.1:48290).
May 8 00:00:46.601433 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 48290 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 8 00:00:46.602552 sshd-session[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:00:46.606176 systemd-logind[1439]: New session 15 of user core.
May 8 00:00:46.614421 systemd[1]: Started session-15.scope - Session 15 of User core.
May 8 00:00:46.724216 sshd[4080]: Connection closed by 10.0.0.1 port 48290
May 8 00:00:46.724698 sshd-session[4078]: pam_unix(sshd:session): session closed for user core
May 8 00:00:46.736376 systemd[1]: sshd@14-10.0.0.121:22-10.0.0.1:48290.service: Deactivated successfully.
May 8 00:00:46.738905 systemd[1]: session-15.scope: Deactivated successfully.
May 8 00:00:46.739570 systemd-logind[1439]: Session 15 logged out. Waiting for processes to exit.
May 8 00:00:46.749965 systemd[1]: Started sshd@15-10.0.0.121:22-10.0.0.1:48300.service - OpenSSH per-connection server daemon (10.0.0.1:48300).
May 8 00:00:46.751032 systemd-logind[1439]: Removed session 15.
May 8 00:00:46.790599 sshd[4092]: Accepted publickey for core from 10.0.0.1 port 48300 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 8 00:00:46.791931 sshd-session[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:00:46.795920 systemd-logind[1439]: New session 16 of user core.
May 8 00:00:46.804389 systemd[1]: Started session-16.scope - Session 16 of User core.
May 8 00:00:47.005404 sshd[4095]: Connection closed by 10.0.0.1 port 48300
May 8 00:00:47.006485 sshd-session[4092]: pam_unix(sshd:session): session closed for user core
May 8 00:00:47.023398 systemd[1]: sshd@15-10.0.0.121:22-10.0.0.1:48300.service: Deactivated successfully.
May 8 00:00:47.024881 systemd[1]: session-16.scope: Deactivated successfully.
May 8 00:00:47.025671 systemd-logind[1439]: Session 16 logged out. Waiting for processes to exit.
May 8 00:00:47.027522 systemd[1]: Started sshd@16-10.0.0.121:22-10.0.0.1:48304.service - OpenSSH per-connection server daemon (10.0.0.1:48304).
May 8 00:00:47.028764 systemd-logind[1439]: Removed session 16.
May 8 00:00:47.077837 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 48304 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 8 00:00:47.079073 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:00:47.083260 systemd-logind[1439]: New session 17 of user core.
May 8 00:00:47.094387 systemd[1]: Started session-17.scope - Session 17 of User core.
May 8 00:00:47.832362 sshd[4109]: Connection closed by 10.0.0.1 port 48304
May 8 00:00:47.835251 sshd-session[4106]: pam_unix(sshd:session): session closed for user core
May 8 00:00:47.844533 systemd[1]: sshd@16-10.0.0.121:22-10.0.0.1:48304.service: Deactivated successfully.
May 8 00:00:47.845185 systemd-logind[1439]: Session 17 logged out. Waiting for processes to exit.
May 8 00:00:47.845979 systemd[1]: session-17.scope: Deactivated successfully.
May 8 00:00:47.856617 systemd[1]: Started sshd@17-10.0.0.121:22-10.0.0.1:48312.service - OpenSSH per-connection server daemon (10.0.0.1:48312).
May 8 00:00:47.860536 systemd-logind[1439]: Removed session 17.
May 8 00:00:47.900613 sshd[4128]: Accepted publickey for core from 10.0.0.1 port 48312 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 8 00:00:47.901878 sshd-session[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:00:47.905555 systemd-logind[1439]: New session 18 of user core.
May 8 00:00:47.915427 systemd[1]: Started session-18.scope - Session 18 of User core.
May 8 00:00:48.136373 sshd[4132]: Connection closed by 10.0.0.1 port 48312
May 8 00:00:48.137818 sshd-session[4128]: pam_unix(sshd:session): session closed for user core
May 8 00:00:48.150083 systemd[1]: Started sshd@18-10.0.0.121:22-10.0.0.1:48318.service - OpenSSH per-connection server daemon (10.0.0.1:48318).
May 8 00:00:48.150508 systemd[1]: sshd@17-10.0.0.121:22-10.0.0.1:48312.service: Deactivated successfully.
May 8 00:00:48.151831 systemd[1]: session-18.scope: Deactivated successfully.
May 8 00:00:48.153303 systemd-logind[1439]: Session 18 logged out. Waiting for processes to exit.
May 8 00:00:48.154404 systemd-logind[1439]: Removed session 18.
May 8 00:00:48.199118 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 48318 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 8 00:00:48.197410 sshd-session[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:00:48.204347 systemd-logind[1439]: New session 19 of user core.
May 8 00:00:48.210427 systemd[1]: Started session-19.scope - Session 19 of User core.
May 8 00:00:48.335472 sshd[4146]: Connection closed by 10.0.0.1 port 48318
May 8 00:00:48.335387 sshd-session[4141]: pam_unix(sshd:session): session closed for user core
May 8 00:00:48.339187 systemd[1]: sshd@18-10.0.0.121:22-10.0.0.1:48318.service: Deactivated successfully.
May 8 00:00:48.341061 systemd[1]: session-19.scope: Deactivated successfully.
May 8 00:00:48.341762 systemd-logind[1439]: Session 19 logged out. Waiting for processes to exit.
May 8 00:00:48.342579 systemd-logind[1439]: Removed session 19.
May 8 00:00:53.346503 systemd[1]: Started sshd@19-10.0.0.121:22-10.0.0.1:52786.service - OpenSSH per-connection server daemon (10.0.0.1:52786).
May 8 00:00:53.390916 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 52786 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 8 00:00:53.392071 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:00:53.395430 systemd-logind[1439]: New session 20 of user core.
May 8 00:00:53.403489 systemd[1]: Started session-20.scope - Session 20 of User core.
May 8 00:00:53.507408 sshd[4165]: Connection closed by 10.0.0.1 port 52786
May 8 00:00:53.507738 sshd-session[4163]: pam_unix(sshd:session): session closed for user core
May 8 00:00:53.511040 systemd[1]: sshd@19-10.0.0.121:22-10.0.0.1:52786.service: Deactivated successfully.
May 8 00:00:53.512686 systemd[1]: session-20.scope: Deactivated successfully.
May 8 00:00:53.513294 systemd-logind[1439]: Session 20 logged out. Waiting for processes to exit.
May 8 00:00:53.514019 systemd-logind[1439]: Removed session 20.
May 8 00:00:58.522594 systemd[1]: Started sshd@20-10.0.0.121:22-10.0.0.1:52794.service - OpenSSH per-connection server daemon (10.0.0.1:52794).
May 8 00:00:58.567341 sshd[4178]: Accepted publickey for core from 10.0.0.1 port 52794 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 8 00:00:58.568613 sshd-session[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:00:58.572404 systemd-logind[1439]: New session 21 of user core.
May 8 00:00:58.581481 systemd[1]: Started session-21.scope - Session 21 of User core.
May 8 00:00:58.689257 sshd[4180]: Connection closed by 10.0.0.1 port 52794
May 8 00:00:58.689027 sshd-session[4178]: pam_unix(sshd:session): session closed for user core
May 8 00:00:58.692801 systemd-logind[1439]: Session 21 logged out. Waiting for processes to exit.
May 8 00:00:58.693086 systemd[1]: sshd@20-10.0.0.121:22-10.0.0.1:52794.service: Deactivated successfully.
May 8 00:00:58.694750 systemd[1]: session-21.scope: Deactivated successfully.
May 8 00:00:58.695626 systemd-logind[1439]: Removed session 21.
May 8 00:01:03.702540 systemd[1]: Started sshd@21-10.0.0.121:22-10.0.0.1:49438.service - OpenSSH per-connection server daemon (10.0.0.1:49438).
May 8 00:01:03.748048 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 49438 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 8 00:01:03.749076 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:01:03.752697 systemd-logind[1439]: New session 22 of user core.
May 8 00:01:03.765415 systemd[1]: Started session-22.scope - Session 22 of User core.
May 8 00:01:03.872908 sshd[4197]: Connection closed by 10.0.0.1 port 49438
May 8 00:01:03.873223 sshd-session[4195]: pam_unix(sshd:session): session closed for user core
May 8 00:01:03.884540 systemd[1]: sshd@21-10.0.0.121:22-10.0.0.1:49438.service: Deactivated successfully.
May 8 00:01:03.886684 systemd[1]: session-22.scope: Deactivated successfully.
May 8 00:01:03.887828 systemd-logind[1439]: Session 22 logged out. Waiting for processes to exit.
May 8 00:01:03.889033 systemd[1]: Started sshd@22-10.0.0.121:22-10.0.0.1:49448.service - OpenSSH per-connection server daemon (10.0.0.1:49448).
May 8 00:01:03.889801 systemd-logind[1439]: Removed session 22.
May 8 00:01:03.934406 sshd[4209]: Accepted publickey for core from 10.0.0.1 port 49448 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 8 00:01:03.935527 sshd-session[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:01:03.939215 systemd-logind[1439]: New session 23 of user core.
May 8 00:01:03.953467 systemd[1]: Started session-23.scope - Session 23 of User core.
May 8 00:01:06.601649 containerd[1453]: time="2025-05-08T00:01:06.601507143Z" level=info msg="StopContainer for \"0d9dbd4560a8e45e1d9a78377450aa2a47497b0fd951529eb1484efaf615244f\" with timeout 30 (s)"
May 8 00:01:06.602183 containerd[1453]: time="2025-05-08T00:01:06.601858347Z" level=info msg="Stop container \"0d9dbd4560a8e45e1d9a78377450aa2a47497b0fd951529eb1484efaf615244f\" with signal terminated"
May 8 00:01:06.619632 systemd[1]: cri-containerd-0d9dbd4560a8e45e1d9a78377450aa2a47497b0fd951529eb1484efaf615244f.scope: Deactivated successfully.
May 8 00:01:06.647444 systemd[1]: run-containerd-runc-k8s.io-12fb5df3978f26ad537d0efb9d22797deeac3886a8dae4bd62d624e166bac8d5-runc.cs8TBQ.mount: Deactivated successfully.
May 8 00:01:06.651730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d9dbd4560a8e45e1d9a78377450aa2a47497b0fd951529eb1484efaf615244f-rootfs.mount: Deactivated successfully.
May 8 00:01:06.659826 containerd[1453]: time="2025-05-08T00:01:06.659756325Z" level=info msg="shim disconnected" id=0d9dbd4560a8e45e1d9a78377450aa2a47497b0fd951529eb1484efaf615244f namespace=k8s.io
May 8 00:01:06.659826 containerd[1453]: time="2025-05-08T00:01:06.659823406Z" level=warning msg="cleaning up after shim disconnected" id=0d9dbd4560a8e45e1d9a78377450aa2a47497b0fd951529eb1484efaf615244f namespace=k8s.io
May 8 00:01:06.659826 containerd[1453]: time="2025-05-08T00:01:06.659832846Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:01:06.665886 containerd[1453]: time="2025-05-08T00:01:06.665701188Z" level=info msg="StopContainer for \"12fb5df3978f26ad537d0efb9d22797deeac3886a8dae4bd62d624e166bac8d5\" with timeout 2 (s)"
May 8 00:01:06.665993 containerd[1453]: time="2025-05-08T00:01:06.665963071Z" level=info msg="Stop container \"12fb5df3978f26ad537d0efb9d22797deeac3886a8dae4bd62d624e166bac8d5\" with signal terminated"
May 8 00:01:06.671747 systemd-networkd[1389]: lxc_health: Link DOWN
May 8 00:01:06.672081 systemd-networkd[1389]: lxc_health: Lost carrier
May 8 00:01:06.688860 systemd[1]: cri-containerd-12fb5df3978f26ad537d0efb9d22797deeac3886a8dae4bd62d624e166bac8d5.scope: Deactivated successfully.
May 8 00:01:06.690324 systemd[1]: cri-containerd-12fb5df3978f26ad537d0efb9d22797deeac3886a8dae4bd62d624e166bac8d5.scope: Consumed 6.350s CPU time, 125.1M memory peak, 144K read from disk, 12.9M written to disk.
May 8 00:01:06.691951 containerd[1453]: time="2025-05-08T00:01:06.691808867Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 8 00:01:06.705806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12fb5df3978f26ad537d0efb9d22797deeac3886a8dae4bd62d624e166bac8d5-rootfs.mount: Deactivated successfully.
May 8 00:01:06.712852 containerd[1453]: time="2025-05-08T00:01:06.712792371Z" level=info msg="shim disconnected" id=12fb5df3978f26ad537d0efb9d22797deeac3886a8dae4bd62d624e166bac8d5 namespace=k8s.io
May 8 00:01:06.712852 containerd[1453]: time="2025-05-08T00:01:06.712845092Z" level=warning msg="cleaning up after shim disconnected" id=12fb5df3978f26ad537d0efb9d22797deeac3886a8dae4bd62d624e166bac8d5 namespace=k8s.io
May 8 00:01:06.712852 containerd[1453]: time="2025-05-08T00:01:06.712855372Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:01:06.717720 containerd[1453]: time="2025-05-08T00:01:06.717673183Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:01:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 8 00:01:06.721647 containerd[1453]: time="2025-05-08T00:01:06.721616345Z" level=info msg="StopContainer for \"0d9dbd4560a8e45e1d9a78377450aa2a47497b0fd951529eb1484efaf615244f\" returns successfully"
May 8 00:01:06.722652 containerd[1453]: time="2025-05-08T00:01:06.722623596Z" level=info msg="StopPodSandbox for \"88fd4e6d19c8a61c343f2410d38563d4630f0d1441836388a7eb9a748d07e060\""
May 8 00:01:06.730774 containerd[1453]: time="2025-05-08T00:01:06.730730523Z" level=info msg="Container to stop \"0d9dbd4560a8e45e1d9a78377450aa2a47497b0fd951529eb1484efaf615244f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:01:06.732599 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-88fd4e6d19c8a61c343f2410d38563d4630f0d1441836388a7eb9a748d07e060-shm.mount: Deactivated successfully.
May 8 00:01:06.733753 containerd[1453]: time="2025-05-08T00:01:06.733720995Z" level=info msg="StopContainer for \"12fb5df3978f26ad537d0efb9d22797deeac3886a8dae4bd62d624e166bac8d5\" returns successfully"
May 8 00:01:06.734431 containerd[1453]: time="2025-05-08T00:01:06.734407402Z" level=info msg="StopPodSandbox for \"d266e79968727a00824a45cffd8ca72e67998422514c8a663b0bc00a2c6fb078\""
May 8 00:01:06.734499 containerd[1453]: time="2025-05-08T00:01:06.734440842Z" level=info msg="Container to stop \"8195e5e4219fd886a05c24735a1c5d1a07dac803c5d923257a5c8810529df597\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:01:06.734499 containerd[1453]: time="2025-05-08T00:01:06.734452523Z" level=info msg="Container to stop \"12fb5df3978f26ad537d0efb9d22797deeac3886a8dae4bd62d624e166bac8d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:01:06.734499 containerd[1453]: time="2025-05-08T00:01:06.734461203Z" level=info msg="Container to stop \"b151f1c39fe74aa1866251c20f52d1ae54e20a6958a10146e36db9973c2fa756\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:01:06.734499 containerd[1453]: time="2025-05-08T00:01:06.734469123Z" level=info msg="Container to stop \"3889ad8c358f4ec911e9c0844efa6c31cd5b8c00ee697301fe0e7297893cebf3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:01:06.734499 containerd[1453]: time="2025-05-08T00:01:06.734477003Z" level=info msg="Container to stop \"c3260e6350401ee5a4601d75230ea605ae62d1e8b2660dea20baa3e43422d89f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:01:06.736653 systemd[1]: cri-containerd-88fd4e6d19c8a61c343f2410d38563d4630f0d1441836388a7eb9a748d07e060.scope: Deactivated successfully.
May 8 00:01:06.739485 systemd[1]: cri-containerd-d266e79968727a00824a45cffd8ca72e67998422514c8a663b0bc00a2c6fb078.scope: Deactivated successfully.
May 8 00:01:06.780301 containerd[1453]: time="2025-05-08T00:01:06.778850797Z" level=info msg="shim disconnected" id=88fd4e6d19c8a61c343f2410d38563d4630f0d1441836388a7eb9a748d07e060 namespace=k8s.io
May 8 00:01:06.780301 containerd[1453]: time="2025-05-08T00:01:06.778909397Z" level=warning msg="cleaning up after shim disconnected" id=88fd4e6d19c8a61c343f2410d38563d4630f0d1441836388a7eb9a748d07e060 namespace=k8s.io
May 8 00:01:06.780301 containerd[1453]: time="2025-05-08T00:01:06.778917677Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:01:06.780301 containerd[1453]: time="2025-05-08T00:01:06.778866797Z" level=info msg="shim disconnected" id=d266e79968727a00824a45cffd8ca72e67998422514c8a663b0bc00a2c6fb078 namespace=k8s.io
May 8 00:01:06.780301 containerd[1453]: time="2025-05-08T00:01:06.778994998Z" level=warning msg="cleaning up after shim disconnected" id=d266e79968727a00824a45cffd8ca72e67998422514c8a663b0bc00a2c6fb078 namespace=k8s.io
May 8 00:01:06.780301 containerd[1453]: time="2025-05-08T00:01:06.779007958Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:01:06.797292 containerd[1453]: time="2025-05-08T00:01:06.797234313Z" level=info msg="TearDown network for sandbox \"88fd4e6d19c8a61c343f2410d38563d4630f0d1441836388a7eb9a748d07e060\" successfully"
May 8 00:01:06.797292 containerd[1453]: time="2025-05-08T00:01:06.797284994Z" level=info msg="StopPodSandbox for \"88fd4e6d19c8a61c343f2410d38563d4630f0d1441836388a7eb9a748d07e060\" returns successfully"
May 8 00:01:06.800786 containerd[1453]: time="2025-05-08T00:01:06.800286386Z" level=info msg="TearDown network for sandbox \"d266e79968727a00824a45cffd8ca72e67998422514c8a663b0bc00a2c6fb078\" successfully"
May 8 00:01:06.800786 containerd[1453]: time="2025-05-08T00:01:06.800309946Z" level=info msg="StopPodSandbox for \"d266e79968727a00824a45cffd8ca72e67998422514c8a663b0bc00a2c6fb078\" returns successfully"
May 8 00:01:06.865189 kubelet[2550]: I0508 00:01:06.864588 2550 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-cilium-cgroup\") pod \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\" (UID: \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") "
May 8 00:01:06.865544 kubelet[2550]: I0508 00:01:06.865355 2550 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/283c3682-07eb-4a65-b6c1-f0766a0e2485-cilium-config-path\") pod \"283c3682-07eb-4a65-b6c1-f0766a0e2485\" (UID: \"283c3682-07eb-4a65-b6c1-f0766a0e2485\") "
May 8 00:01:06.865544 kubelet[2550]: I0508 00:01:06.865421 2550 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-etc-cni-netd\") pod \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\" (UID: \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") "
May 8 00:01:06.865544 kubelet[2550]: I0508 00:01:06.865440 2550 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-cilium-run\") pod \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\" (UID: \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") "
May 8 00:01:06.865544 kubelet[2550]: I0508 00:01:06.865476 2550 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-hubble-tls\") pod \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\" (UID: \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") "
May 8 00:01:06.865544 kubelet[2550]: I0508 00:01:06.865497 2550 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s86s\" (UniqueName: \"kubernetes.io/projected/283c3682-07eb-4a65-b6c1-f0766a0e2485-kube-api-access-9s86s\") pod \"283c3682-07eb-4a65-b6c1-f0766a0e2485\" (UID: \"283c3682-07eb-4a65-b6c1-f0766a0e2485\") "
May 8 00:01:06.865544 kubelet[2550]: I0508 00:01:06.865517 2550 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-hostproc\") pod \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\" (UID: \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") "
May 8 00:01:06.865681 kubelet[2550]: I0508 00:01:06.865533 2550 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-cni-path\") pod \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\" (UID: \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") "
May 8 00:01:06.865681 kubelet[2550]: I0508 00:01:06.865577 2550 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-cilium-config-path\") pod \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\" (UID: \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") "
May 8 00:01:06.865681 kubelet[2550]: I0508 00:01:06.865593 2550 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-bpf-maps\") pod \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\" (UID: \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") "
May 8 00:01:06.865681 kubelet[2550]: I0508 00:01:06.865611 2550 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-xtables-lock\") pod \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\" (UID: \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") "
May 8 00:01:06.865681 kubelet[2550]: I0508 00:01:06.865644 2550 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-host-proc-sys-net\") pod \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\" (UID: \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") "
May 8 00:01:06.865681 kubelet[2550]: I0508 00:01:06.865664 2550 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-host-proc-sys-kernel\") pod \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\" (UID: \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") "
May 8 00:01:06.865806 kubelet[2550]: I0508 00:01:06.865679 2550 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-lib-modules\") pod \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\" (UID: \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") "
May 8 00:01:06.865806 kubelet[2550]: I0508 00:01:06.865696 2550 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-clustermesh-secrets\") pod \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\" (UID: \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") "
May 8 00:01:06.865806 kubelet[2550]: I0508 00:01:06.865735 2550 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tkmz\" (UniqueName: \"kubernetes.io/projected/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-kube-api-access-8tkmz\") pod \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\" (UID: \"2a1d1d83-bccf-4511-9e39-3c2e49cae2bd\") "
May 8 00:01:06.868900 kubelet[2550]: I0508 00:01:06.868571 2550 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd" (UID: "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:01:06.868900 kubelet[2550]: I0508 00:01:06.868630 2550 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd" (UID: "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:01:06.868900 kubelet[2550]: I0508 00:01:06.868646 2550 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd" (UID: "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:01:06.868900 kubelet[2550]: I0508 00:01:06.868661 2550 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd" (UID: "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:01:06.870404 kubelet[2550]: I0508 00:01:06.870366 2550 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/283c3682-07eb-4a65-b6c1-f0766a0e2485-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "283c3682-07eb-4a65-b6c1-f0766a0e2485" (UID: "283c3682-07eb-4a65-b6c1-f0766a0e2485"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 8 00:01:06.870451 kubelet[2550]: I0508 00:01:06.870426 2550 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd" (UID: "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:01:06.870451 kubelet[2550]: I0508 00:01:06.870445 2550 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd" (UID: "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:01:06.870499 kubelet[2550]: I0508 00:01:06.870459 2550 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd" (UID: "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:01:06.870499 kubelet[2550]: I0508 00:01:06.870474 2550 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd" (UID: "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:01:06.870499 kubelet[2550]: I0508 00:01:06.870488 2550 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-cni-path" (OuterVolumeSpecName: "cni-path") pod "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd" (UID: "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:01:06.870578 kubelet[2550]: I0508 00:01:06.870503 2550 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-hostproc" (OuterVolumeSpecName: "hostproc") pod "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd" (UID: "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:01:06.872420 kubelet[2550]: I0508 00:01:06.872384 2550 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd" (UID: "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 8 00:01:06.872490 kubelet[2550]: I0508 00:01:06.872472 2550 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-kube-api-access-8tkmz" (OuterVolumeSpecName: "kube-api-access-8tkmz") pod "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd" (UID: "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd"). InnerVolumeSpecName "kube-api-access-8tkmz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 8 00:01:06.873120 kubelet[2550]: I0508 00:01:06.873082 2550 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd" (UID: "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 8 00:01:06.873120 kubelet[2550]: I0508 00:01:06.873089 2550 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/283c3682-07eb-4a65-b6c1-f0766a0e2485-kube-api-access-9s86s" (OuterVolumeSpecName: "kube-api-access-9s86s") pod "283c3682-07eb-4a65-b6c1-f0766a0e2485" (UID: "283c3682-07eb-4a65-b6c1-f0766a0e2485"). InnerVolumeSpecName "kube-api-access-9s86s". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 8 00:01:06.874674 kubelet[2550]: I0508 00:01:06.874646 2550 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd" (UID: "2a1d1d83-bccf-4511-9e39-3c2e49cae2bd"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 8 00:01:06.966292 kubelet[2550]: I0508 00:01:06.966249 2550 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 8 00:01:06.966292 kubelet[2550]: I0508 00:01:06.966293 2550 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 8 00:01:06.966387 kubelet[2550]: I0508 00:01:06.966305 2550 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 8 00:01:06.966387 kubelet[2550]: I0508 00:01:06.966316 2550 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 8 00:01:06.966387 kubelet[2550]: I0508 00:01:06.966324 2550 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 8 00:01:06.966387 kubelet[2550]: I0508 00:01:06.966332 2550 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8tkmz\" (UniqueName: \"kubernetes.io/projected/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-kube-api-access-8tkmz\") on node \"localhost\" DevicePath \"\""
May 8 00:01:06.966387 kubelet[2550]: I0508 00:01:06.966340 2550 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-lib-modules\") on node \"localhost\" DevicePath \"\""
May 8 00:01:06.966387 kubelet[2550]:
I0508 00:01:06.966347 2550 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 8 00:01:06.966387 kubelet[2550]: I0508 00:01:06.966356 2550 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 8 00:01:06.966387 kubelet[2550]: I0508 00:01:06.966365 2550 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/283c3682-07eb-4a65-b6c1-f0766a0e2485-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 8 00:01:06.966548 kubelet[2550]: I0508 00:01:06.966374 2550 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 8 00:01:06.966548 kubelet[2550]: I0508 00:01:06.966382 2550 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-cilium-run\") on node \"localhost\" DevicePath \"\"" May 8 00:01:06.966548 kubelet[2550]: I0508 00:01:06.966389 2550 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 8 00:01:06.966548 kubelet[2550]: I0508 00:01:06.966397 2550 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-hostproc\") on node \"localhost\" DevicePath \"\"" May 8 00:01:06.966548 kubelet[2550]: I0508 00:01:06.966405 2550 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd-cni-path\") on node \"localhost\" DevicePath \"\"" May 8 00:01:06.966548 kubelet[2550]: I0508 00:01:06.966413 2550 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9s86s\" (UniqueName: \"kubernetes.io/projected/283c3682-07eb-4a65-b6c1-f0766a0e2485-kube-api-access-9s86s\") on node \"localhost\" DevicePath \"\"" May 8 00:01:07.462728 kubelet[2550]: E0508 00:01:07.462696 2550 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 00:01:07.642246 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d266e79968727a00824a45cffd8ca72e67998422514c8a663b0bc00a2c6fb078-rootfs.mount: Deactivated successfully. May 8 00:01:07.642366 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d266e79968727a00824a45cffd8ca72e67998422514c8a663b0bc00a2c6fb078-shm.mount: Deactivated successfully. May 8 00:01:07.642427 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88fd4e6d19c8a61c343f2410d38563d4630f0d1441836388a7eb9a748d07e060-rootfs.mount: Deactivated successfully. May 8 00:01:07.642479 systemd[1]: var-lib-kubelet-pods-2a1d1d83\x2dbccf\x2d4511\x2d9e39\x2d3c2e49cae2bd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8tkmz.mount: Deactivated successfully. May 8 00:01:07.642529 systemd[1]: var-lib-kubelet-pods-283c3682\x2d07eb\x2d4a65\x2db6c1\x2df0766a0e2485-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9s86s.mount: Deactivated successfully. May 8 00:01:07.642578 systemd[1]: var-lib-kubelet-pods-2a1d1d83\x2dbccf\x2d4511\x2d9e39\x2d3c2e49cae2bd-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 8 00:01:07.642630 systemd[1]: var-lib-kubelet-pods-2a1d1d83\x2dbccf\x2d4511\x2d9e39\x2d3c2e49cae2bd-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 8 00:01:07.645668 kubelet[2550]: I0508 00:01:07.645619 2550 scope.go:117] "RemoveContainer" containerID="12fb5df3978f26ad537d0efb9d22797deeac3886a8dae4bd62d624e166bac8d5" May 8 00:01:07.648562 containerd[1453]: time="2025-05-08T00:01:07.648372172Z" level=info msg="RemoveContainer for \"12fb5df3978f26ad537d0efb9d22797deeac3886a8dae4bd62d624e166bac8d5\"" May 8 00:01:07.656757 containerd[1453]: time="2025-05-08T00:01:07.655167682Z" level=info msg="RemoveContainer for \"12fb5df3978f26ad537d0efb9d22797deeac3886a8dae4bd62d624e166bac8d5\" returns successfully" May 8 00:01:07.656416 systemd[1]: Removed slice kubepods-besteffort-pod283c3682_07eb_4a65_b6c1_f0766a0e2485.slice - libcontainer container kubepods-besteffort-pod283c3682_07eb_4a65_b6c1_f0766a0e2485.slice. May 8 00:01:07.658032 kubelet[2550]: I0508 00:01:07.655436 2550 scope.go:117] "RemoveContainer" containerID="c3260e6350401ee5a4601d75230ea605ae62d1e8b2660dea20baa3e43422d89f" May 8 00:01:07.658093 containerd[1453]: time="2025-05-08T00:01:07.657628467Z" level=info msg="RemoveContainer for \"c3260e6350401ee5a4601d75230ea605ae62d1e8b2660dea20baa3e43422d89f\"" May 8 00:01:07.657525 systemd[1]: Removed slice kubepods-burstable-pod2a1d1d83_bccf_4511_9e39_3c2e49cae2bd.slice - libcontainer container kubepods-burstable-pod2a1d1d83_bccf_4511_9e39_3c2e49cae2bd.slice. May 8 00:01:07.657605 systemd[1]: kubepods-burstable-pod2a1d1d83_bccf_4511_9e39_3c2e49cae2bd.slice: Consumed 6.514s CPU time, 125.5M memory peak, 156K read from disk, 12.9M written to disk. 
May 8 00:01:07.660985 containerd[1453]: time="2025-05-08T00:01:07.660956742Z" level=info msg="RemoveContainer for \"c3260e6350401ee5a4601d75230ea605ae62d1e8b2660dea20baa3e43422d89f\" returns successfully" May 8 00:01:07.661191 kubelet[2550]: I0508 00:01:07.661139 2550 scope.go:117] "RemoveContainer" containerID="8195e5e4219fd886a05c24735a1c5d1a07dac803c5d923257a5c8810529df597" May 8 00:01:07.662747 containerd[1453]: time="2025-05-08T00:01:07.662715480Z" level=info msg="RemoveContainer for \"8195e5e4219fd886a05c24735a1c5d1a07dac803c5d923257a5c8810529df597\"" May 8 00:01:07.666005 containerd[1453]: time="2025-05-08T00:01:07.665975034Z" level=info msg="RemoveContainer for \"8195e5e4219fd886a05c24735a1c5d1a07dac803c5d923257a5c8810529df597\" returns successfully" May 8 00:01:07.666304 kubelet[2550]: I0508 00:01:07.666261 2550 scope.go:117] "RemoveContainer" containerID="3889ad8c358f4ec911e9c0844efa6c31cd5b8c00ee697301fe0e7297893cebf3" May 8 00:01:07.669341 containerd[1453]: time="2025-05-08T00:01:07.669314988Z" level=info msg="RemoveContainer for \"3889ad8c358f4ec911e9c0844efa6c31cd5b8c00ee697301fe0e7297893cebf3\"" May 8 00:01:07.678813 containerd[1453]: time="2025-05-08T00:01:07.678715205Z" level=info msg="RemoveContainer for \"3889ad8c358f4ec911e9c0844efa6c31cd5b8c00ee697301fe0e7297893cebf3\" returns successfully" May 8 00:01:07.678915 kubelet[2550]: I0508 00:01:07.678888 2550 scope.go:117] "RemoveContainer" containerID="b151f1c39fe74aa1866251c20f52d1ae54e20a6958a10146e36db9973c2fa756" May 8 00:01:07.680616 containerd[1453]: time="2025-05-08T00:01:07.680591944Z" level=info msg="RemoveContainer for \"b151f1c39fe74aa1866251c20f52d1ae54e20a6958a10146e36db9973c2fa756\"" May 8 00:01:07.682850 containerd[1453]: time="2025-05-08T00:01:07.682811687Z" level=info msg="RemoveContainer for \"b151f1c39fe74aa1866251c20f52d1ae54e20a6958a10146e36db9973c2fa756\" returns successfully" May 8 00:01:07.683141 kubelet[2550]: I0508 00:01:07.683030 2550 scope.go:117] "RemoveContainer" 
containerID="12fb5df3978f26ad537d0efb9d22797deeac3886a8dae4bd62d624e166bac8d5" May 8 00:01:07.683344 containerd[1453]: time="2025-05-08T00:01:07.683312813Z" level=error msg="ContainerStatus for \"12fb5df3978f26ad537d0efb9d22797deeac3886a8dae4bd62d624e166bac8d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12fb5df3978f26ad537d0efb9d22797deeac3886a8dae4bd62d624e166bac8d5\": not found" May 8 00:01:07.692418 kubelet[2550]: E0508 00:01:07.692391 2550 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12fb5df3978f26ad537d0efb9d22797deeac3886a8dae4bd62d624e166bac8d5\": not found" containerID="12fb5df3978f26ad537d0efb9d22797deeac3886a8dae4bd62d624e166bac8d5" May 8 00:01:07.692507 kubelet[2550]: I0508 00:01:07.692428 2550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"12fb5df3978f26ad537d0efb9d22797deeac3886a8dae4bd62d624e166bac8d5"} err="failed to get container status \"12fb5df3978f26ad537d0efb9d22797deeac3886a8dae4bd62d624e166bac8d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"12fb5df3978f26ad537d0efb9d22797deeac3886a8dae4bd62d624e166bac8d5\": not found" May 8 00:01:07.692540 kubelet[2550]: I0508 00:01:07.692509 2550 scope.go:117] "RemoveContainer" containerID="c3260e6350401ee5a4601d75230ea605ae62d1e8b2660dea20baa3e43422d89f" May 8 00:01:07.692900 containerd[1453]: time="2025-05-08T00:01:07.692801070Z" level=error msg="ContainerStatus for \"c3260e6350401ee5a4601d75230ea605ae62d1e8b2660dea20baa3e43422d89f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c3260e6350401ee5a4601d75230ea605ae62d1e8b2660dea20baa3e43422d89f\": not found" May 8 00:01:07.696019 kubelet[2550]: E0508 00:01:07.692932 2550 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"c3260e6350401ee5a4601d75230ea605ae62d1e8b2660dea20baa3e43422d89f\": not found" containerID="c3260e6350401ee5a4601d75230ea605ae62d1e8b2660dea20baa3e43422d89f" May 8 00:01:07.696065 kubelet[2550]: I0508 00:01:07.696027 2550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c3260e6350401ee5a4601d75230ea605ae62d1e8b2660dea20baa3e43422d89f"} err="failed to get container status \"c3260e6350401ee5a4601d75230ea605ae62d1e8b2660dea20baa3e43422d89f\": rpc error: code = NotFound desc = an error occurred when try to find container \"c3260e6350401ee5a4601d75230ea605ae62d1e8b2660dea20baa3e43422d89f\": not found" May 8 00:01:07.696065 kubelet[2550]: I0508 00:01:07.696046 2550 scope.go:117] "RemoveContainer" containerID="8195e5e4219fd886a05c24735a1c5d1a07dac803c5d923257a5c8810529df597" May 8 00:01:07.696306 containerd[1453]: time="2025-05-08T00:01:07.696251906Z" level=error msg="ContainerStatus for \"8195e5e4219fd886a05c24735a1c5d1a07dac803c5d923257a5c8810529df597\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8195e5e4219fd886a05c24735a1c5d1a07dac803c5d923257a5c8810529df597\": not found" May 8 00:01:07.696418 kubelet[2550]: E0508 00:01:07.696404 2550 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8195e5e4219fd886a05c24735a1c5d1a07dac803c5d923257a5c8810529df597\": not found" containerID="8195e5e4219fd886a05c24735a1c5d1a07dac803c5d923257a5c8810529df597" May 8 00:01:07.696459 kubelet[2550]: I0508 00:01:07.696423 2550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8195e5e4219fd886a05c24735a1c5d1a07dac803c5d923257a5c8810529df597"} err="failed to get container status \"8195e5e4219fd886a05c24735a1c5d1a07dac803c5d923257a5c8810529df597\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"8195e5e4219fd886a05c24735a1c5d1a07dac803c5d923257a5c8810529df597\": not found" May 8 00:01:07.696459 kubelet[2550]: I0508 00:01:07.696437 2550 scope.go:117] "RemoveContainer" containerID="3889ad8c358f4ec911e9c0844efa6c31cd5b8c00ee697301fe0e7297893cebf3" May 8 00:01:07.696790 containerd[1453]: time="2025-05-08T00:01:07.696629110Z" level=error msg="ContainerStatus for \"3889ad8c358f4ec911e9c0844efa6c31cd5b8c00ee697301fe0e7297893cebf3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3889ad8c358f4ec911e9c0844efa6c31cd5b8c00ee697301fe0e7297893cebf3\": not found" May 8 00:01:07.696956 kubelet[2550]: E0508 00:01:07.696760 2550 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3889ad8c358f4ec911e9c0844efa6c31cd5b8c00ee697301fe0e7297893cebf3\": not found" containerID="3889ad8c358f4ec911e9c0844efa6c31cd5b8c00ee697301fe0e7297893cebf3" May 8 00:01:07.696956 kubelet[2550]: I0508 00:01:07.696898 2550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3889ad8c358f4ec911e9c0844efa6c31cd5b8c00ee697301fe0e7297893cebf3"} err="failed to get container status \"3889ad8c358f4ec911e9c0844efa6c31cd5b8c00ee697301fe0e7297893cebf3\": rpc error: code = NotFound desc = an error occurred when try to find container \"3889ad8c358f4ec911e9c0844efa6c31cd5b8c00ee697301fe0e7297893cebf3\": not found" May 8 00:01:07.696956 kubelet[2550]: I0508 00:01:07.696915 2550 scope.go:117] "RemoveContainer" containerID="b151f1c39fe74aa1866251c20f52d1ae54e20a6958a10146e36db9973c2fa756" May 8 00:01:07.697453 containerd[1453]: time="2025-05-08T00:01:07.697373998Z" level=error msg="ContainerStatus for \"b151f1c39fe74aa1866251c20f52d1ae54e20a6958a10146e36db9973c2fa756\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"b151f1c39fe74aa1866251c20f52d1ae54e20a6958a10146e36db9973c2fa756\": not found" May 8 00:01:07.697705 kubelet[2550]: E0508 00:01:07.697605 2550 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b151f1c39fe74aa1866251c20f52d1ae54e20a6958a10146e36db9973c2fa756\": not found" containerID="b151f1c39fe74aa1866251c20f52d1ae54e20a6958a10146e36db9973c2fa756" May 8 00:01:07.697705 kubelet[2550]: I0508 00:01:07.697630 2550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b151f1c39fe74aa1866251c20f52d1ae54e20a6958a10146e36db9973c2fa756"} err="failed to get container status \"b151f1c39fe74aa1866251c20f52d1ae54e20a6958a10146e36db9973c2fa756\": rpc error: code = NotFound desc = an error occurred when try to find container \"b151f1c39fe74aa1866251c20f52d1ae54e20a6958a10146e36db9973c2fa756\": not found" May 8 00:01:07.697705 kubelet[2550]: I0508 00:01:07.697644 2550 scope.go:117] "RemoveContainer" containerID="0d9dbd4560a8e45e1d9a78377450aa2a47497b0fd951529eb1484efaf615244f" May 8 00:01:07.698886 containerd[1453]: time="2025-05-08T00:01:07.698851973Z" level=info msg="RemoveContainer for \"0d9dbd4560a8e45e1d9a78377450aa2a47497b0fd951529eb1484efaf615244f\"" May 8 00:01:07.702628 containerd[1453]: time="2025-05-08T00:01:07.702576731Z" level=info msg="RemoveContainer for \"0d9dbd4560a8e45e1d9a78377450aa2a47497b0fd951529eb1484efaf615244f\" returns successfully" May 8 00:01:07.702890 kubelet[2550]: I0508 00:01:07.702866 2550 scope.go:117] "RemoveContainer" containerID="0d9dbd4560a8e45e1d9a78377450aa2a47497b0fd951529eb1484efaf615244f" May 8 00:01:07.703146 containerd[1453]: time="2025-05-08T00:01:07.703116777Z" level=error msg="ContainerStatus for \"0d9dbd4560a8e45e1d9a78377450aa2a47497b0fd951529eb1484efaf615244f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"0d9dbd4560a8e45e1d9a78377450aa2a47497b0fd951529eb1484efaf615244f\": not found" May 8 00:01:07.703262 kubelet[2550]: E0508 00:01:07.703240 2550 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0d9dbd4560a8e45e1d9a78377450aa2a47497b0fd951529eb1484efaf615244f\": not found" containerID="0d9dbd4560a8e45e1d9a78377450aa2a47497b0fd951529eb1484efaf615244f" May 8 00:01:07.703343 kubelet[2550]: I0508 00:01:07.703267 2550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0d9dbd4560a8e45e1d9a78377450aa2a47497b0fd951529eb1484efaf615244f"} err="failed to get container status \"0d9dbd4560a8e45e1d9a78377450aa2a47497b0fd951529eb1484efaf615244f\": rpc error: code = NotFound desc = an error occurred when try to find container \"0d9dbd4560a8e45e1d9a78377450aa2a47497b0fd951529eb1484efaf615244f\": not found" May 8 00:01:08.420312 kubelet[2550]: I0508 00:01:08.420178 2550 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="283c3682-07eb-4a65-b6c1-f0766a0e2485" path="/var/lib/kubelet/pods/283c3682-07eb-4a65-b6c1-f0766a0e2485/volumes" May 8 00:01:08.420628 kubelet[2550]: I0508 00:01:08.420600 2550 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a1d1d83-bccf-4511-9e39-3c2e49cae2bd" path="/var/lib/kubelet/pods/2a1d1d83-bccf-4511-9e39-3c2e49cae2bd/volumes" May 8 00:01:08.565923 sshd[4212]: Connection closed by 10.0.0.1 port 49448 May 8 00:01:08.567143 sshd-session[4209]: pam_unix(sshd:session): session closed for user core May 8 00:01:08.582829 systemd[1]: sshd@22-10.0.0.121:22-10.0.0.1:49448.service: Deactivated successfully. May 8 00:01:08.584443 systemd[1]: session-23.scope: Deactivated successfully. May 8 00:01:08.584658 systemd[1]: session-23.scope: Consumed 2.002s CPU time, 28.7M memory peak. May 8 00:01:08.585762 systemd-logind[1439]: Session 23 logged out. Waiting for processes to exit. 
May 8 00:01:08.595555 systemd[1]: Started sshd@23-10.0.0.121:22-10.0.0.1:49460.service - OpenSSH per-connection server daemon (10.0.0.1:49460). May 8 00:01:08.596602 systemd-logind[1439]: Removed session 23. May 8 00:01:08.637382 sshd[4373]: Accepted publickey for core from 10.0.0.1 port 49460 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 8 00:01:08.638493 sshd-session[4373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:01:08.642943 systemd-logind[1439]: New session 24 of user core. May 8 00:01:08.651560 systemd[1]: Started session-24.scope - Session 24 of User core. May 8 00:01:09.603088 sshd[4376]: Connection closed by 10.0.0.1 port 49460 May 8 00:01:09.603790 sshd-session[4373]: pam_unix(sshd:session): session closed for user core May 8 00:01:09.616078 systemd[1]: sshd@23-10.0.0.121:22-10.0.0.1:49460.service: Deactivated successfully. May 8 00:01:09.619426 kubelet[2550]: I0508 00:01:09.618825 2550 memory_manager.go:355] "RemoveStaleState removing state" podUID="283c3682-07eb-4a65-b6c1-f0766a0e2485" containerName="cilium-operator" May 8 00:01:09.619426 kubelet[2550]: I0508 00:01:09.618855 2550 memory_manager.go:355] "RemoveStaleState removing state" podUID="2a1d1d83-bccf-4511-9e39-3c2e49cae2bd" containerName="cilium-agent" May 8 00:01:09.620956 systemd[1]: session-24.scope: Deactivated successfully. May 8 00:01:09.625808 systemd-logind[1439]: Session 24 logged out. Waiting for processes to exit. May 8 00:01:09.634590 systemd[1]: Started sshd@24-10.0.0.121:22-10.0.0.1:49474.service - OpenSSH per-connection server daemon (10.0.0.1:49474). May 8 00:01:09.643422 systemd-logind[1439]: Removed session 24. May 8 00:01:09.648676 systemd[1]: Created slice kubepods-burstable-pod8aa28ca2_1b35_4bee_83f5_fca58bd060ae.slice - libcontainer container kubepods-burstable-pod8aa28ca2_1b35_4bee_83f5_fca58bd060ae.slice. 
May 8 00:01:09.682144 kubelet[2550]: I0508 00:01:09.682095 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8aa28ca2-1b35-4bee-83f5-fca58bd060ae-hostproc\") pod \"cilium-g4hg2\" (UID: \"8aa28ca2-1b35-4bee-83f5-fca58bd060ae\") " pod="kube-system/cilium-g4hg2" May 8 00:01:09.682144 kubelet[2550]: I0508 00:01:09.682137 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8aa28ca2-1b35-4bee-83f5-fca58bd060ae-etc-cni-netd\") pod \"cilium-g4hg2\" (UID: \"8aa28ca2-1b35-4bee-83f5-fca58bd060ae\") " pod="kube-system/cilium-g4hg2" May 8 00:01:09.682302 kubelet[2550]: I0508 00:01:09.682157 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8aa28ca2-1b35-4bee-83f5-fca58bd060ae-hubble-tls\") pod \"cilium-g4hg2\" (UID: \"8aa28ca2-1b35-4bee-83f5-fca58bd060ae\") " pod="kube-system/cilium-g4hg2" May 8 00:01:09.682302 kubelet[2550]: I0508 00:01:09.682175 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8aa28ca2-1b35-4bee-83f5-fca58bd060ae-cilium-config-path\") pod \"cilium-g4hg2\" (UID: \"8aa28ca2-1b35-4bee-83f5-fca58bd060ae\") " pod="kube-system/cilium-g4hg2" May 8 00:01:09.682302 kubelet[2550]: I0508 00:01:09.682190 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8aa28ca2-1b35-4bee-83f5-fca58bd060ae-host-proc-sys-kernel\") pod \"cilium-g4hg2\" (UID: \"8aa28ca2-1b35-4bee-83f5-fca58bd060ae\") " pod="kube-system/cilium-g4hg2" May 8 00:01:09.682302 kubelet[2550]: I0508 00:01:09.682205 2550 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8aa28ca2-1b35-4bee-83f5-fca58bd060ae-clustermesh-secrets\") pod \"cilium-g4hg2\" (UID: \"8aa28ca2-1b35-4bee-83f5-fca58bd060ae\") " pod="kube-system/cilium-g4hg2" May 8 00:01:09.682302 kubelet[2550]: I0508 00:01:09.682222 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8aa28ca2-1b35-4bee-83f5-fca58bd060ae-cilium-ipsec-secrets\") pod \"cilium-g4hg2\" (UID: \"8aa28ca2-1b35-4bee-83f5-fca58bd060ae\") " pod="kube-system/cilium-g4hg2" May 8 00:01:09.682410 kubelet[2550]: I0508 00:01:09.682236 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8aa28ca2-1b35-4bee-83f5-fca58bd060ae-lib-modules\") pod \"cilium-g4hg2\" (UID: \"8aa28ca2-1b35-4bee-83f5-fca58bd060ae\") " pod="kube-system/cilium-g4hg2" May 8 00:01:09.682410 kubelet[2550]: I0508 00:01:09.682250 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwr4l\" (UniqueName: \"kubernetes.io/projected/8aa28ca2-1b35-4bee-83f5-fca58bd060ae-kube-api-access-dwr4l\") pod \"cilium-g4hg2\" (UID: \"8aa28ca2-1b35-4bee-83f5-fca58bd060ae\") " pod="kube-system/cilium-g4hg2" May 8 00:01:09.682410 kubelet[2550]: I0508 00:01:09.682284 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8aa28ca2-1b35-4bee-83f5-fca58bd060ae-cni-path\") pod \"cilium-g4hg2\" (UID: \"8aa28ca2-1b35-4bee-83f5-fca58bd060ae\") " pod="kube-system/cilium-g4hg2" May 8 00:01:09.682410 kubelet[2550]: I0508 00:01:09.682303 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/8aa28ca2-1b35-4bee-83f5-fca58bd060ae-xtables-lock\") pod \"cilium-g4hg2\" (UID: \"8aa28ca2-1b35-4bee-83f5-fca58bd060ae\") " pod="kube-system/cilium-g4hg2" May 8 00:01:09.682410 kubelet[2550]: I0508 00:01:09.682321 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8aa28ca2-1b35-4bee-83f5-fca58bd060ae-host-proc-sys-net\") pod \"cilium-g4hg2\" (UID: \"8aa28ca2-1b35-4bee-83f5-fca58bd060ae\") " pod="kube-system/cilium-g4hg2" May 8 00:01:09.682410 kubelet[2550]: I0508 00:01:09.682337 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8aa28ca2-1b35-4bee-83f5-fca58bd060ae-cilium-run\") pod \"cilium-g4hg2\" (UID: \"8aa28ca2-1b35-4bee-83f5-fca58bd060ae\") " pod="kube-system/cilium-g4hg2" May 8 00:01:09.682530 kubelet[2550]: I0508 00:01:09.682354 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8aa28ca2-1b35-4bee-83f5-fca58bd060ae-bpf-maps\") pod \"cilium-g4hg2\" (UID: \"8aa28ca2-1b35-4bee-83f5-fca58bd060ae\") " pod="kube-system/cilium-g4hg2" May 8 00:01:09.682530 kubelet[2550]: I0508 00:01:09.682370 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8aa28ca2-1b35-4bee-83f5-fca58bd060ae-cilium-cgroup\") pod \"cilium-g4hg2\" (UID: \"8aa28ca2-1b35-4bee-83f5-fca58bd060ae\") " pod="kube-system/cilium-g4hg2" May 8 00:01:09.688203 sshd[4387]: Accepted publickey for core from 10.0.0.1 port 49474 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 8 00:01:09.689403 sshd-session[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:01:09.693336 systemd-logind[1439]: New session 25 of user 
core.
May 8 00:01:09.703433 systemd[1]: Started session-25.scope - Session 25 of User core.
May 8 00:01:09.752951 sshd[4390]: Connection closed by 10.0.0.1 port 49474
May 8 00:01:09.753413 sshd-session[4387]: pam_unix(sshd:session): session closed for user core
May 8 00:01:09.763656 systemd[1]: sshd@24-10.0.0.121:22-10.0.0.1:49474.service: Deactivated successfully.
May 8 00:01:09.765513 systemd[1]: session-25.scope: Deactivated successfully.
May 8 00:01:09.766890 systemd-logind[1439]: Session 25 logged out. Waiting for processes to exit.
May 8 00:01:09.777544 systemd[1]: Started sshd@25-10.0.0.121:22-10.0.0.1:49478.service - OpenSSH per-connection server daemon (10.0.0.1:49478).
May 8 00:01:09.778591 systemd-logind[1439]: Removed session 25.
May 8 00:01:09.828854 sshd[4396]: Accepted publickey for core from 10.0.0.1 port 49478 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 8 00:01:09.829969 sshd-session[4396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:01:09.833613 systemd-logind[1439]: New session 26 of user core.
May 8 00:01:09.848420 systemd[1]: Started session-26.scope - Session 26 of User core.
May 8 00:01:09.953608 containerd[1453]: time="2025-05-08T00:01:09.953487455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g4hg2,Uid:8aa28ca2-1b35-4bee-83f5-fca58bd060ae,Namespace:kube-system,Attempt:0,}"
May 8 00:01:09.973236 containerd[1453]: time="2025-05-08T00:01:09.972631319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:01:09.973236 containerd[1453]: time="2025-05-08T00:01:09.973031843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:01:09.973236 containerd[1453]: time="2025-05-08T00:01:09.973044003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:01:09.973236 containerd[1453]: time="2025-05-08T00:01:09.973135004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:01:09.993493 systemd[1]: Started cri-containerd-6fd23fdde52f6d11697c9ca5590571cec31ad57d63745016940367a21b057d73.scope - libcontainer container 6fd23fdde52f6d11697c9ca5590571cec31ad57d63745016940367a21b057d73.
May 8 00:01:10.012003 containerd[1453]: time="2025-05-08T00:01:10.011966935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g4hg2,Uid:8aa28ca2-1b35-4bee-83f5-fca58bd060ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fd23fdde52f6d11697c9ca5590571cec31ad57d63745016940367a21b057d73\""
May 8 00:01:10.016081 containerd[1453]: time="2025-05-08T00:01:10.015964652Z" level=info msg="CreateContainer within sandbox \"6fd23fdde52f6d11697c9ca5590571cec31ad57d63745016940367a21b057d73\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 8 00:01:10.024969 containerd[1453]: time="2025-05-08T00:01:10.024933056Z" level=info msg="CreateContainer within sandbox \"6fd23fdde52f6d11697c9ca5590571cec31ad57d63745016940367a21b057d73\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"166fed42c356d0c7bb99089f6f70c4ddbf157ad66ededae7fb1288eeff851b4b\""
May 8 00:01:10.025468 containerd[1453]: time="2025-05-08T00:01:10.025380660Z" level=info msg="StartContainer for \"166fed42c356d0c7bb99089f6f70c4ddbf157ad66ededae7fb1288eeff851b4b\""
May 8 00:01:10.049428 systemd[1]: Started cri-containerd-166fed42c356d0c7bb99089f6f70c4ddbf157ad66ededae7fb1288eeff851b4b.scope - libcontainer container 166fed42c356d0c7bb99089f6f70c4ddbf157ad66ededae7fb1288eeff851b4b.
May 8 00:01:10.070534 containerd[1453]: time="2025-05-08T00:01:10.070484000Z" level=info msg="StartContainer for \"166fed42c356d0c7bb99089f6f70c4ddbf157ad66ededae7fb1288eeff851b4b\" returns successfully"
May 8 00:01:10.087267 systemd[1]: cri-containerd-166fed42c356d0c7bb99089f6f70c4ddbf157ad66ededae7fb1288eeff851b4b.scope: Deactivated successfully.
May 8 00:01:10.120829 containerd[1453]: time="2025-05-08T00:01:10.120771589Z" level=info msg="shim disconnected" id=166fed42c356d0c7bb99089f6f70c4ddbf157ad66ededae7fb1288eeff851b4b namespace=k8s.io
May 8 00:01:10.120829 containerd[1453]: time="2025-05-08T00:01:10.120826949Z" level=warning msg="cleaning up after shim disconnected" id=166fed42c356d0c7bb99089f6f70c4ddbf157ad66ededae7fb1288eeff851b4b namespace=k8s.io
May 8 00:01:10.120829 containerd[1453]: time="2025-05-08T00:01:10.120836229Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:01:10.131316 containerd[1453]: time="2025-05-08T00:01:10.130357878Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:01:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 8 00:01:10.664049 containerd[1453]: time="2025-05-08T00:01:10.663788489Z" level=info msg="CreateContainer within sandbox \"6fd23fdde52f6d11697c9ca5590571cec31ad57d63745016940367a21b057d73\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 8 00:01:10.686408 containerd[1453]: time="2025-05-08T00:01:10.686355100Z" level=info msg="CreateContainer within sandbox \"6fd23fdde52f6d11697c9ca5590571cec31ad57d63745016940367a21b057d73\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a76323c3ebe2efb0daee834975502041b9b808fd2791a0c976479462a0201d11\""
May 8 00:01:10.686905 containerd[1453]: time="2025-05-08T00:01:10.686861064Z" level=info msg="StartContainer for \"a76323c3ebe2efb0daee834975502041b9b808fd2791a0c976479462a0201d11\""
May 8 00:01:10.716531 systemd[1]: Started cri-containerd-a76323c3ebe2efb0daee834975502041b9b808fd2791a0c976479462a0201d11.scope - libcontainer container a76323c3ebe2efb0daee834975502041b9b808fd2791a0c976479462a0201d11.
May 8 00:01:10.738987 containerd[1453]: time="2025-05-08T00:01:10.738943710Z" level=info msg="StartContainer for \"a76323c3ebe2efb0daee834975502041b9b808fd2791a0c976479462a0201d11\" returns successfully"
May 8 00:01:10.746778 systemd[1]: cri-containerd-a76323c3ebe2efb0daee834975502041b9b808fd2791a0c976479462a0201d11.scope: Deactivated successfully.
May 8 00:01:10.766207 containerd[1453]: time="2025-05-08T00:01:10.766139643Z" level=info msg="shim disconnected" id=a76323c3ebe2efb0daee834975502041b9b808fd2791a0c976479462a0201d11 namespace=k8s.io
May 8 00:01:10.766207 containerd[1453]: time="2025-05-08T00:01:10.766191964Z" level=warning msg="cleaning up after shim disconnected" id=a76323c3ebe2efb0daee834975502041b9b808fd2791a0c976479462a0201d11 namespace=k8s.io
May 8 00:01:10.766207 containerd[1453]: time="2025-05-08T00:01:10.766200964Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:01:11.668872 containerd[1453]: time="2025-05-08T00:01:11.668820966Z" level=info msg="CreateContainer within sandbox \"6fd23fdde52f6d11697c9ca5590571cec31ad57d63745016940367a21b057d73\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 8 00:01:11.688137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2096002065.mount: Deactivated successfully.
May 8 00:01:11.690067 containerd[1453]: time="2025-05-08T00:01:11.690021957Z" level=info msg="CreateContainer within sandbox \"6fd23fdde52f6d11697c9ca5590571cec31ad57d63745016940367a21b057d73\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"34e7e1013cc80c99f65b6dcef7a2831ca3fec1fafce6f8b8e75cdc1c36126a07\""
May 8 00:01:11.690572 containerd[1453]: time="2025-05-08T00:01:11.690505401Z" level=info msg="StartContainer for \"34e7e1013cc80c99f65b6dcef7a2831ca3fec1fafce6f8b8e75cdc1c36126a07\""
May 8 00:01:11.727499 systemd[1]: Started cri-containerd-34e7e1013cc80c99f65b6dcef7a2831ca3fec1fafce6f8b8e75cdc1c36126a07.scope - libcontainer container 34e7e1013cc80c99f65b6dcef7a2831ca3fec1fafce6f8b8e75cdc1c36126a07.
May 8 00:01:11.754840 containerd[1453]: time="2025-05-08T00:01:11.754799020Z" level=info msg="StartContainer for \"34e7e1013cc80c99f65b6dcef7a2831ca3fec1fafce6f8b8e75cdc1c36126a07\" returns successfully"
May 8 00:01:11.755944 systemd[1]: cri-containerd-34e7e1013cc80c99f65b6dcef7a2831ca3fec1fafce6f8b8e75cdc1c36126a07.scope: Deactivated successfully.
May 8 00:01:11.786945 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34e7e1013cc80c99f65b6dcef7a2831ca3fec1fafce6f8b8e75cdc1c36126a07-rootfs.mount: Deactivated successfully.
May 8 00:01:11.790581 containerd[1453]: time="2025-05-08T00:01:11.790529862Z" level=info msg="shim disconnected" id=34e7e1013cc80c99f65b6dcef7a2831ca3fec1fafce6f8b8e75cdc1c36126a07 namespace=k8s.io
May 8 00:01:11.790581 containerd[1453]: time="2025-05-08T00:01:11.790579942Z" level=warning msg="cleaning up after shim disconnected" id=34e7e1013cc80c99f65b6dcef7a2831ca3fec1fafce6f8b8e75cdc1c36126a07 namespace=k8s.io
May 8 00:01:11.790767 containerd[1453]: time="2025-05-08T00:01:11.790589342Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:01:12.464074 kubelet[2550]: E0508 00:01:12.464040 2550 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 8 00:01:12.670375 containerd[1453]: time="2025-05-08T00:01:12.670327781Z" level=info msg="CreateContainer within sandbox \"6fd23fdde52f6d11697c9ca5590571cec31ad57d63745016940367a21b057d73\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 8 00:01:12.689423 containerd[1453]: time="2025-05-08T00:01:12.689373107Z" level=info msg="CreateContainer within sandbox \"6fd23fdde52f6d11697c9ca5590571cec31ad57d63745016940367a21b057d73\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c88656fc6def54702076c0e4ba8802456483d73dcbefebfc46ad780c76eb66ed\""
May 8 00:01:12.690265 containerd[1453]: time="2025-05-08T00:01:12.690117033Z" level=info msg="StartContainer for \"c88656fc6def54702076c0e4ba8802456483d73dcbefebfc46ad780c76eb66ed\""
May 8 00:01:12.718486 systemd[1]: Started cri-containerd-c88656fc6def54702076c0e4ba8802456483d73dcbefebfc46ad780c76eb66ed.scope - libcontainer container c88656fc6def54702076c0e4ba8802456483d73dcbefebfc46ad780c76eb66ed.
May 8 00:01:12.742044 containerd[1453]: time="2025-05-08T00:01:12.741920484Z" level=info msg="StartContainer for \"c88656fc6def54702076c0e4ba8802456483d73dcbefebfc46ad780c76eb66ed\" returns successfully"
May 8 00:01:12.742489 systemd[1]: cri-containerd-c88656fc6def54702076c0e4ba8802456483d73dcbefebfc46ad780c76eb66ed.scope: Deactivated successfully.
May 8 00:01:12.768318 containerd[1453]: time="2025-05-08T00:01:12.768250873Z" level=info msg="shim disconnected" id=c88656fc6def54702076c0e4ba8802456483d73dcbefebfc46ad780c76eb66ed namespace=k8s.io
May 8 00:01:12.768672 containerd[1453]: time="2025-05-08T00:01:12.768510955Z" level=warning msg="cleaning up after shim disconnected" id=c88656fc6def54702076c0e4ba8802456483d73dcbefebfc46ad780c76eb66ed namespace=k8s.io
May 8 00:01:12.768672 containerd[1453]: time="2025-05-08T00:01:12.768528956Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:01:12.786927 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c88656fc6def54702076c0e4ba8802456483d73dcbefebfc46ad780c76eb66ed-rootfs.mount: Deactivated successfully.
May 8 00:01:13.675568 containerd[1453]: time="2025-05-08T00:01:13.675513849Z" level=info msg="CreateContainer within sandbox \"6fd23fdde52f6d11697c9ca5590571cec31ad57d63745016940367a21b057d73\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 8 00:01:13.695359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount321736961.mount: Deactivated successfully.
May 8 00:01:13.710094 containerd[1453]: time="2025-05-08T00:01:13.710037379Z" level=info msg="CreateContainer within sandbox \"6fd23fdde52f6d11697c9ca5590571cec31ad57d63745016940367a21b057d73\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"32c16816ff07519f6bdbe9960c3968199d87606e24b3513957b53c52586c9c84\""
May 8 00:01:13.711468 containerd[1453]: time="2025-05-08T00:01:13.711423391Z" level=info msg="StartContainer for \"32c16816ff07519f6bdbe9960c3968199d87606e24b3513957b53c52586c9c84\""
May 8 00:01:13.749452 systemd[1]: Started cri-containerd-32c16816ff07519f6bdbe9960c3968199d87606e24b3513957b53c52586c9c84.scope - libcontainer container 32c16816ff07519f6bdbe9960c3968199d87606e24b3513957b53c52586c9c84.
May 8 00:01:13.773222 containerd[1453]: time="2025-05-08T00:01:13.773179350Z" level=info msg="StartContainer for \"32c16816ff07519f6bdbe9960c3968199d87606e24b3513957b53c52586c9c84\" returns successfully"
May 8 00:01:14.048309 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 8 00:01:14.235104 kubelet[2550]: I0508 00:01:14.234665 2550 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-08T00:01:14Z","lastTransitionTime":"2025-05-08T00:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 8 00:01:16.181652 systemd[1]: run-containerd-runc-k8s.io-32c16816ff07519f6bdbe9960c3968199d87606e24b3513957b53c52586c9c84-runc.rM3eIs.mount: Deactivated successfully.
May 8 00:01:16.919009 systemd-networkd[1389]: lxc_health: Link UP
May 8 00:01:16.923421 systemd-networkd[1389]: lxc_health: Gained carrier
May 8 00:01:18.015428 kubelet[2550]: I0508 00:01:18.014392 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g4hg2" podStartSLOduration=9.014373942 podStartE2EDuration="9.014373942s" podCreationTimestamp="2025-05-08 00:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:01:14.691722235 +0000 UTC m=+82.354049564" watchObservedRunningTime="2025-05-08 00:01:18.014373942 +0000 UTC m=+85.676701271"
May 8 00:01:18.963421 systemd-networkd[1389]: lxc_health: Gained IPv6LL
May 8 00:01:22.539148 systemd[1]: run-containerd-runc-k8s.io-32c16816ff07519f6bdbe9960c3968199d87606e24b3513957b53c52586c9c84-runc.7N5a7C.mount: Deactivated successfully.
May 8 00:01:22.582613 sshd[4403]: Connection closed by 10.0.0.1 port 49478
May 8 00:01:22.583499 sshd-session[4396]: pam_unix(sshd:session): session closed for user core
May 8 00:01:22.589070 systemd[1]: sshd@25-10.0.0.121:22-10.0.0.1:49478.service: Deactivated successfully.
May 8 00:01:22.591347 systemd[1]: session-26.scope: Deactivated successfully.
May 8 00:01:22.592825 systemd-logind[1439]: Session 26 logged out. Waiting for processes to exit.
May 8 00:01:22.594356 systemd-logind[1439]: Removed session 26.