May 15 23:50:37.897301 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 15 23:50:37.897324 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu May 15 22:19:11 -00 2025
May 15 23:50:37.897334 kernel: KASLR enabled
May 15 23:50:37.897340 kernel: efi: EFI v2.7 by EDK II
May 15 23:50:37.897345 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
May 15 23:50:37.897351 kernel: random: crng init done
May 15 23:50:37.897358 kernel: secureboot: Secure boot disabled
May 15 23:50:37.897363 kernel: ACPI: Early table checksum verification disabled
May 15 23:50:37.897369 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
May 15 23:50:37.897377 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 15 23:50:37.897465 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:50:37.897478 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:50:37.897484 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:50:37.897490 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:50:37.897497 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:50:37.897508 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:50:37.897514 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:50:37.897521 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:50:37.897527 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:50:37.897533 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 15 23:50:37.897539 kernel: NUMA: Failed to initialise from firmware
May 15 23:50:37.897554 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 15 23:50:37.897561 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 15 23:50:37.897567 kernel: Zone ranges:
May 15 23:50:37.897573 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 15 23:50:37.897582 kernel: DMA32 empty
May 15 23:50:37.897588 kernel: Normal empty
May 15 23:50:37.897594 kernel: Movable zone start for each node
May 15 23:50:37.897600 kernel: Early memory node ranges
May 15 23:50:37.897606 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
May 15 23:50:37.897613 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
May 15 23:50:37.897619 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
May 15 23:50:37.897625 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 15 23:50:37.897631 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 15 23:50:37.897637 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 15 23:50:37.897643 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 15 23:50:37.897650 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 15 23:50:37.897657 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 15 23:50:37.897663 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 15 23:50:37.897670 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 15 23:50:37.897679 kernel: psci: probing for conduit method from ACPI.
May 15 23:50:37.897685 kernel: psci: PSCIv1.1 detected in firmware.
May 15 23:50:37.897692 kernel: psci: Using standard PSCI v0.2 function IDs
May 15 23:50:37.897700 kernel: psci: Trusted OS migration not required
May 15 23:50:37.897707 kernel: psci: SMC Calling Convention v1.1
May 15 23:50:37.897713 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 15 23:50:37.897720 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 15 23:50:37.897726 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 15 23:50:37.897733 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 15 23:50:37.897739 kernel: Detected PIPT I-cache on CPU0
May 15 23:50:37.897746 kernel: CPU features: detected: GIC system register CPU interface
May 15 23:50:37.897752 kernel: CPU features: detected: Hardware dirty bit management
May 15 23:50:37.897759 kernel: CPU features: detected: Spectre-v4
May 15 23:50:37.897766 kernel: CPU features: detected: Spectre-BHB
May 15 23:50:37.897773 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 15 23:50:37.897780 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 15 23:50:37.897786 kernel: CPU features: detected: ARM erratum 1418040
May 15 23:50:37.897793 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 15 23:50:37.897800 kernel: alternatives: applying boot alternatives
May 15 23:50:37.897807 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=653f7594d0757b3475730807d14614dbb7ee20dc5b8481e45608a3fadcc6677a
May 15 23:50:37.897814 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 23:50:37.897821 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 15 23:50:37.897828 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 23:50:37.897835 kernel: Fallback order for Node 0: 0
May 15 23:50:37.897845 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 15 23:50:37.897855 kernel: Policy zone: DMA
May 15 23:50:37.897863 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 23:50:37.897870 kernel: software IO TLB: area num 4.
May 15 23:50:37.897877 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 15 23:50:37.897883 kernel: Memory: 2387476K/2572288K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 184812K reserved, 0K cma-reserved)
May 15 23:50:37.897891 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 15 23:50:37.897898 kernel: rcu: Preemptible hierarchical RCU implementation.
May 15 23:50:37.897905 kernel: rcu: RCU event tracing is enabled.
May 15 23:50:37.897912 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 15 23:50:37.897919 kernel: Trampoline variant of Tasks RCU enabled.
May 15 23:50:37.897925 kernel: Tracing variant of Tasks RCU enabled.
May 15 23:50:37.897933 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 23:50:37.897940 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 15 23:50:37.897947 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 15 23:50:37.897954 kernel: GICv3: 256 SPIs implemented
May 15 23:50:37.897964 kernel: GICv3: 0 Extended SPIs implemented
May 15 23:50:37.897971 kernel: Root IRQ handler: gic_handle_irq
May 15 23:50:37.897978 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 15 23:50:37.897985 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 15 23:50:37.897992 kernel: ITS [mem 0x08080000-0x0809ffff]
May 15 23:50:37.897998 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 15 23:50:37.898005 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 15 23:50:37.898014 kernel: GICv3: using LPI property table @0x00000000400f0000
May 15 23:50:37.898022 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 15 23:50:37.898030 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 15 23:50:37.898037 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 23:50:37.898045 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 15 23:50:37.898054 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 15 23:50:37.898063 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 15 23:50:37.898069 kernel: arm-pv: using stolen time PV
May 15 23:50:37.898076 kernel: Console: colour dummy device 80x25
May 15 23:50:37.898083 kernel: ACPI: Core revision 20230628
May 15 23:50:37.898091 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 15 23:50:37.898099 kernel: pid_max: default: 32768 minimum: 301
May 15 23:50:37.898106 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 15 23:50:37.898115 kernel: landlock: Up and running.
May 15 23:50:37.898122 kernel: SELinux: Initializing.
May 15 23:50:37.898129 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 23:50:37.898137 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 23:50:37.898144 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 15 23:50:37.898152 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 23:50:37.898160 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 23:50:37.898174 kernel: rcu: Hierarchical SRCU implementation.
May 15 23:50:37.898186 kernel: rcu: Max phase no-delay instances is 400.
May 15 23:50:37.898193 kernel: Platform MSI: ITS@0x8080000 domain created
May 15 23:50:37.898200 kernel: PCI/MSI: ITS@0x8080000 domain created
May 15 23:50:37.898207 kernel: Remapping and enabling EFI services.
May 15 23:50:37.898214 kernel: smp: Bringing up secondary CPUs ...
May 15 23:50:37.898221 kernel: Detected PIPT I-cache on CPU1
May 15 23:50:37.898229 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 15 23:50:37.898236 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 15 23:50:37.898244 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 23:50:37.898251 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 15 23:50:37.898263 kernel: Detected PIPT I-cache on CPU2
May 15 23:50:37.898272 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 15 23:50:37.898279 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 15 23:50:37.898286 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 23:50:37.898293 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 15 23:50:37.898300 kernel: Detected PIPT I-cache on CPU3
May 15 23:50:37.898307 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 15 23:50:37.898314 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 15 23:50:37.898323 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 23:50:37.898330 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 15 23:50:37.898337 kernel: smp: Brought up 1 node, 4 CPUs
May 15 23:50:37.898344 kernel: SMP: Total of 4 processors activated.
May 15 23:50:37.898351 kernel: CPU features: detected: 32-bit EL0 Support
May 15 23:50:37.898358 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 15 23:50:37.898365 kernel: CPU features: detected: Common not Private translations
May 15 23:50:37.898374 kernel: CPU features: detected: CRC32 instructions
May 15 23:50:37.898382 kernel: CPU features: detected: Enhanced Virtualization Traps
May 15 23:50:37.898389 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 15 23:50:37.898396 kernel: CPU features: detected: LSE atomic instructions
May 15 23:50:37.898404 kernel: CPU features: detected: Privileged Access Never
May 15 23:50:37.898411 kernel: CPU features: detected: RAS Extension Support
May 15 23:50:37.898418 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 15 23:50:37.898425 kernel: CPU: All CPU(s) started at EL1
May 15 23:50:37.898432 kernel: alternatives: applying system-wide alternatives
May 15 23:50:37.898486 kernel: devtmpfs: initialized
May 15 23:50:37.898494 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 23:50:37.898501 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 15 23:50:37.898508 kernel: pinctrl core: initialized pinctrl subsystem
May 15 23:50:37.898515 kernel: SMBIOS 3.0.0 present.
May 15 23:50:37.898523 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 15 23:50:37.898530 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 23:50:37.898537 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 15 23:50:37.898559 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 15 23:50:37.898570 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 15 23:50:37.898577 kernel: audit: initializing netlink subsys (disabled)
May 15 23:50:37.898584 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
May 15 23:50:37.898591 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 23:50:37.898599 kernel: cpuidle: using governor menu
May 15 23:50:37.898606 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 15 23:50:37.898613 kernel: ASID allocator initialised with 32768 entries
May 15 23:50:37.898621 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 23:50:37.898628 kernel: Serial: AMBA PL011 UART driver
May 15 23:50:37.898637 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 15 23:50:37.898644 kernel: Modules: 0 pages in range for non-PLT usage
May 15 23:50:37.898651 kernel: Modules: 509264 pages in range for PLT usage
May 15 23:50:37.898658 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 15 23:50:37.898665 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 15 23:50:37.898672 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 15 23:50:37.898680 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 15 23:50:37.898687 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 15 23:50:37.898694 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 15 23:50:37.898703 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 15 23:50:37.898710 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 15 23:50:37.898717 kernel: ACPI: Added _OSI(Module Device)
May 15 23:50:37.898724 kernel: ACPI: Added _OSI(Processor Device)
May 15 23:50:37.898731 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 23:50:37.898739 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 23:50:37.898746 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 23:50:37.898753 kernel: ACPI: Interpreter enabled
May 15 23:50:37.898760 kernel: ACPI: Using GIC for interrupt routing
May 15 23:50:37.898767 kernel: ACPI: MCFG table detected, 1 entries
May 15 23:50:37.898776 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 15 23:50:37.898783 kernel: printk: console [ttyAMA0] enabled
May 15 23:50:37.898790 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 23:50:37.898954 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 15 23:50:37.899030 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 15 23:50:37.899098 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 15 23:50:37.899163 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 15 23:50:37.899234 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 15 23:50:37.899243 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 15 23:50:37.899251 kernel: PCI host bridge to bus 0000:00
May 15 23:50:37.899323 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 15 23:50:37.899386 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 15 23:50:37.899463 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 15 23:50:37.899526 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 23:50:37.899622 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 15 23:50:37.899701 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 15 23:50:37.899773 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 15 23:50:37.899844 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 15 23:50:37.899915 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 15 23:50:37.899983 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 15 23:50:37.900052 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 15 23:50:37.900128 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 15 23:50:37.900192 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 15 23:50:37.900251 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 15 23:50:37.900309 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 15 23:50:37.900318 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 15 23:50:37.900326 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 15 23:50:37.900333 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 15 23:50:37.900344 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 15 23:50:37.900352 kernel: iommu: Default domain type: Translated
May 15 23:50:37.900359 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 15 23:50:37.900366 kernel: efivars: Registered efivars operations
May 15 23:50:37.900373 kernel: vgaarb: loaded
May 15 23:50:37.900392 kernel: clocksource: Switched to clocksource arch_sys_counter
May 15 23:50:37.900399 kernel: VFS: Disk quotas dquot_6.6.0
May 15 23:50:37.900407 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 23:50:37.900415 kernel: pnp: PnP ACPI init
May 15 23:50:37.900723 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 15 23:50:37.900810 kernel: pnp: PnP ACPI: found 1 devices
May 15 23:50:37.900900 kernel: NET: Registered PF_INET protocol family
May 15 23:50:37.900910 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 23:50:37.900918 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 23:50:37.900965 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 23:50:37.900978 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 23:50:37.900986 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 15 23:50:37.901002 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 23:50:37.901009 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 23:50:37.901017 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 23:50:37.901024 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 23:50:37.901031 kernel: PCI: CLS 0 bytes, default 64
May 15 23:50:37.901038 kernel: kvm [1]: HYP mode not available
May 15 23:50:37.901045 kernel: Initialise system trusted keyrings
May 15 23:50:37.901053 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 23:50:37.901060 kernel: Key type asymmetric registered
May 15 23:50:37.901069 kernel: Asymmetric key parser 'x509' registered
May 15 23:50:37.901077 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 15 23:50:37.901116 kernel: io scheduler mq-deadline registered
May 15 23:50:37.901125 kernel: io scheduler kyber registered
May 15 23:50:37.901181 kernel: io scheduler bfq registered
May 15 23:50:37.901189 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 15 23:50:37.901200 kernel: ACPI: button: Power Button [PWRB]
May 15 23:50:37.901208 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 15 23:50:37.901320 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 15 23:50:37.901337 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 23:50:37.901345 kernel: thunder_xcv, ver 1.0
May 15 23:50:37.901352 kernel: thunder_bgx, ver 1.0
May 15 23:50:37.901359 kernel: nicpf, ver 1.0
May 15 23:50:37.901366 kernel: nicvf, ver 1.0
May 15 23:50:37.901468 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 15 23:50:37.901544 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-15T23:50:37 UTC (1747353037)
May 15 23:50:37.901564 kernel: hid: raw HID events driver (C) Jiri Kosina
May 15 23:50:37.901571 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 15 23:50:37.901582 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 15 23:50:37.901589 kernel: watchdog: Hard watchdog permanently disabled
May 15 23:50:37.901596 kernel: NET: Registered PF_INET6 protocol family
May 15 23:50:37.901603 kernel: Segment Routing with IPv6
May 15 23:50:37.901611 kernel: In-situ OAM (IOAM) with IPv6
May 15 23:50:37.901618 kernel: NET: Registered PF_PACKET protocol family
May 15 23:50:37.901625 kernel: Key type dns_resolver registered
May 15 23:50:37.901632 kernel: registered taskstats version 1
May 15 23:50:37.901639 kernel: Loading compiled-in X.509 certificates
May 15 23:50:37.901648 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: e154392c6c82d5b76378b9fde09118c15fcd54b1'
May 15 23:50:37.901655 kernel: Key type .fscrypt registered
May 15 23:50:37.901662 kernel: Key type fscrypt-provisioning registered
May 15 23:50:37.901670 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 23:50:37.901677 kernel: ima: Allocated hash algorithm: sha1
May 15 23:50:37.901684 kernel: ima: No architecture policies found
May 15 23:50:37.901691 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 15 23:50:37.901698 kernel: clk: Disabling unused clocks
May 15 23:50:37.901707 kernel: Freeing unused kernel memory: 38336K
May 15 23:50:37.901714 kernel: Run /init as init process
May 15 23:50:37.901721 kernel: with arguments:
May 15 23:50:37.901728 kernel: /init
May 15 23:50:37.901735 kernel: with environment:
May 15 23:50:37.901742 kernel: HOME=/
May 15 23:50:37.901749 kernel: TERM=linux
May 15 23:50:37.901756 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 23:50:37.901764 systemd[1]: Successfully made /usr/ read-only.
May 15 23:50:37.901775 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 15 23:50:37.901784 systemd[1]: Detected virtualization kvm.
May 15 23:50:37.901791 systemd[1]: Detected architecture arm64.
May 15 23:50:37.901798 systemd[1]: Running in initrd.
May 15 23:50:37.901806 systemd[1]: No hostname configured, using default hostname.
May 15 23:50:37.901814 systemd[1]: Hostname set to .
May 15 23:50:37.901821 systemd[1]: Initializing machine ID from VM UUID.
May 15 23:50:37.901830 systemd[1]: Queued start job for default target initrd.target.
May 15 23:50:37.901838 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 23:50:37.901846 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 23:50:37.901854 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 15 23:50:37.901862 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 23:50:37.901870 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 15 23:50:37.901878 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 15 23:50:37.901888 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 15 23:50:37.901896 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 15 23:50:37.901904 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 23:50:37.901916 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 23:50:37.901924 systemd[1]: Reached target paths.target - Path Units.
May 15 23:50:37.901932 systemd[1]: Reached target slices.target - Slice Units.
May 15 23:50:37.901939 systemd[1]: Reached target swap.target - Swaps.
May 15 23:50:37.901947 systemd[1]: Reached target timers.target - Timer Units.
May 15 23:50:37.901954 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 15 23:50:37.901963 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 23:50:37.901971 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 15 23:50:37.901979 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 15 23:50:37.901987 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 23:50:37.901994 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 23:50:37.902002 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 23:50:37.902010 systemd[1]: Reached target sockets.target - Socket Units.
May 15 23:50:37.902018 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 15 23:50:37.902028 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 23:50:37.902036 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 15 23:50:37.902044 systemd[1]: Starting systemd-fsck-usr.service...
May 15 23:50:37.902052 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 23:50:37.902059 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 23:50:37.902067 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 23:50:37.902075 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 15 23:50:37.902082 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 23:50:37.902092 systemd[1]: Finished systemd-fsck-usr.service.
May 15 23:50:37.902124 systemd-journald[239]: Collecting audit messages is disabled.
May 15 23:50:37.902146 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 23:50:37.902154 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 23:50:37.902162 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 23:50:37.902169 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 15 23:50:37.902177 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 23:50:37.902186 systemd-journald[239]: Journal started
May 15 23:50:37.902206 systemd-journald[239]: Runtime Journal (/run/log/journal/7ed41866ee6d4300960b184a2f462980) is 5.9M, max 47.3M, 41.4M free.
May 15 23:50:37.884492 systemd-modules-load[240]: Inserted module 'overlay'
May 15 23:50:37.905471 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 23:50:37.905518 kernel: Bridge firewalling registered
May 15 23:50:37.905723 systemd-modules-load[240]: Inserted module 'br_netfilter'
May 15 23:50:37.906832 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 23:50:37.911036 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 23:50:37.914853 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 23:50:37.918685 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 23:50:37.922120 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 23:50:37.925178 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 23:50:37.926998 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 23:50:37.930034 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 15 23:50:37.933515 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 23:50:37.936820 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 23:50:37.946972 dracut-cmdline[276]: dracut-dracut-053
May 15 23:50:37.949614 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=653f7594d0757b3475730807d14614dbb7ee20dc5b8481e45608a3fadcc6677a
May 15 23:50:37.971496 systemd-resolved[280]: Positive Trust Anchors:
May 15 23:50:37.971516 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 23:50:37.971556 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 23:50:37.976426 systemd-resolved[280]: Defaulting to hostname 'linux'.
May 15 23:50:37.977820 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 23:50:37.979357 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 23:50:38.025468 kernel: SCSI subsystem initialized
May 15 23:50:38.029458 kernel: Loading iSCSI transport class v2.0-870.
May 15 23:50:38.039477 kernel: iscsi: registered transport (tcp)
May 15 23:50:38.053766 kernel: iscsi: registered transport (qla4xxx)
May 15 23:50:38.053816 kernel: QLogic iSCSI HBA Driver
May 15 23:50:38.098585 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 15 23:50:38.110692 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 15 23:50:38.128710 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 15 23:50:38.128775 kernel: device-mapper: uevent: version 1.0.3
May 15 23:50:38.129954 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 15 23:50:38.180482 kernel: raid6: neonx8 gen() 15657 MB/s
May 15 23:50:38.194459 kernel: raid6: neonx4 gen() 4943 MB/s
May 15 23:50:38.211456 kernel: raid6: neonx2 gen() 11220 MB/s
May 15 23:50:38.228466 kernel: raid6: neonx1 gen() 10478 MB/s
May 15 23:50:38.245457 kernel: raid6: int64x8 gen() 6377 MB/s
May 15 23:50:38.262450 kernel: raid6: int64x4 gen() 7166 MB/s
May 15 23:50:38.279457 kernel: raid6: int64x2 gen() 5983 MB/s
May 15 23:50:38.296458 kernel: raid6: int64x1 gen() 4858 MB/s
May 15 23:50:38.296474 kernel: raid6: using algorithm neonx8 gen() 15657 MB/s
May 15 23:50:38.313455 kernel: raid6: .... xor() 11973 MB/s, rmw enabled
May 15 23:50:38.313470 kernel: raid6: using neon recovery algorithm
May 15 23:50:38.318455 kernel: xor: measuring software checksum speed
May 15 23:50:38.318475 kernel: 8regs : 21624 MB/sec
May 15 23:50:38.319879 kernel: 32regs : 20138 MB/sec
May 15 23:50:38.319898 kernel: arm64_neon : 27927 MB/sec
May 15 23:50:38.319907 kernel: xor: using function: arm64_neon (27927 MB/sec)
May 15 23:50:38.368602 kernel: Btrfs loaded, zoned=no, fsverity=no
May 15 23:50:38.379214 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 15 23:50:38.392623 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 23:50:38.406183 systemd-udevd[462]: Using default interface naming scheme 'v255'.
May 15 23:50:38.409922 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 23:50:38.420657 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 15 23:50:38.433227 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
May 15 23:50:38.464494 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 23:50:38.477624 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 23:50:38.520850 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 23:50:38.530680 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 15 23:50:38.542489 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 15 23:50:38.543813 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 23:50:38.544838 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 23:50:38.546368 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 23:50:38.555631 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 15 23:50:38.566489 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 15 23:50:38.576365 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 15 23:50:38.576584 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 15 23:50:38.582851 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 23:50:38.586230 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 15 23:50:38.586256 kernel: GPT:9289727 != 19775487
May 15 23:50:38.586266 kernel: GPT:Alternate GPT header not at the end of the disk.
May 15 23:50:38.586275 kernel: GPT:9289727 != 19775487
May 15 23:50:38.586293 kernel: GPT: Use GNU Parted to correct GPT errors.
May 15 23:50:38.586303 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 23:50:38.582983 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 23:50:38.588347 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 23:50:38.589208 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 23:50:38.589358 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 23:50:38.592714 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 15 23:50:38.603529 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 23:50:38.606896 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (518)
May 15 23:50:38.606923 kernel: BTRFS: device fsid 54e9b26c-64b1-452f-8761-0591b921cd73 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (509)
May 15 23:50:38.623343 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 15 23:50:38.624706 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 23:50:38.641028 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 15 23:50:38.654168 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 15 23:50:38.660986 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 15 23:50:38.662017 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 15 23:50:38.682645 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 15 23:50:38.684794 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 23:50:38.689248 disk-uuid[555]: Primary Header is updated.
May 15 23:50:38.689248 disk-uuid[555]: Secondary Entries is updated.
May 15 23:50:38.689248 disk-uuid[555]: Secondary Header is updated.
May 15 23:50:38.694642 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 23:50:38.702430 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 23:50:39.706025 disk-uuid[556]: The operation has completed successfully.
May 15 23:50:39.707068 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 23:50:39.728749 systemd[1]: disk-uuid.service: Deactivated successfully.
May 15 23:50:39.728855 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 15 23:50:39.770626 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 15 23:50:39.773678 sh[576]: Success
May 15 23:50:39.789850 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 15 23:50:39.831547 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 15 23:50:39.833461 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 15 23:50:39.834385 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 15 23:50:39.852110 kernel: BTRFS info (device dm-0): first mount of filesystem 54e9b26c-64b1-452f-8761-0591b921cd73
May 15 23:50:39.852161 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 15 23:50:39.852172 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 15 23:50:39.852182 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 15 23:50:39.853454 kernel: BTRFS info (device dm-0): using free space tree
May 15 23:50:39.859978 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 15 23:50:39.861174 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 15 23:50:39.874630 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 15 23:50:39.876118 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 15 23:50:39.890667 kernel: BTRFS info (device vda6): first mount of filesystem 07b3022d-6f0a-4ed7-9b55-ca4cb78ef0c2
May 15 23:50:39.890725 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 15 23:50:39.890736 kernel: BTRFS info (device vda6): using free space tree
May 15 23:50:39.893474 kernel: BTRFS info (device vda6): auto enabling async discard
May 15 23:50:39.897482 kernel: BTRFS info (device vda6): last unmount of filesystem 07b3022d-6f0a-4ed7-9b55-ca4cb78ef0c2
May 15 23:50:39.902511 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 15 23:50:39.910660 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 15 23:50:39.975600 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 23:50:39.984659 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 23:50:40.009857 ignition[664]: Ignition 2.20.0
May 15 23:50:40.009867 ignition[664]: Stage: fetch-offline
May 15 23:50:40.009906 ignition[664]: no configs at "/usr/lib/ignition/base.d"
May 15 23:50:40.009915 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 23:50:40.010091 ignition[664]: parsed url from cmdline: ""
May 15 23:50:40.010094 ignition[664]: no config URL provided
May 15 23:50:40.010099 ignition[664]: reading system config file "/usr/lib/ignition/user.ign"
May 15 23:50:40.010106 ignition[664]: no config at "/usr/lib/ignition/user.ign"
May 15 23:50:40.014488 systemd-networkd[768]: lo: Link UP
May 15 23:50:40.010130 ignition[664]: op(1): [started] loading QEMU firmware config module
May 15 23:50:40.014492 systemd-networkd[768]: lo: Gained carrier
May 15 23:50:40.010153 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 15 23:50:40.015309 systemd-networkd[768]: Enumeration completed
May 15 23:50:40.016054 ignition[664]: op(1): [finished] loading QEMU firmware config module
May 15 23:50:40.015534 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 23:50:40.016339 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 23:50:40.016342 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 23:50:40.018248 systemd[1]: Reached target network.target - Network.
May 15 23:50:40.019588 systemd-networkd[768]: eth0: Link UP
May 15 23:50:40.019592 systemd-networkd[768]: eth0: Gained carrier
May 15 23:50:40.019600 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 23:50:40.039525 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 15 23:50:40.062967 ignition[664]: parsing config with SHA512: e3f0d1370bd0bc89c348cbd23bdd402214f57755e33214ac2c2ec7db98efd5e02220972f50e50f503ad9b4ecc1134d8b5607922ec160fee8d64a37935ff28031
May 15 23:50:40.068418 unknown[664]: fetched base config from "system"
May 15 23:50:40.068428 unknown[664]: fetched user config from "qemu"
May 15 23:50:40.071598 ignition[664]: fetch-offline: fetch-offline passed
May 15 23:50:40.071733 ignition[664]: Ignition finished successfully
May 15 23:50:40.074093 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 23:50:40.075135 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 15 23:50:40.080646 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 15 23:50:40.092914 ignition[776]: Ignition 2.20.0
May 15 23:50:40.092925 ignition[776]: Stage: kargs
May 15 23:50:40.093097 ignition[776]: no configs at "/usr/lib/ignition/base.d"
May 15 23:50:40.093107 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 23:50:40.094121 ignition[776]: kargs: kargs passed
May 15 23:50:40.094169 ignition[776]: Ignition finished successfully
May 15 23:50:40.096962 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 15 23:50:40.105598 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 15 23:50:40.116269 ignition[785]: Ignition 2.20.0
May 15 23:50:40.116280 ignition[785]: Stage: disks
May 15 23:50:40.116463 ignition[785]: no configs at "/usr/lib/ignition/base.d"
May 15 23:50:40.116474 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 23:50:40.117351 ignition[785]: disks: disks passed
May 15 23:50:40.118782 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 15 23:50:40.117397 ignition[785]: Ignition finished successfully
May 15 23:50:40.120131 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 15 23:50:40.121253 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 15 23:50:40.122566 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 23:50:40.123892 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 23:50:40.125309 systemd[1]: Reached target basic.target - Basic System.
May 15 23:50:40.132663 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 15 23:50:40.143365 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 15 23:50:40.147264 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 15 23:50:40.155614 systemd[1]: Mounting sysroot.mount - /sysroot...
May 15 23:50:40.199458 kernel: EXT4-fs (vda9): mounted filesystem cc821478-533f-4f50-b91e-e741e8a356b8 r/w with ordered data mode. Quota mode: none.
May 15 23:50:40.199667 systemd[1]: Mounted sysroot.mount - /sysroot.
May 15 23:50:40.200765 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 15 23:50:40.209575 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 23:50:40.211563 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 15 23:50:40.212358 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 15 23:50:40.212403 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 15 23:50:40.212429 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 23:50:40.217867 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 15 23:50:40.219802 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 15 23:50:40.223561 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (804)
May 15 23:50:40.223583 kernel: BTRFS info (device vda6): first mount of filesystem 07b3022d-6f0a-4ed7-9b55-ca4cb78ef0c2
May 15 23:50:40.223601 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 15 23:50:40.223611 kernel: BTRFS info (device vda6): using free space tree
May 15 23:50:40.226449 kernel: BTRFS info (device vda6): auto enabling async discard
May 15 23:50:40.226843 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 23:50:40.262842 initrd-setup-root[828]: cut: /sysroot/etc/passwd: No such file or directory
May 15 23:50:40.267139 initrd-setup-root[835]: cut: /sysroot/etc/group: No such file or directory
May 15 23:50:40.271413 initrd-setup-root[842]: cut: /sysroot/etc/shadow: No such file or directory
May 15 23:50:40.275577 initrd-setup-root[849]: cut: /sysroot/etc/gshadow: No such file or directory
May 15 23:50:40.352159 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 15 23:50:40.359627 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 15 23:50:40.362180 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 15 23:50:40.365458 kernel: BTRFS info (device vda6): last unmount of filesystem 07b3022d-6f0a-4ed7-9b55-ca4cb78ef0c2
May 15 23:50:40.383487 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 15 23:50:40.385171 ignition[917]: INFO : Ignition 2.20.0
May 15 23:50:40.385171 ignition[917]: INFO : Stage: mount
May 15 23:50:40.385171 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 23:50:40.385171 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 23:50:40.385171 ignition[917]: INFO : mount: mount passed
May 15 23:50:40.385171 ignition[917]: INFO : Ignition finished successfully
May 15 23:50:40.386395 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 15 23:50:40.393559 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 15 23:50:40.986899 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 15 23:50:40.997678 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 23:50:41.004094 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (932)
May 15 23:50:41.004128 kernel: BTRFS info (device vda6): first mount of filesystem 07b3022d-6f0a-4ed7-9b55-ca4cb78ef0c2
May 15 23:50:41.004139 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 15 23:50:41.004753 kernel: BTRFS info (device vda6): using free space tree
May 15 23:50:41.007456 kernel: BTRFS info (device vda6): auto enabling async discard
May 15 23:50:41.008382 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 23:50:41.023690 ignition[949]: INFO : Ignition 2.20.0
May 15 23:50:41.023690 ignition[949]: INFO : Stage: files
May 15 23:50:41.025215 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 23:50:41.025215 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 23:50:41.025215 ignition[949]: DEBUG : files: compiled without relabeling support, skipping
May 15 23:50:41.028688 ignition[949]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 15 23:50:41.028688 ignition[949]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 15 23:50:41.028688 ignition[949]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 15 23:50:41.028688 ignition[949]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 15 23:50:41.028688 ignition[949]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 15 23:50:41.028056 unknown[949]: wrote ssh authorized keys file for user: core
May 15 23:50:41.035958 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 15 23:50:41.035958 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 15 23:50:41.082299 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 15 23:50:41.217043 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 15 23:50:41.217043 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 23:50:41.220066 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 15 23:50:41.407613 systemd-networkd[768]: eth0: Gained IPv6LL
May 15 23:50:41.541453 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 15 23:50:41.614824 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 23:50:41.616290 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 15 23:50:41.616290 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 15 23:50:41.616290 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 15 23:50:41.616290 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 15 23:50:41.616290 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 23:50:41.616290 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 23:50:41.616290 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 23:50:41.616290 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 23:50:41.616290 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 15 23:50:41.616290 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 15 23:50:41.616290 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 15 23:50:41.616290 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 15 23:50:41.616290 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 15 23:50:41.616290 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
May 15 23:50:42.028546 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 15 23:50:42.478683 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 15 23:50:42.478683 ignition[949]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 15 23:50:42.481331 ignition[949]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 23:50:42.481331 ignition[949]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 23:50:42.481331 ignition[949]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 15 23:50:42.481331 ignition[949]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 15 23:50:42.481331 ignition[949]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 15 23:50:42.481331 ignition[949]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 15 23:50:42.481331 ignition[949]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 15 23:50:42.481331 ignition[949]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 15 23:50:42.498588 ignition[949]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 15 23:50:42.504139 ignition[949]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 15 23:50:42.505263 ignition[949]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 15 23:50:42.505263 ignition[949]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 15 23:50:42.505263 ignition[949]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 15 23:50:42.505263 ignition[949]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 15 23:50:42.505263 ignition[949]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 15 23:50:42.505263 ignition[949]: INFO : files: files passed
May 15 23:50:42.505263 ignition[949]: INFO : Ignition finished successfully
May 15 23:50:42.505972 systemd[1]: Finished ignition-files.service - Ignition (files).
May 15 23:50:42.512659 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 15 23:50:42.515343 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 15 23:50:42.516827 systemd[1]: ignition-quench.service: Deactivated successfully.
May 15 23:50:42.516919 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 15 23:50:42.522961 initrd-setup-root-after-ignition[977]: grep: /sysroot/oem/oem-release: No such file or directory
May 15 23:50:42.526380 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 23:50:42.526380 initrd-setup-root-after-ignition[979]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 15 23:50:42.528726 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 23:50:42.528633 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 23:50:42.530029 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 15 23:50:42.537663 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 15 23:50:42.558334 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 15 23:50:42.558568 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 15 23:50:42.560157 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 15 23:50:42.561400 systemd[1]: Reached target initrd.target - Initrd Default Target. May 15 23:50:42.562733 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 15 23:50:42.563610 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 15 23:50:42.579517 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 23:50:42.587619 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 15 23:50:42.595717 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 15 23:50:42.597364 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 23:50:42.598359 systemd[1]: Stopped target timers.target - Timer Units. May 15 23:50:42.599673 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 23:50:42.599809 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 23:50:42.601616 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 15 23:50:42.603184 systemd[1]: Stopped target basic.target - Basic System. May 15 23:50:42.604362 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 15 23:50:42.605635 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 15 23:50:42.607032 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 15 23:50:42.608446 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 15 23:50:42.609827 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 15 23:50:42.611270 systemd[1]: Stopped target sysinit.target - System Initialization. May 15 23:50:42.612707 systemd[1]: Stopped target local-fs.target - Local File Systems. May 15 23:50:42.613968 systemd[1]: Stopped target swap.target - Swaps. May 15 23:50:42.615061 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 23:50:42.615196 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 15 23:50:42.616892 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 15 23:50:42.618283 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 23:50:42.619684 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 15 23:50:42.621160 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 23:50:42.622108 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 23:50:42.622237 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 15 23:50:42.624326 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 23:50:42.624471 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 15 23:50:42.625908 systemd[1]: Stopped target paths.target - Path Units. May 15 23:50:42.627031 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 23:50:42.631516 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 23:50:42.632476 systemd[1]: Stopped target slices.target - Slice Units. May 15 23:50:42.634034 systemd[1]: Stopped target sockets.target - Socket Units. May 15 23:50:42.635245 systemd[1]: iscsid.socket: Deactivated successfully. 
May 15 23:50:42.635340 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 15 23:50:42.636447 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 23:50:42.636539 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 23:50:42.637662 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 23:50:42.637783 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 23:50:42.639045 systemd[1]: ignition-files.service: Deactivated successfully. May 15 23:50:42.639155 systemd[1]: Stopped ignition-files.service - Ignition (files). May 15 23:50:42.652662 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 15 23:50:42.653338 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 23:50:42.653500 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 15 23:50:42.655821 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 15 23:50:42.657108 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 23:50:42.657255 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 15 23:50:42.658563 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 23:50:42.658664 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 15 23:50:42.664251 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 23:50:42.664355 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 15 23:50:42.666680 ignition[1003]: INFO : Ignition 2.20.0 May 15 23:50:42.666680 ignition[1003]: INFO : Stage: umount May 15 23:50:42.666680 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 23:50:42.666680 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:50:42.669871 ignition[1003]: INFO : umount: umount passed May 15 23:50:42.669871 ignition[1003]: INFO : Ignition finished successfully May 15 23:50:42.669709 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 23:50:42.669837 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 15 23:50:42.671676 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 23:50:42.672086 systemd[1]: Stopped target network.target - Network. May 15 23:50:42.673465 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 23:50:42.673536 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 15 23:50:42.674866 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 23:50:42.674908 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 15 23:50:42.676079 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 23:50:42.676115 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 15 23:50:42.677240 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 15 23:50:42.677277 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 15 23:50:42.678744 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 15 23:50:42.681008 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 15 23:50:42.685000 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 23:50:42.685113 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 15 23:50:42.689352 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. 
May 15 23:50:42.689728 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 15 23:50:42.689768 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 23:50:42.692854 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 15 23:50:42.693077 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 23:50:42.693204 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 15 23:50:42.696063 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 15 23:50:42.696578 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 23:50:42.696643 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 15 23:50:42.711577 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 15 23:50:42.712241 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 23:50:42.712310 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 23:50:42.713780 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 23:50:42.713825 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 23:50:42.716319 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 23:50:42.716395 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 15 23:50:42.717846 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 23:50:42.720880 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 23:50:42.727627 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 23:50:42.727779 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 15 23:50:42.739975 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 23:50:42.740125 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 23:50:42.741785 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 23:50:42.741859 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 15 23:50:42.743683 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 23:50:42.743747 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 15 23:50:42.745391 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 23:50:42.745424 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 15 23:50:42.746838 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 23:50:42.746886 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 15 23:50:42.749070 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 23:50:42.749113 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 15 23:50:42.751310 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 23:50:42.751362 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 23:50:42.753650 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 23:50:42.753699 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 15 23:50:42.770624 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 15 23:50:42.771479 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
May 15 23:50:42.771553 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 23:50:42.774059 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 23:50:42.774104 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:50:42.776987 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 23:50:42.777103 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 15 23:50:42.778789 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 15 23:50:42.780801 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 15 23:50:42.790412 systemd[1]: Switching root. May 15 23:50:42.813692 systemd-journald[239]: Journal stopped May 15 23:50:43.555998 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). May 15 23:50:43.556060 kernel: SELinux: policy capability network_peer_controls=1 May 15 23:50:43.556071 kernel: SELinux: policy capability open_perms=1 May 15 23:50:43.556082 kernel: SELinux: policy capability extended_socket_class=1 May 15 23:50:43.556091 kernel: SELinux: policy capability always_check_network=0 May 15 23:50:43.556101 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 23:50:43.556110 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 23:50:43.556119 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 23:50:43.556128 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 23:50:43.556138 kernel: audit: type=1403 audit(1747353042.971:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 23:50:43.556158 systemd[1]: Successfully loaded SELinux policy in 30.950ms. May 15 23:50:43.556179 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.049ms. May 15 23:50:43.556190 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 23:50:43.556201 systemd[1]: Detected virtualization kvm. May 15 23:50:43.556211 systemd[1]: Detected architecture arm64. May 15 23:50:43.556221 systemd[1]: Detected first boot. May 15 23:50:43.556231 systemd[1]: Initializing machine ID from VM UUID. May 15 23:50:43.556242 zram_generator::config[1050]: No configuration found. May 15 23:50:43.556254 kernel: NET: Registered PF_VSOCK protocol family May 15 23:50:43.556264 systemd[1]: Populated /etc with preset unit settings. May 15 23:50:43.556274 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 15 23:50:43.556284 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 15 23:50:43.556295 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 15 23:50:43.556306 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 15 23:50:43.556316 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 15 23:50:43.556327 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 15 23:50:43.556337 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 15 23:50:43.556350 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
May 15 23:50:43.556360 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 15 23:50:43.556370 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 15 23:50:43.556393 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 15 23:50:43.556404 systemd[1]: Created slice user.slice - User and Session Slice. May 15 23:50:43.556415 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 23:50:43.556425 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 23:50:43.556562 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 15 23:50:43.556590 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 15 23:50:43.556602 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 15 23:50:43.556612 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 23:50:43.556623 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 15 23:50:43.556633 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 23:50:43.556643 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 15 23:50:43.556654 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 15 23:50:43.556664 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 15 23:50:43.556677 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 15 23:50:43.556687 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 23:50:43.556697 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 23:50:43.556707 systemd[1]: Reached target slices.target - Slice Units. May 15 23:50:43.556717 systemd[1]: Reached target swap.target - Swaps. May 15 23:50:43.556727 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 15 23:50:43.556737 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 15 23:50:43.556747 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 15 23:50:43.556757 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 23:50:43.556774 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 23:50:43.556785 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 23:50:43.556795 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 15 23:50:43.556805 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 15 23:50:43.556815 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 15 23:50:43.556825 systemd[1]: Mounting media.mount - External Media Directory... May 15 23:50:43.556835 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 15 23:50:43.556845 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 15 23:50:43.556855 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
May 15 23:50:43.556867 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 23:50:43.556877 systemd[1]: Reached target machines.target - Containers. May 15 23:50:43.556887 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 15 23:50:43.556900 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 23:50:43.556910 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 23:50:43.556920 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 15 23:50:43.556931 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 23:50:43.556941 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 23:50:43.556953 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 23:50:43.556963 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 15 23:50:43.556973 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 23:50:43.556983 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 23:50:43.556993 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 23:50:43.557003 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 15 23:50:43.557013 kernel: loop: module loaded May 15 23:50:43.557022 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 23:50:43.557032 kernel: fuse: init (API version 7.39) May 15 23:50:43.557043 systemd[1]: Stopped systemd-fsck-usr.service. May 15 23:50:43.557053 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 23:50:43.557063 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 23:50:43.557073 kernel: ACPI: bus type drm_connector registered May 15 23:50:43.557083 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 23:50:43.557093 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 23:50:43.557103 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 15 23:50:43.557139 systemd-journald[1118]: Collecting audit messages is disabled. May 15 23:50:43.557166 systemd-journald[1118]: Journal started May 15 23:50:43.557188 systemd-journald[1118]: Runtime Journal (/run/log/journal/7ed41866ee6d4300960b184a2f462980) is 5.9M, max 47.3M, 41.4M free. May 15 23:50:43.377360 systemd[1]: Queued start job for default target multi-user.target. May 15 23:50:43.392286 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 15 23:50:43.392702 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 23:50:43.559766 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 15 23:50:43.561987 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 23:50:43.563785 systemd[1]: verity-setup.service: Deactivated successfully. 
May 15 23:50:43.563841 systemd[1]: Stopped verity-setup.service. May 15 23:50:43.568625 systemd[1]: Started systemd-journald.service - Journal Service. May 15 23:50:43.568929 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 15 23:50:43.569856 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 15 23:50:43.570787 systemd[1]: Mounted media.mount - External Media Directory. May 15 23:50:43.571605 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 15 23:50:43.572496 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 15 23:50:43.573505 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 15 23:50:43.576476 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 15 23:50:43.577656 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 23:50:43.578834 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 23:50:43.579004 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 15 23:50:43.580147 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 23:50:43.580307 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 23:50:43.581805 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 23:50:43.581976 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 23:50:43.583204 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 23:50:43.583361 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 23:50:43.584826 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 23:50:43.586529 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 15 23:50:43.587579 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 23:50:43.587734 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 23:50:43.588944 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 23:50:43.590207 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 23:50:43.591434 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 15 23:50:43.592628 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 15 23:50:43.605604 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 23:50:43.611607 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 15 23:50:43.613554 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 15 23:50:43.614362 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 23:50:43.614394 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 23:50:43.616166 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 15 23:50:43.618223 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 15 23:50:43.620252 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 15 23:50:43.621242 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 15 23:50:43.623793 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 15 23:50:43.626585 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 15 23:50:43.627678 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 23:50:43.630190 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 15 23:50:43.631588 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 23:50:43.633672 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 23:50:43.638690 systemd-journald[1118]: Time spent on flushing to /var/log/journal/7ed41866ee6d4300960b184a2f462980 is 23.376ms for 869 entries. May 15 23:50:43.638690 systemd-journald[1118]: System Journal (/var/log/journal/7ed41866ee6d4300960b184a2f462980) is 8M, max 195.6M, 187.6M free. May 15 23:50:43.668450 systemd-journald[1118]: Received client request to flush runtime journal. May 15 23:50:43.668493 kernel: loop0: detected capacity change from 0 to 113512 May 15 23:50:43.636644 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 15 23:50:43.644669 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 15 23:50:43.647709 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 23:50:43.649257 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 15 23:50:43.650699 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 15 23:50:43.652183 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 15 23:50:43.655485 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 15 23:50:43.662818 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 23:50:43.664717 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 15 23:50:43.667638 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 15 23:50:43.671729 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 15 23:50:43.674633 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 15 23:50:43.677504 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 23:50:43.696664 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 15 23:50:43.702840 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 15 23:50:43.708549 kernel: loop1: detected capacity change from 0 to 123192 May 15 23:50:43.718791 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 23:50:43.720119 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 15 23:50:43.738052 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. May 15 23:50:43.738068 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. May 15 23:50:43.743873 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
May 15 23:50:43.752472 kernel: loop2: detected capacity change from 0 to 203944 May 15 23:50:43.792461 kernel: loop3: detected capacity change from 0 to 113512 May 15 23:50:43.797455 kernel: loop4: detected capacity change from 0 to 123192 May 15 23:50:43.802466 kernel: loop5: detected capacity change from 0 to 203944 May 15 23:50:43.808029 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 15 23:50:43.808470 (sd-merge)[1191]: Merged extensions into '/usr'. May 15 23:50:43.811834 systemd[1]: Reload requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)... May 15 23:50:43.811850 systemd[1]: Reloading... May 15 23:50:43.876526 zram_generator::config[1221]: No configuration found. May 15 23:50:43.922362 ldconfig[1162]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 23:50:43.970412 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 23:50:44.026245 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 23:50:44.026632 systemd[1]: Reloading finished in 214 ms. May 15 23:50:44.041261 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 15 23:50:44.042768 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 15 23:50:44.053960 systemd[1]: Starting ensure-sysext.service... May 15 23:50:44.055742 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 23:50:44.074086 systemd[1]: Reload requested from client PID 1254 ('systemctl') (unit ensure-sysext.service)... May 15 23:50:44.074101 systemd[1]: Reloading... May 15 23:50:44.075485 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 23:50:44.076044 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 15 23:50:44.076831 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 23:50:44.077158 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. May 15 23:50:44.077278 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. May 15 23:50:44.084609 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot. May 15 23:50:44.084758 systemd-tmpfiles[1255]: Skipping /boot May 15 23:50:44.093826 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot. May 15 23:50:44.093967 systemd-tmpfiles[1255]: Skipping /boot May 15 23:50:44.132464 zram_generator::config[1284]: No configuration found. May 15 23:50:44.218632 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 23:50:44.274679 systemd[1]: Reloading finished in 200 ms. May 15 23:50:44.288204 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 15 23:50:44.306507 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 23:50:44.314396 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
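
The sd-merge messages above show systemd-sysext activating the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images into /usr. As a rough sketch of the general mechanism, assuming the images live under /etc/extensions (matching the kubernetes.raw link written by Ignition earlier) and using the real `systemd-sysext refresh` verb to re-merge, the flow is roughly:

    import subprocess
    from pathlib import Path

    # Illustrative only: list extension images and re-merge them into /usr.
    # Treating /etc/extensions as the only image directory is an assumption;
    # sysext also searches /run/extensions and /var/lib/extensions.
    for image in sorted(Path("/etc/extensions").glob("*.raw")):
        print("extension image:", image.name)

    # Unmerge and re-merge all discovered extensions.
    subprocess.run(["systemd-sysext", "refresh"], check=True)
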
May 15 23:50:44.316778 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 15 23:50:44.319086 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 15 23:50:44.322774 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 23:50:44.325820 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 23:50:44.331877 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 15 23:50:44.337261 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 23:50:44.339589 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 23:50:44.343542 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 23:50:44.347830 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 23:50:44.348774 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 23:50:44.349800 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 23:50:44.350759 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 23:50:44.351791 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 23:50:44.355213 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 23:50:44.355398 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 23:50:44.358787 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 23:50:44.358956 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 23:50:44.361820 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 15 23:50:44.366742 systemd-udevd[1325]: Using default interface naming scheme 'v255'. May 15 23:50:44.374696 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 23:50:44.384958 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 23:50:44.390728 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 23:50:44.397871 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 23:50:44.399403 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 23:50:44.399660 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 23:50:44.403324 augenrules[1371]: No rules May 15 23:50:44.403663 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 15 23:50:44.407765 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 15 23:50:44.414236 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 23:50:44.417174 systemd[1]: audit-rules.service: Deactivated successfully. 
May 15 23:50:44.417371 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 23:50:44.419317 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 15 23:50:44.422593 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 15 23:50:44.425764 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 23:50:44.425942 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 23:50:44.427409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 23:50:44.427607 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 23:50:44.430014 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 23:50:44.430584 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 23:50:44.438596 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 15 23:50:44.454231 systemd[1]: Finished ensure-sysext.service. May 15 23:50:44.457460 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1361) May 15 23:50:44.458065 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 15 23:50:44.460051 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 15 23:50:44.474740 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 23:50:44.475577 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 23:50:44.478756 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 23:50:44.485104 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 23:50:44.487197 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 23:50:44.489681 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 23:50:44.492314 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 23:50:44.492367 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 23:50:44.495847 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 23:50:44.501086 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 15 23:50:44.501947 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 23:50:44.506080 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 23:50:44.507900 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 23:50:44.509430 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 23:50:44.509823 augenrules[1397]: /sbin/augenrules: No change May 15 23:50:44.509814 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 23:50:44.511069 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 23:50:44.511249 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 23:50:44.512544 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 15 23:50:44.514586 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 23:50:44.522596 augenrules[1425]: No rules May 15 23:50:44.523435 systemd[1]: audit-rules.service: Deactivated successfully. May 15 23:50:44.524629 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 23:50:44.536542 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 23:50:44.542534 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 15 23:50:44.544798 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 23:50:44.544888 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 23:50:44.552153 systemd-resolved[1324]: Positive Trust Anchors: May 15 23:50:44.552171 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 23:50:44.552202 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 23:50:44.567015 systemd-resolved[1324]: Defaulting to hostname 'linux'. May 15 23:50:44.568347 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 15 23:50:44.572387 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 23:50:44.573331 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 23:50:44.614570 systemd-networkd[1410]: lo: Link UP May 15 23:50:44.614583 systemd-networkd[1410]: lo: Gained carrier May 15 23:50:44.615869 systemd-networkd[1410]: Enumeration completed May 15 23:50:44.616328 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 23:50:44.616339 systemd-networkd[1410]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 23:50:44.617596 systemd-networkd[1410]: eth0: Link UP May 15 23:50:44.617609 systemd-networkd[1410]: eth0: Gained carrier May 15 23:50:44.617625 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 23:50:44.623906 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:50:44.624875 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 23:50:44.626362 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 15 23:50:44.632502 systemd-networkd[1410]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 23:50:44.632503 systemd[1]: Reached target network.target - Network. May 15 23:50:44.633091 systemd-timesyncd[1413]: Network configuration changed, trying to establish connection. May 15 23:50:44.633204 systemd[1]: Reached target time-set.target - System Time Set. 
May 15 23:50:44.634214 systemd-timesyncd[1413]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 15 23:50:44.634331 systemd-timesyncd[1413]: Initial clock synchronization to Thu 2025-05-15 23:50:44.370144 UTC. May 15 23:50:44.642623 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 15 23:50:44.644835 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 15 23:50:44.646355 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 15 23:50:44.651812 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 15 23:50:44.659873 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 15 23:50:44.664194 lvm[1449]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 23:50:44.667791 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:50:44.694024 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 15 23:50:44.695240 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 23:50:44.697520 systemd[1]: Reached target sysinit.target - System Initialization. May 15 23:50:44.698369 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 15 23:50:44.699323 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 23:50:44.700476 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 23:50:44.701359 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 15 23:50:44.702343 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 15 23:50:44.703304 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 23:50:44.703341 systemd[1]: Reached target paths.target - Path Units. May 15 23:50:44.704047 systemd[1]: Reached target timers.target - Timer Units. May 15 23:50:44.705392 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 15 23:50:44.707878 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 23:50:44.711496 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 15 23:50:44.712677 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 15 23:50:44.713674 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 15 23:50:44.718595 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 23:50:44.720319 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 15 23:50:44.722673 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 15 23:50:44.724456 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 23:50:44.725341 systemd[1]: Reached target sockets.target - Socket Units. May 15 23:50:44.726160 systemd[1]: Reached target basic.target - Basic System. May 15 23:50:44.726923 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
May 15 23:50:44.726958 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 23:50:44.728075 systemd[1]: Starting containerd.service - containerd container runtime... May 15 23:50:44.730016 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 23:50:44.730261 lvm[1456]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 23:50:44.734634 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 23:50:44.736721 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 23:50:44.737751 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 23:50:44.741673 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 23:50:44.745703 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 23:50:44.748478 jq[1459]: false May 15 23:50:44.747701 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 15 23:50:44.751602 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 15 23:50:44.756080 systemd[1]: Starting systemd-logind.service - User Login Management... May 15 23:50:44.758671 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 23:50:44.759184 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 23:50:44.765772 dbus-daemon[1458]: [system] SELinux support is enabled May 15 23:50:44.770001 extend-filesystems[1460]: Found loop3 May 15 23:50:44.770001 extend-filesystems[1460]: Found loop4 May 15 23:50:44.770001 extend-filesystems[1460]: Found loop5 May 15 23:50:44.770001 extend-filesystems[1460]: Found vda May 15 23:50:44.770001 extend-filesystems[1460]: Found vda1 May 15 23:50:44.770001 extend-filesystems[1460]: Found vda2 May 15 23:50:44.770001 extend-filesystems[1460]: Found vda3 May 15 23:50:44.770001 extend-filesystems[1460]: Found usr May 15 23:50:44.770001 extend-filesystems[1460]: Found vda4 May 15 23:50:44.770001 extend-filesystems[1460]: Found vda6 May 15 23:50:44.770001 extend-filesystems[1460]: Found vda7 May 15 23:50:44.770001 extend-filesystems[1460]: Found vda9 May 15 23:50:44.770001 extend-filesystems[1460]: Checking size of /dev/vda9 May 15 23:50:44.768667 systemd[1]: Starting update-engine.service - Update Engine... May 15 23:50:44.772028 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 23:50:44.775407 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 23:50:44.781122 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 15 23:50:44.789876 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 23:50:44.790100 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 23:50:44.790431 systemd[1]: motdgen.service: Deactivated successfully. May 15 23:50:44.790652 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 15 23:50:44.795956 jq[1478]: true May 15 23:50:44.798900 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
May 15 23:50:44.799101 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 23:50:44.812708 extend-filesystems[1460]: Resized partition /dev/vda9 May 15 23:50:44.830379 jq[1484]: true May 15 23:50:44.830686 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1353) May 15 23:50:44.840344 update_engine[1472]: I20250515 23:50:44.840188 1472 main.cc:92] Flatcar Update Engine starting May 15 23:50:44.848149 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 15 23:50:44.848174 update_engine[1472]: I20250515 23:50:44.842939 1472 update_check_scheduler.cc:74] Next update check in 8m21s May 15 23:50:44.848203 extend-filesystems[1493]: resize2fs 1.47.1 (20-May-2024) May 15 23:50:44.848915 (ntainerd)[1483]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 23:50:44.854401 tar[1482]: linux-arm64/helm May 15 23:50:44.868174 systemd[1]: Started update-engine.service - Update Engine. May 15 23:50:44.869658 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 23:50:44.869733 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 23:50:44.870917 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 23:50:44.870943 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 23:50:44.877340 systemd-logind[1468]: Watching system buttons on /dev/input/event0 (Power Button) May 15 23:50:44.878021 systemd-logind[1468]: New seat seat0. May 15 23:50:44.882652 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 15 23:50:44.884034 systemd[1]: Started systemd-logind.service - User Login Management. May 15 23:50:44.892961 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 15 23:50:44.912525 extend-filesystems[1493]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 15 23:50:44.912525 extend-filesystems[1493]: old_desc_blocks = 1, new_desc_blocks = 1 May 15 23:50:44.912525 extend-filesystems[1493]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 15 23:50:44.917138 extend-filesystems[1460]: Resized filesystem in /dev/vda9 May 15 23:50:44.915909 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 23:50:44.916136 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 23:50:44.926142 bash[1512]: Updated "/home/core/.ssh/authorized_keys" May 15 23:50:44.927781 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 23:50:44.932092 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 15 23:50:44.951313 locksmithd[1502]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 23:50:45.058072 containerd[1483]: time="2025-05-15T23:50:45.057938588Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 15 23:50:45.088286 containerd[1483]: time="2025-05-15T23:50:45.088236071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 May 15 23:50:45.090308 containerd[1483]: time="2025-05-15T23:50:45.090101996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 15 23:50:45.090308 containerd[1483]: time="2025-05-15T23:50:45.090141449Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 15 23:50:45.090308 containerd[1483]: time="2025-05-15T23:50:45.090160248Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 15 23:50:45.090463 containerd[1483]: time="2025-05-15T23:50:45.090327616Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 15 23:50:45.090463 containerd[1483]: time="2025-05-15T23:50:45.090349741Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 15 23:50:45.090463 containerd[1483]: time="2025-05-15T23:50:45.090414105Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 15 23:50:45.090463 containerd[1483]: time="2025-05-15T23:50:45.090428532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 15 23:50:45.090684 containerd[1483]: time="2025-05-15T23:50:45.090660264Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 23:50:45.090684 containerd[1483]: time="2025-05-15T23:50:45.090685213Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 15 23:50:45.090732 containerd[1483]: time="2025-05-15T23:50:45.090697861Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 15 23:50:45.090732 containerd[1483]: time="2025-05-15T23:50:45.090706371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 15 23:50:45.090822 containerd[1483]: time="2025-05-15T23:50:45.090800828Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 15 23:50:45.091019 containerd[1483]: time="2025-05-15T23:50:45.090999952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 15 23:50:45.091148 containerd[1483]: time="2025-05-15T23:50:45.091130884Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 23:50:45.091148 containerd[1483]: time="2025-05-15T23:50:45.091146859Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 May 15 23:50:45.091256 containerd[1483]: time="2025-05-15T23:50:45.091235282Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 15 23:50:45.091297 containerd[1483]: time="2025-05-15T23:50:45.091284753Z" level=info msg="metadata content store policy set" policy=shared May 15 23:50:45.095000 containerd[1483]: time="2025-05-15T23:50:45.094968833Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 15 23:50:45.095138 containerd[1483]: time="2025-05-15T23:50:45.095024184Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 15 23:50:45.095138 containerd[1483]: time="2025-05-15T23:50:45.095040855Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 15 23:50:45.095138 containerd[1483]: time="2025-05-15T23:50:45.095063057Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 15 23:50:45.095138 containerd[1483]: time="2025-05-15T23:50:45.095077330Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 15 23:50:45.095301 containerd[1483]: time="2025-05-15T23:50:45.095224314Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 15 23:50:45.098516 containerd[1483]: time="2025-05-15T23:50:45.095536462Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 15 23:50:45.098516 containerd[1483]: time="2025-05-15T23:50:45.095777941Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 15 23:50:45.098516 containerd[1483]: time="2025-05-15T23:50:45.095795425Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 15 23:50:45.098516 containerd[1483]: time="2025-05-15T23:50:45.095809930Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 15 23:50:45.098516 containerd[1483]: time="2025-05-15T23:50:45.095824551Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 15 23:50:45.098516 containerd[1483]: time="2025-05-15T23:50:45.095838050Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 15 23:50:45.098516 containerd[1483]: time="2025-05-15T23:50:45.095849731Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 15 23:50:45.098516 containerd[1483]: time="2025-05-15T23:50:45.095862805Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 15 23:50:45.098516 containerd[1483]: time="2025-05-15T23:50:45.095876807Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 15 23:50:45.098516 containerd[1483]: time="2025-05-15T23:50:45.095889417Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 15 23:50:45.098516 containerd[1483]: time="2025-05-15T23:50:45.095901369Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 May 15 23:50:45.098516 containerd[1483]: time="2025-05-15T23:50:45.095913747Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 15 23:50:45.098516 containerd[1483]: time="2025-05-15T23:50:45.095939817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 15 23:50:45.098516 containerd[1483]: time="2025-05-15T23:50:45.095953665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 15 23:50:45.100005 containerd[1483]: time="2025-05-15T23:50:45.095966429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 15 23:50:45.100005 containerd[1483]: time="2025-05-15T23:50:45.095978188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 15 23:50:45.100005 containerd[1483]: time="2025-05-15T23:50:45.095989792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 15 23:50:45.100005 containerd[1483]: time="2025-05-15T23:50:45.096002982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 15 23:50:45.100005 containerd[1483]: time="2025-05-15T23:50:45.096014508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 15 23:50:45.100005 containerd[1483]: time="2025-05-15T23:50:45.096027195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 15 23:50:45.100005 containerd[1483]: time="2025-05-15T23:50:45.096039766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 15 23:50:45.100005 containerd[1483]: time="2025-05-15T23:50:45.096054465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 15 23:50:45.100005 containerd[1483]: time="2025-05-15T23:50:45.096065489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 15 23:50:45.100005 containerd[1483]: time="2025-05-15T23:50:45.096077131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 15 23:50:45.100005 containerd[1483]: time="2025-05-15T23:50:45.096089470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 15 23:50:45.100005 containerd[1483]: time="2025-05-15T23:50:45.096107340Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 15 23:50:45.100005 containerd[1483]: time="2025-05-15T23:50:45.096128537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 15 23:50:45.100005 containerd[1483]: time="2025-05-15T23:50:45.096145131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 15 23:50:45.100005 containerd[1483]: time="2025-05-15T23:50:45.096156116Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 15 23:50:45.098686 systemd[1]: Started containerd.service - containerd container runtime. May 15 23:50:45.100314 containerd[1483]: time="2025-05-15T23:50:45.096348510Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 May 15 23:50:45.100314 containerd[1483]: time="2025-05-15T23:50:45.096370249Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 15 23:50:45.100314 containerd[1483]: time="2025-05-15T23:50:45.096380228Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 15 23:50:45.100314 containerd[1483]: time="2025-05-15T23:50:45.096392141Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 15 23:50:45.100314 containerd[1483]: time="2025-05-15T23:50:45.096402160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 15 23:50:45.100314 containerd[1483]: time="2025-05-15T23:50:45.096432523Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 15 23:50:45.100314 containerd[1483]: time="2025-05-15T23:50:45.096455422Z" level=info msg="NRI interface is disabled by configuration." May 15 23:50:45.100314 containerd[1483]: time="2025-05-15T23:50:45.096465866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 15 23:50:45.100482 containerd[1483]: time="2025-05-15T23:50:45.096792209Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 15 23:50:45.100482 containerd[1483]: time="2025-05-15T23:50:45.096839399Z" level=info msg="Connect containerd service" May 15 23:50:45.100482 containerd[1483]: time="2025-05-15T23:50:45.096872006Z" level=info msg="using legacy CRI server" May 15 23:50:45.100482 containerd[1483]: time="2025-05-15T23:50:45.096878697Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 23:50:45.100482 containerd[1483]: time="2025-05-15T23:50:45.097099831Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 15 23:50:45.100482 containerd[1483]: time="2025-05-15T23:50:45.097743041Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 23:50:45.100482 containerd[1483]: time="2025-05-15T23:50:45.098281506Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 23:50:45.100482 containerd[1483]: time="2025-05-15T23:50:45.098319567Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 23:50:45.100482 containerd[1483]: time="2025-05-15T23:50:45.098323899Z" level=info msg="Start subscribing containerd event" May 15 23:50:45.100482 containerd[1483]: time="2025-05-15T23:50:45.098368961Z" level=info msg="Start recovering state" May 15 23:50:45.100482 containerd[1483]: time="2025-05-15T23:50:45.098444890Z" level=info msg="Start event monitor" May 15 23:50:45.100482 containerd[1483]: time="2025-05-15T23:50:45.098457732Z" level=info msg="Start snapshots syncer" May 15 23:50:45.100482 containerd[1483]: time="2025-05-15T23:50:45.098467170Z" level=info msg="Start cni network conf syncer for default" May 15 23:50:45.100482 containerd[1483]: time="2025-05-15T23:50:45.098473900Z" level=info msg="Start streaming server" May 15 23:50:45.100482 containerd[1483]: time="2025-05-15T23:50:45.098602473Z" level=info msg="containerd successfully booted in 0.042257s" May 15 23:50:45.196033 tar[1482]: linux-arm64/LICENSE May 15 23:50:45.196256 tar[1482]: linux-arm64/README.md May 15 23:50:45.206934 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 23:50:45.685123 sshd_keygen[1475]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 23:50:45.705261 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 23:50:45.717734 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 23:50:45.723522 systemd[1]: issuegen.service: Deactivated successfully. May 15 23:50:45.725465 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 23:50:45.728165 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 23:50:45.740302 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 23:50:45.743084 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 23:50:45.745279 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. 
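The "failed to load cni during init" record above is expected on a freshly provisioned node: the CRI plugin's CniConfig block (visible in the Start cri plugin dump) points at /etc/cni/net.d, and nothing has installed a network config there yet; a pod network add-on normally does that later. Purely to illustrate the format the loader is looking for, and not a file present on this host, a minimal bridge conflist at /etc/cni/net.d/10-bridge.conflist might look like this (the name and the 10.244.0.0/24 subnet are assumed values):

{
  "cniVersion": "0.4.0",
  "name": "bridge-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}

Once such a file appears, the "Start cni network conf syncer for default" loop that containerd reports a few records above is designed to pick it up without a restart.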
May 15 23:50:45.746422 systemd[1]: Reached target getty.target - Login Prompts. May 15 23:50:46.271583 systemd-networkd[1410]: eth0: Gained IPv6LL May 15 23:50:46.274034 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 23:50:46.275588 systemd[1]: Reached target network-online.target - Network is Online. May 15 23:50:46.287746 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 15 23:50:46.290067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:50:46.292111 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 23:50:46.306998 systemd[1]: coreos-metadata.service: Deactivated successfully. May 15 23:50:46.307619 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 15 23:50:46.309711 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 15 23:50:46.314131 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 23:50:46.853032 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:50:46.854495 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 23:50:46.855852 systemd[1]: Startup finished in 575ms (kernel) + 5.274s (initrd) + 3.917s (userspace) = 9.767s. May 15 23:50:46.857184 (kubelet)[1571]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:50:47.297333 kubelet[1571]: E0515 23:50:47.297201 1571 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:50:47.299669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:50:47.299823 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:50:47.300334 systemd[1]: kubelet.service: Consumed 864ms CPU time, 260.1M memory peak. May 15 23:50:50.455723 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 23:50:50.457018 systemd[1]: Started sshd@0-10.0.0.54:22-10.0.0.1:43680.service - OpenSSH per-connection server daemon (10.0.0.1:43680). May 15 23:50:50.558584 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 43680 ssh2: RSA SHA256:qI3Rqh4GgknoOSa4Ob8LNY71+Sp+5e1kGsNgL9KgYQ8 May 15 23:50:50.560351 sshd-session[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:50:50.569685 systemd-logind[1468]: New session 1 of user core. May 15 23:50:50.570630 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 23:50:50.582657 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 23:50:50.590866 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 23:50:50.594757 systemd[1]: Starting user@500.service - User Manager for UID 500... May 15 23:50:50.598732 (systemd)[1589]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 23:50:50.601524 systemd-logind[1468]: New session c1 of user core. May 15 23:50:50.692980 systemd[1589]: Queued start job for default target default.target. May 15 23:50:50.702383 systemd[1589]: Created slice app.slice - User Application Slice. 
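The kubelet exit above (config.yaml missing) is likewise expected at this point in the boot: the unit starts before anything has written /var/lib/kubelet/config.yaml, fails, and is retried by systemd further down in the log. On a kubeadm-style node that file is generated during init/join; as a sketch only, with assumed values rather than the contents eventually written on this host, a minimal /var/lib/kubelet/config.yaml looks like:

# illustrative KubeletConfiguration sketch, not the file later created on this host
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock

The cgroupDriver and staticPodPath values in this sketch match what the kubelet reports later in this log (CgroupDriver "systemd" and "Adding static pod path" /etc/kubernetes/manifests) once it does come up.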
May 15 23:50:50.702410 systemd[1589]: Reached target paths.target - Paths. May 15 23:50:50.702470 systemd[1589]: Reached target timers.target - Timers. May 15 23:50:50.703734 systemd[1589]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 23:50:50.712803 systemd[1589]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 23:50:50.712867 systemd[1589]: Reached target sockets.target - Sockets. May 15 23:50:50.712907 systemd[1589]: Reached target basic.target - Basic System. May 15 23:50:50.712935 systemd[1589]: Reached target default.target - Main User Target. May 15 23:50:50.712959 systemd[1589]: Startup finished in 106ms. May 15 23:50:50.713352 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 23:50:50.715048 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 23:50:50.777392 systemd[1]: Started sshd@1-10.0.0.54:22-10.0.0.1:43690.service - OpenSSH per-connection server daemon (10.0.0.1:43690). May 15 23:50:50.816801 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 43690 ssh2: RSA SHA256:qI3Rqh4GgknoOSa4Ob8LNY71+Sp+5e1kGsNgL9KgYQ8 May 15 23:50:50.818118 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:50:50.822849 systemd-logind[1468]: New session 2 of user core. May 15 23:50:50.835693 systemd[1]: Started session-2.scope - Session 2 of User core. May 15 23:50:50.886366 sshd[1602]: Connection closed by 10.0.0.1 port 43690 May 15 23:50:50.886710 sshd-session[1600]: pam_unix(sshd:session): session closed for user core May 15 23:50:50.897930 systemd[1]: sshd@1-10.0.0.54:22-10.0.0.1:43690.service: Deactivated successfully. May 15 23:50:50.899520 systemd[1]: session-2.scope: Deactivated successfully. May 15 23:50:50.902844 systemd-logind[1468]: Session 2 logged out. Waiting for processes to exit. May 15 23:50:50.912897 systemd[1]: Started sshd@2-10.0.0.54:22-10.0.0.1:43702.service - OpenSSH per-connection server daemon (10.0.0.1:43702). May 15 23:50:50.913875 systemd-logind[1468]: Removed session 2. May 15 23:50:50.949412 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 43702 ssh2: RSA SHA256:qI3Rqh4GgknoOSa4Ob8LNY71+Sp+5e1kGsNgL9KgYQ8 May 15 23:50:50.950780 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:50:50.955230 systemd-logind[1468]: New session 3 of user core. May 15 23:50:50.963613 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 23:50:51.018020 sshd[1610]: Connection closed by 10.0.0.1 port 43702 May 15 23:50:51.017934 sshd-session[1607]: pam_unix(sshd:session): session closed for user core May 15 23:50:51.030039 systemd[1]: sshd@2-10.0.0.54:22-10.0.0.1:43702.service: Deactivated successfully. May 15 23:50:51.031530 systemd[1]: session-3.scope: Deactivated successfully. May 15 23:50:51.032255 systemd-logind[1468]: Session 3 logged out. Waiting for processes to exit. May 15 23:50:51.043807 systemd[1]: Started sshd@3-10.0.0.54:22-10.0.0.1:43704.service - OpenSSH per-connection server daemon (10.0.0.1:43704). May 15 23:50:51.044885 systemd-logind[1468]: Removed session 3. May 15 23:50:51.081425 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 43704 ssh2: RSA SHA256:qI3Rqh4GgknoOSa4Ob8LNY71+Sp+5e1kGsNgL9KgYQ8 May 15 23:50:51.082634 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:50:51.087369 systemd-logind[1468]: New session 4 of user core. 
May 15 23:50:51.095621 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 23:50:51.147737 sshd[1618]: Connection closed by 10.0.0.1 port 43704 May 15 23:50:51.148356 sshd-session[1615]: pam_unix(sshd:session): session closed for user core May 15 23:50:51.160197 systemd[1]: sshd@3-10.0.0.54:22-10.0.0.1:43704.service: Deactivated successfully. May 15 23:50:51.162743 systemd[1]: session-4.scope: Deactivated successfully. May 15 23:50:51.164971 systemd-logind[1468]: Session 4 logged out. Waiting for processes to exit. May 15 23:50:51.174781 systemd[1]: Started sshd@4-10.0.0.54:22-10.0.0.1:43710.service - OpenSSH per-connection server daemon (10.0.0.1:43710). May 15 23:50:51.176408 systemd-logind[1468]: Removed session 4. May 15 23:50:51.211171 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 43710 ssh2: RSA SHA256:qI3Rqh4GgknoOSa4Ob8LNY71+Sp+5e1kGsNgL9KgYQ8 May 15 23:50:51.212634 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:50:51.217496 systemd-logind[1468]: New session 5 of user core. May 15 23:50:51.227627 systemd[1]: Started session-5.scope - Session 5 of User core. May 15 23:50:51.292902 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 23:50:51.293275 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 23:50:51.311413 sudo[1627]: pam_unix(sudo:session): session closed for user root May 15 23:50:51.313973 sshd[1626]: Connection closed by 10.0.0.1 port 43710 May 15 23:50:51.315019 sshd-session[1623]: pam_unix(sshd:session): session closed for user core May 15 23:50:51.329454 systemd[1]: sshd@4-10.0.0.54:22-10.0.0.1:43710.service: Deactivated successfully. May 15 23:50:51.332662 systemd[1]: session-5.scope: Deactivated successfully. May 15 23:50:51.333573 systemd-logind[1468]: Session 5 logged out. Waiting for processes to exit. May 15 23:50:51.342834 systemd[1]: Started sshd@5-10.0.0.54:22-10.0.0.1:43722.service - OpenSSH per-connection server daemon (10.0.0.1:43722). May 15 23:50:51.344025 systemd-logind[1468]: Removed session 5. May 15 23:50:51.380576 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 43722 ssh2: RSA SHA256:qI3Rqh4GgknoOSa4Ob8LNY71+Sp+5e1kGsNgL9KgYQ8 May 15 23:50:51.382035 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:50:51.386523 systemd-logind[1468]: New session 6 of user core. May 15 23:50:51.402653 systemd[1]: Started session-6.scope - Session 6 of User core. May 15 23:50:51.458397 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 23:50:51.458710 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 23:50:51.462477 sudo[1637]: pam_unix(sudo:session): session closed for user root May 15 23:50:51.472807 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 15 23:50:51.473107 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 23:50:51.498035 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 23:50:51.523608 augenrules[1659]: No rules May 15 23:50:51.524699 systemd[1]: audit-rules.service: Deactivated successfully. May 15 23:50:51.524902 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
May 15 23:50:51.525936 sudo[1636]: pam_unix(sudo:session): session closed for user root May 15 23:50:51.528485 sshd[1635]: Connection closed by 10.0.0.1 port 43722 May 15 23:50:51.529049 sshd-session[1632]: pam_unix(sshd:session): session closed for user core May 15 23:50:51.539096 systemd[1]: sshd@5-10.0.0.54:22-10.0.0.1:43722.service: Deactivated successfully. May 15 23:50:51.540892 systemd[1]: session-6.scope: Deactivated successfully. May 15 23:50:51.541705 systemd-logind[1468]: Session 6 logged out. Waiting for processes to exit. May 15 23:50:51.552861 systemd[1]: Started sshd@6-10.0.0.54:22-10.0.0.1:43728.service - OpenSSH per-connection server daemon (10.0.0.1:43728). May 15 23:50:51.553849 systemd-logind[1468]: Removed session 6. May 15 23:50:51.588994 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 43728 ssh2: RSA SHA256:qI3Rqh4GgknoOSa4Ob8LNY71+Sp+5e1kGsNgL9KgYQ8 May 15 23:50:51.590261 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:50:51.595099 systemd-logind[1468]: New session 7 of user core. May 15 23:50:51.601612 systemd[1]: Started session-7.scope - Session 7 of User core. May 15 23:50:51.652759 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 23:50:51.653032 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 23:50:52.035704 systemd[1]: Starting docker.service - Docker Application Container Engine... May 15 23:50:52.035781 (dockerd)[1692]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 23:50:52.316377 dockerd[1692]: time="2025-05-15T23:50:52.316244025Z" level=info msg="Starting up" May 15 23:50:52.478388 dockerd[1692]: time="2025-05-15T23:50:52.478340761Z" level=info msg="Loading containers: start." May 15 23:50:52.674500 kernel: Initializing XFRM netlink socket May 15 23:50:52.745256 systemd-networkd[1410]: docker0: Link UP May 15 23:50:52.776713 dockerd[1692]: time="2025-05-15T23:50:52.776649866Z" level=info msg="Loading containers: done." May 15 23:50:52.790142 dockerd[1692]: time="2025-05-15T23:50:52.790073667Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 23:50:52.790286 dockerd[1692]: time="2025-05-15T23:50:52.790188440Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 May 15 23:50:52.790384 dockerd[1692]: time="2025-05-15T23:50:52.790357500Z" level=info msg="Daemon has completed initialization" May 15 23:50:52.817348 dockerd[1692]: time="2025-05-15T23:50:52.817278617Z" level=info msg="API listen on /run/docker.sock" May 15 23:50:52.817469 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 23:50:53.461681 containerd[1483]: time="2025-05-15T23:50:53.461644861Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 15 23:50:54.117197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2305069325.mount: Deactivated successfully. 
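The PullImage request above, and the series that follows, goes through containerd's CRI image service, which is why the download progress appears as containerd[1483] records and why a var-lib-containerd-tmpmounts mount shows up around each pull (that is containerd unpacking layers into the overlayfs snapshotter). An equivalent manual pull into the same k8s.io namespace could be done with either of these illustrative commands, which are not taken from this log:

crictl pull registry.k8s.io/kube-apiserver:v1.31.9
ctr -n k8s.io images pull registry.k8s.io/kube-apiserver:v1.31.9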
May 15 23:50:55.004374 containerd[1483]: time="2025-05-15T23:50:55.004311463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:50:55.005254 containerd[1483]: time="2025-05-15T23:50:55.004974695Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=25651976" May 15 23:50:55.005934 containerd[1483]: time="2025-05-15T23:50:55.005901578Z" level=info msg="ImageCreate event name:\"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:50:55.008840 containerd[1483]: time="2025-05-15T23:50:55.008807054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:50:55.010234 containerd[1483]: time="2025-05-15T23:50:55.010110836Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"25648774\" in 1.548424823s" May 15 23:50:55.010234 containerd[1483]: time="2025-05-15T23:50:55.010149022Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\"" May 15 23:50:55.013337 containerd[1483]: time="2025-05-15T23:50:55.013300424Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 15 23:50:56.111079 containerd[1483]: time="2025-05-15T23:50:56.111016289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:50:56.111644 containerd[1483]: time="2025-05-15T23:50:56.111598116Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=22459530" May 15 23:50:56.112387 containerd[1483]: time="2025-05-15T23:50:56.112338370Z" level=info msg="ImageCreate event name:\"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:50:56.117443 containerd[1483]: time="2025-05-15T23:50:56.115769904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:50:56.117443 containerd[1483]: time="2025-05-15T23:50:56.116916686Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"23995294\" in 1.10358481s" May 15 23:50:56.117443 containerd[1483]: time="2025-05-15T23:50:56.116944672Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\"" May 15 23:50:56.117874 
containerd[1483]: time="2025-05-15T23:50:56.117805919Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 15 23:50:57.125469 containerd[1483]: time="2025-05-15T23:50:57.125408469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:50:57.125886 containerd[1483]: time="2025-05-15T23:50:57.125751176Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=17125281" May 15 23:50:57.128915 containerd[1483]: time="2025-05-15T23:50:57.128848889Z" level=info msg="ImageCreate event name:\"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:50:57.131906 containerd[1483]: time="2025-05-15T23:50:57.131857081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:50:57.133114 containerd[1483]: time="2025-05-15T23:50:57.133040364Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"18661063\" in 1.015206291s" May 15 23:50:57.133114 containerd[1483]: time="2025-05-15T23:50:57.133072588Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\"" May 15 23:50:57.133640 containerd[1483]: time="2025-05-15T23:50:57.133578801Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 15 23:50:57.550158 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 23:50:57.560633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:50:57.659995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:50:57.663342 (kubelet)[1958]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:50:57.701588 kubelet[1958]: E0515 23:50:57.701536 1958 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:50:57.704702 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:50:57.704945 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:50:57.705461 systemd[1]: kubelet.service: Consumed 133ms CPU time, 108.1M memory peak. May 15 23:50:58.159260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3747517049.mount: Deactivated successfully. 
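The "Scheduled restart job, restart counter is at 1" record shows systemd re-running the kubelet roughly ten seconds after the earlier failure, which is the behaviour of a unit carrying Restart=always with RestartSec=10. The actual unit file is not shown in this log, so the following is only a sketch of that kind of [Service] stanza, not the one installed on this host:

# illustrative restart stanza for kubelet.service, assumed rather than read from this host
[Service]
Restart=always
RestartSec=10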
May 15 23:50:58.499294 containerd[1483]: time="2025-05-15T23:50:58.499177087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:50:58.499897 containerd[1483]: time="2025-05-15T23:50:58.499849911Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=26871377" May 15 23:50:58.500626 containerd[1483]: time="2025-05-15T23:50:58.500604178Z" level=info msg="ImageCreate event name:\"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:50:58.502517 containerd[1483]: time="2025-05-15T23:50:58.502470978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:50:58.503453 containerd[1483]: time="2025-05-15T23:50:58.503395450Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"26870394\" in 1.369621946s" May 15 23:50:58.503515 containerd[1483]: time="2025-05-15T23:50:58.503449335Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\"" May 15 23:50:58.503942 containerd[1483]: time="2025-05-15T23:50:58.503914851Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 15 23:50:59.061548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2017320614.mount: Deactivated successfully. 
May 15 23:50:59.723392 containerd[1483]: time="2025-05-15T23:50:59.723341201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:50:59.726452 containerd[1483]: time="2025-05-15T23:50:59.725394817Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" May 15 23:50:59.727635 containerd[1483]: time="2025-05-15T23:50:59.727604037Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:50:59.730606 containerd[1483]: time="2025-05-15T23:50:59.730577548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:50:59.731781 containerd[1483]: time="2025-05-15T23:50:59.731745454Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.227804231s" May 15 23:50:59.731781 containerd[1483]: time="2025-05-15T23:50:59.731780714Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 15 23:50:59.732222 containerd[1483]: time="2025-05-15T23:50:59.732194597Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 23:51:00.160230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount981839427.mount: Deactivated successfully. 
May 15 23:51:00.164396 containerd[1483]: time="2025-05-15T23:51:00.164186461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:51:00.165681 containerd[1483]: time="2025-05-15T23:51:00.165632115Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 15 23:51:00.166515 containerd[1483]: time="2025-05-15T23:51:00.166485419Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:51:00.169099 containerd[1483]: time="2025-05-15T23:51:00.169065877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:51:00.170053 containerd[1483]: time="2025-05-15T23:51:00.169912570Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 437.674544ms" May 15 23:51:00.170053 containerd[1483]: time="2025-05-15T23:51:00.169942556Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 15 23:51:00.170725 containerd[1483]: time="2025-05-15T23:51:00.170649474Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 15 23:51:00.669986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2769273845.mount: Deactivated successfully. May 15 23:51:02.410554 containerd[1483]: time="2025-05-15T23:51:02.410504582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:51:02.411388 containerd[1483]: time="2025-05-15T23:51:02.411108755Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" May 15 23:51:02.412745 containerd[1483]: time="2025-05-15T23:51:02.412714582Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:51:02.415828 containerd[1483]: time="2025-05-15T23:51:02.415798592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:51:02.417182 containerd[1483]: time="2025-05-15T23:51:02.417128882Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.246440732s" May 15 23:51:02.417182 containerd[1483]: time="2025-05-15T23:51:02.417162487Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 15 23:51:07.491696 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
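With the etcd pull above, every image referenced in this section (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause and etcd) is now in containerd's content store. On such a host the set could be confirmed with either of these illustrative commands:

crictl images
ctr -n k8s.io images ls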
May 15 23:51:07.491842 systemd[1]: kubelet.service: Consumed 133ms CPU time, 108.1M memory peak. May 15 23:51:07.503682 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:51:07.523184 systemd[1]: Reload requested from client PID 2115 ('systemctl') (unit session-7.scope)... May 15 23:51:07.523204 systemd[1]: Reloading... May 15 23:51:07.592466 zram_generator::config[2156]: No configuration found. May 15 23:51:07.723904 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 23:51:07.806659 systemd[1]: Reloading finished in 283 ms. May 15 23:51:07.841123 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:51:07.843794 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:51:07.844365 systemd[1]: kubelet.service: Deactivated successfully. May 15 23:51:07.844633 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:51:07.844670 systemd[1]: kubelet.service: Consumed 85ms CPU time, 95M memory peak. May 15 23:51:07.846000 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:51:07.948732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:51:07.952207 (kubelet)[2206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 23:51:07.991547 kubelet[2206]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 23:51:07.991547 kubelet[2206]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 23:51:07.991547 kubelet[2206]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
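The docker.socket notice emitted during the reload above is systemd asking for the unit's ListenStream path to be moved from the legacy /var/run prefix to /run; it already applies the translation itself, so the warning is cosmetic. Since the vendor unit sits at /usr/lib/systemd/system/docker.socket on a read-only /usr, the usual way to honour the request would be a drop-in override along these lines (illustrative, not present in this log):

# /etc/systemd/system/docker.socket.d/10-listen-path.conf (illustrative)
[Socket]
ListenStream=
ListenStream=/run/docker.sock

The empty ListenStream= line clears the inherited value before the corrected path is set.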
May 15 23:51:07.991865 kubelet[2206]: I0515 23:51:07.991592 2206 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 23:51:08.562589 kubelet[2206]: I0515 23:51:08.562544 2206 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 15 23:51:08.562589 kubelet[2206]: I0515 23:51:08.562575 2206 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 23:51:08.562955 kubelet[2206]: I0515 23:51:08.562927 2206 server.go:934] "Client rotation is on, will bootstrap in background" May 15 23:51:08.604365 kubelet[2206]: E0515 23:51:08.604328 2206 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" May 15 23:51:08.608000 kubelet[2206]: I0515 23:51:08.607972 2206 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 23:51:08.613226 kubelet[2206]: E0515 23:51:08.613187 2206 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 23:51:08.613226 kubelet[2206]: I0515 23:51:08.613216 2206 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 15 23:51:08.617090 kubelet[2206]: I0515 23:51:08.617061 2206 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 23:51:08.617859 kubelet[2206]: I0515 23:51:08.617831 2206 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 23:51:08.618166 kubelet[2206]: I0515 23:51:08.618125 2206 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 23:51:08.618343 kubelet[2206]: I0515 23:51:08.618162 2206 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 23:51:08.618496 kubelet[2206]: I0515 23:51:08.618406 2206 topology_manager.go:138] "Creating topology manager with none policy" May 15 23:51:08.618496 kubelet[2206]: I0515 23:51:08.618417 2206 container_manager_linux.go:300] "Creating device plugin manager" May 15 23:51:08.618706 kubelet[2206]: I0515 23:51:08.618689 2206 state_mem.go:36] "Initialized new in-memory state store" May 15 23:51:08.621055 kubelet[2206]: I0515 23:51:08.620771 2206 kubelet.go:408] "Attempting to sync node with API server" May 15 23:51:08.621055 kubelet[2206]: I0515 23:51:08.620801 2206 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 23:51:08.621055 kubelet[2206]: I0515 23:51:08.620821 2206 kubelet.go:314] "Adding apiserver pod source" May 15 23:51:08.621055 kubelet[2206]: I0515 23:51:08.620902 2206 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 23:51:08.624403 kubelet[2206]: W0515 23:51:08.624128 2206 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused May 15 23:51:08.624403 kubelet[2206]: E0515 23:51:08.624197 2206 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" May 15 23:51:08.624403 kubelet[2206]: W0515 23:51:08.624260 2206 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused May 15 23:51:08.624403 kubelet[2206]: E0515 23:51:08.624305 2206 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" May 15 23:51:08.628917 kubelet[2206]: I0515 23:51:08.628892 2206 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 15 23:51:08.629900 kubelet[2206]: I0515 23:51:08.629880 2206 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 23:51:08.630537 kubelet[2206]: W0515 23:51:08.630172 2206 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 23:51:08.633365 kubelet[2206]: I0515 23:51:08.631209 2206 server.go:1274] "Started kubelet" May 15 23:51:08.633365 kubelet[2206]: I0515 23:51:08.631455 2206 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 23:51:08.633365 kubelet[2206]: I0515 23:51:08.631698 2206 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 23:51:08.633365 kubelet[2206]: I0515 23:51:08.632001 2206 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 23:51:08.633365 kubelet[2206]: I0515 23:51:08.632506 2206 server.go:449] "Adding debug handlers to kubelet server" May 15 23:51:08.633814 kubelet[2206]: I0515 23:51:08.633786 2206 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 23:51:08.634017 kubelet[2206]: I0515 23:51:08.633996 2206 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 23:51:08.634124 kubelet[2206]: I0515 23:51:08.634095 2206 volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 23:51:08.634201 kubelet[2206]: I0515 23:51:08.634185 2206 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 15 23:51:08.634247 kubelet[2206]: I0515 23:51:08.634235 2206 reconciler.go:26] "Reconciler: start to sync state" May 15 23:51:08.635604 kubelet[2206]: W0515 23:51:08.635544 2206 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused May 15 23:51:08.635676 kubelet[2206]: E0515 23:51:08.635613 2206 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" 
logger="UnhandledError" May 15 23:51:08.635996 kubelet[2206]: E0515 23:51:08.635958 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="200ms" May 15 23:51:08.636090 kubelet[2206]: E0515 23:51:08.635969 2206 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:51:08.636598 kubelet[2206]: I0515 23:51:08.636575 2206 factory.go:221] Registration of the systemd container factory successfully May 15 23:51:08.636814 kubelet[2206]: E0515 23:51:08.635546 2206 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.54:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.54:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fd85ba8f35746 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 23:51:08.631185222 +0000 UTC m=+0.676082689,LastTimestamp:2025-05-15 23:51:08.631185222 +0000 UTC m=+0.676082689,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 23:51:08.636814 kubelet[2206]: I0515 23:51:08.636788 2206 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 23:51:08.637289 kubelet[2206]: E0515 23:51:08.637260 2206 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 23:51:08.638173 kubelet[2206]: I0515 23:51:08.638153 2206 factory.go:221] Registration of the containerd container factory successfully May 15 23:51:08.648959 kubelet[2206]: I0515 23:51:08.648922 2206 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 23:51:08.648959 kubelet[2206]: I0515 23:51:08.648938 2206 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 23:51:08.648959 kubelet[2206]: I0515 23:51:08.648955 2206 state_mem.go:36] "Initialized new in-memory state store" May 15 23:51:08.649874 kubelet[2206]: I0515 23:51:08.649844 2206 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 23:51:08.650834 kubelet[2206]: I0515 23:51:08.650812 2206 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 23:51:08.650857 kubelet[2206]: I0515 23:51:08.650843 2206 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 23:51:08.650890 kubelet[2206]: I0515 23:51:08.650860 2206 kubelet.go:2321] "Starting kubelet main sync loop" May 15 23:51:08.650923 kubelet[2206]: E0515 23:51:08.650907 2206 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 23:51:08.720037 kubelet[2206]: I0515 23:51:08.719994 2206 policy_none.go:49] "None policy: Start" May 15 23:51:08.720476 kubelet[2206]: W0515 23:51:08.720404 2206 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused May 15 23:51:08.720510 kubelet[2206]: E0515 23:51:08.720491 2206 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" May 15 23:51:08.720917 kubelet[2206]: I0515 23:51:08.720889 2206 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 23:51:08.720945 kubelet[2206]: I0515 23:51:08.720920 2206 state_mem.go:35] "Initializing new in-memory state store" May 15 23:51:08.726978 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 15 23:51:08.736585 kubelet[2206]: E0515 23:51:08.736553 2206 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:51:08.741199 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 15 23:51:08.752031 kubelet[2206]: E0515 23:51:08.752000 2206 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 23:51:08.757770 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
May 15 23:51:08.758901 kubelet[2206]: I0515 23:51:08.758737 2206 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 23:51:08.758967 kubelet[2206]: I0515 23:51:08.758937 2206 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 23:51:08.758991 kubelet[2206]: I0515 23:51:08.758949 2206 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 23:51:08.759180 kubelet[2206]: I0515 23:51:08.759160 2206 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 23:51:08.759974 kubelet[2206]: E0515 23:51:08.759955 2206 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 15 23:51:08.837267 kubelet[2206]: E0515 23:51:08.837149 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="400ms" May 15 23:51:08.860147 kubelet[2206]: I0515 23:51:08.860113 2206 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 23:51:08.860624 kubelet[2206]: E0515 23:51:08.860598 2206 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" May 15 23:51:08.961765 systemd[1]: Created slice kubepods-burstable-podaf951c6324c483c80428358b5b0ddb96.slice - libcontainer container kubepods-burstable-podaf951c6324c483c80428358b5b0ddb96.slice. May 15 23:51:08.972420 systemd[1]: Created slice kubepods-burstable-poda3416600bab1918b24583836301c9096.slice - libcontainer container kubepods-burstable-poda3416600bab1918b24583836301c9096.slice. May 15 23:51:08.985922 systemd[1]: Created slice kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice - libcontainer container kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice. 
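The three kubepods-burstable-pod*.slice units created above are the cgroups for the static pods the kubelet found under /etc/kubernetes/manifests (the path it logged when adding the static pod source); the volume attach and RunPodSandbox records that follow are those pods being materialised. The manifests themselves are not reproduced in this log, so the following is only a rough skeleton of the shape of one of them, with every value assumed:

# /etc/kubernetes/manifests/kube-apiserver.yaml (illustrative skeleton)
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  priorityClassName: system-node-critical
  containers:
  - name: kube-apiserver
    image: registry.k8s.io/kube-apiserver:v1.31.9
    command:
    - kube-apiserver
    - --advertise-address=10.0.0.54
    volumeMounts:
    - name: ca-certs
      mountPath: /etc/ssl/certs
      readOnly: true
  volumes:
  - name: ca-certs
    hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate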
May 15 23:51:09.035022 kubelet[2206]: I0515 23:51:09.034983 2206 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af951c6324c483c80428358b5b0ddb96-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"af951c6324c483c80428358b5b0ddb96\") " pod="kube-system/kube-apiserver-localhost" May 15 23:51:09.061772 kubelet[2206]: I0515 23:51:09.061733 2206 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 23:51:09.062055 kubelet[2206]: E0515 23:51:09.062031 2206 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" May 15 23:51:09.135832 kubelet[2206]: I0515 23:51:09.135692 2206 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af951c6324c483c80428358b5b0ddb96-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"af951c6324c483c80428358b5b0ddb96\") " pod="kube-system/kube-apiserver-localhost" May 15 23:51:09.135832 kubelet[2206]: I0515 23:51:09.135726 2206 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:51:09.135832 kubelet[2206]: I0515 23:51:09.135750 2206 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:51:09.135832 kubelet[2206]: I0515 23:51:09.135765 2206 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:51:09.135832 kubelet[2206]: I0515 23:51:09.135782 2206 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:51:09.136015 kubelet[2206]: I0515 23:51:09.135820 2206 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:51:09.136015 kubelet[2206]: I0515 23:51:09.135839 2206 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 15 23:51:09.136015 kubelet[2206]: I0515 23:51:09.135854 2206 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af951c6324c483c80428358b5b0ddb96-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"af951c6324c483c80428358b5b0ddb96\") " pod="kube-system/kube-apiserver-localhost" May 15 23:51:09.238456 kubelet[2206]: E0515 23:51:09.238392 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="800ms" May 15 23:51:09.270230 containerd[1483]: time="2025-05-15T23:51:09.270194459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:af951c6324c483c80428358b5b0ddb96,Namespace:kube-system,Attempt:0,}" May 15 23:51:09.274713 containerd[1483]: time="2025-05-15T23:51:09.274673894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,}" May 15 23:51:09.288597 containerd[1483]: time="2025-05-15T23:51:09.288563595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,}" May 15 23:51:09.463872 kubelet[2206]: I0515 23:51:09.463769 2206 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 23:51:09.464170 kubelet[2206]: E0515 23:51:09.464128 2206 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" May 15 23:51:09.580101 kubelet[2206]: W0515 23:51:09.580024 2206 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused May 15 23:51:09.580101 kubelet[2206]: E0515 23:51:09.580100 2206 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" May 15 23:51:09.804785 kubelet[2206]: W0515 23:51:09.804596 2206 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused May 15 23:51:09.804785 kubelet[2206]: E0515 23:51:09.804684 2206 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" May 15 23:51:09.812331 kubelet[2206]: W0515 23:51:09.812244 2206 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
10.0.0.54:6443: connect: connection refused May 15 23:51:09.812331 kubelet[2206]: E0515 23:51:09.812307 2206 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" May 15 23:51:09.856954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1661470931.mount: Deactivated successfully. May 15 23:51:09.863146 containerd[1483]: time="2025-05-15T23:51:09.863104238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 23:51:09.865412 containerd[1483]: time="2025-05-15T23:51:09.865368403Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 15 23:51:09.867419 containerd[1483]: time="2025-05-15T23:51:09.867385379Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 23:51:09.869178 containerd[1483]: time="2025-05-15T23:51:09.869053503Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 23:51:09.870112 containerd[1483]: time="2025-05-15T23:51:09.870073696Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 23:51:09.870574 containerd[1483]: time="2025-05-15T23:51:09.870451629Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 23:51:09.871753 containerd[1483]: time="2025-05-15T23:51:09.871712978Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 23:51:09.872587 containerd[1483]: time="2025-05-15T23:51:09.872538232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 23:51:09.875593 containerd[1483]: time="2025-05-15T23:51:09.875537931Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 586.901074ms" May 15 23:51:09.878015 containerd[1483]: time="2025-05-15T23:51:09.877397677Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 607.11917ms" May 15 23:51:09.882548 containerd[1483]: time="2025-05-15T23:51:09.879052939Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 604.316328ms" May 15 23:51:10.030242 containerd[1483]: time="2025-05-15T23:51:10.030114924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:51:10.030242 containerd[1483]: time="2025-05-15T23:51:10.030204898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:51:10.030520 containerd[1483]: time="2025-05-15T23:51:10.030219921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:51:10.030520 containerd[1483]: time="2025-05-15T23:51:10.030290837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:51:10.031408 containerd[1483]: time="2025-05-15T23:51:10.031338009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:51:10.031589 containerd[1483]: time="2025-05-15T23:51:10.031401615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:51:10.031589 containerd[1483]: time="2025-05-15T23:51:10.031420353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:51:10.031589 containerd[1483]: time="2025-05-15T23:51:10.031497742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:51:10.034982 containerd[1483]: time="2025-05-15T23:51:10.033785819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:51:10.034982 containerd[1483]: time="2025-05-15T23:51:10.033875034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:51:10.034982 containerd[1483]: time="2025-05-15T23:51:10.033924176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:51:10.034982 containerd[1483]: time="2025-05-15T23:51:10.034013472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:51:10.039889 kubelet[2206]: E0515 23:51:10.039846 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="1.6s" May 15 23:51:10.056603 systemd[1]: Started cri-containerd-0d977fa8244f070e636169feff9d1be65cd82a5c906e6f4984a5df18906c6364.scope - libcontainer container 0d977fa8244f070e636169feff9d1be65cd82a5c906e6f4984a5df18906c6364. May 15 23:51:10.057730 systemd[1]: Started cri-containerd-0f2c9522aa71963d13ffa29cc70299e659f1cf3df7be6370ed4ab020a1f72c63.scope - libcontainer container 0f2c9522aa71963d13ffa29cc70299e659f1cf3df7be6370ed4ab020a1f72c63. 
May 15 23:51:10.058756 systemd[1]: Started cri-containerd-adcbae10f4a3519449df85b049494809ed50ac8285b40cb466d708711fa89846.scope - libcontainer container adcbae10f4a3519449df85b049494809ed50ac8285b40cb466d708711fa89846. May 15 23:51:10.088461 containerd[1483]: time="2025-05-15T23:51:10.088410596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d977fa8244f070e636169feff9d1be65cd82a5c906e6f4984a5df18906c6364\"" May 15 23:51:10.093544 containerd[1483]: time="2025-05-15T23:51:10.093413768Z" level=info msg="CreateContainer within sandbox \"0d977fa8244f070e636169feff9d1be65cd82a5c906e6f4984a5df18906c6364\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 23:51:10.093652 containerd[1483]: time="2025-05-15T23:51:10.093521002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f2c9522aa71963d13ffa29cc70299e659f1cf3df7be6370ed4ab020a1f72c63\"" May 15 23:51:10.096212 containerd[1483]: time="2025-05-15T23:51:10.096103134Z" level=info msg="CreateContainer within sandbox \"0f2c9522aa71963d13ffa29cc70299e659f1cf3df7be6370ed4ab020a1f72c63\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 23:51:10.098323 containerd[1483]: time="2025-05-15T23:51:10.098265518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:af951c6324c483c80428358b5b0ddb96,Namespace:kube-system,Attempt:0,} returns sandbox id \"adcbae10f4a3519449df85b049494809ed50ac8285b40cb466d708711fa89846\"" May 15 23:51:10.101240 containerd[1483]: time="2025-05-15T23:51:10.101195802Z" level=info msg="CreateContainer within sandbox \"adcbae10f4a3519449df85b049494809ed50ac8285b40cb466d708711fa89846\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 23:51:10.108131 containerd[1483]: time="2025-05-15T23:51:10.108091634Z" level=info msg="CreateContainer within sandbox \"0d977fa8244f070e636169feff9d1be65cd82a5c906e6f4984a5df18906c6364\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5af37baa21e991b29ededf6ef62120dbe52ed9999c05d45855eb36f9d90c8619\"" May 15 23:51:10.108764 containerd[1483]: time="2025-05-15T23:51:10.108735200Z" level=info msg="StartContainer for \"5af37baa21e991b29ededf6ef62120dbe52ed9999c05d45855eb36f9d90c8619\"" May 15 23:51:10.112262 containerd[1483]: time="2025-05-15T23:51:10.112222909Z" level=info msg="CreateContainer within sandbox \"0f2c9522aa71963d13ffa29cc70299e659f1cf3df7be6370ed4ab020a1f72c63\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"48cc529089ba1ce5c7b35c57f89a15408b5fff8b6b6425152a328e93e39d3e9c\"" May 15 23:51:10.112841 containerd[1483]: time="2025-05-15T23:51:10.112816333Z" level=info msg="StartContainer for \"48cc529089ba1ce5c7b35c57f89a15408b5fff8b6b6425152a328e93e39d3e9c\"" May 15 23:51:10.119155 containerd[1483]: time="2025-05-15T23:51:10.119108754Z" level=info msg="CreateContainer within sandbox \"adcbae10f4a3519449df85b049494809ed50ac8285b40cb466d708711fa89846\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b54d6b1ff084ed5f9ed66fe6c61371b098daa98e650226018d157facaa75bb2d\"" May 15 23:51:10.119664 containerd[1483]: time="2025-05-15T23:51:10.119614001Z" level=info msg="StartContainer for 
\"b54d6b1ff084ed5f9ed66fe6c61371b098daa98e650226018d157facaa75bb2d\"" May 15 23:51:10.135607 systemd[1]: Started cri-containerd-5af37baa21e991b29ededf6ef62120dbe52ed9999c05d45855eb36f9d90c8619.scope - libcontainer container 5af37baa21e991b29ededf6ef62120dbe52ed9999c05d45855eb36f9d90c8619. May 15 23:51:10.139081 systemd[1]: Started cri-containerd-48cc529089ba1ce5c7b35c57f89a15408b5fff8b6b6425152a328e93e39d3e9c.scope - libcontainer container 48cc529089ba1ce5c7b35c57f89a15408b5fff8b6b6425152a328e93e39d3e9c. May 15 23:51:10.142842 systemd[1]: Started cri-containerd-b54d6b1ff084ed5f9ed66fe6c61371b098daa98e650226018d157facaa75bb2d.scope - libcontainer container b54d6b1ff084ed5f9ed66fe6c61371b098daa98e650226018d157facaa75bb2d. May 15 23:51:10.171957 containerd[1483]: time="2025-05-15T23:51:10.171808149Z" level=info msg="StartContainer for \"48cc529089ba1ce5c7b35c57f89a15408b5fff8b6b6425152a328e93e39d3e9c\" returns successfully" May 15 23:51:10.178104 containerd[1483]: time="2025-05-15T23:51:10.178051587Z" level=info msg="StartContainer for \"5af37baa21e991b29ededf6ef62120dbe52ed9999c05d45855eb36f9d90c8619\" returns successfully" May 15 23:51:10.187828 kubelet[2206]: W0515 23:51:10.187776 2206 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused May 15 23:51:10.187967 kubelet[2206]: E0515 23:51:10.187839 2206 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" May 15 23:51:10.195261 containerd[1483]: time="2025-05-15T23:51:10.195150733Z" level=info msg="StartContainer for \"b54d6b1ff084ed5f9ed66fe6c61371b098daa98e650226018d157facaa75bb2d\" returns successfully" May 15 23:51:10.266617 kubelet[2206]: I0515 23:51:10.266513 2206 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 23:51:10.267177 kubelet[2206]: E0515 23:51:10.267151 2206 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" May 15 23:51:11.868787 kubelet[2206]: I0515 23:51:11.868738 2206 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 23:51:12.430286 kubelet[2206]: E0515 23:51:12.430240 2206 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 15 23:51:12.510179 kubelet[2206]: I0515 23:51:12.510021 2206 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 15 23:51:12.510179 kubelet[2206]: E0515 23:51:12.510059 2206 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 15 23:51:12.622312 kubelet[2206]: I0515 23:51:12.622277 2206 apiserver.go:52] "Watching apiserver" May 15 23:51:12.634711 kubelet[2206]: I0515 23:51:12.634674 2206 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 15 23:51:14.408360 systemd[1]: Reload requested from client PID 2484 ('systemctl') (unit session-7.scope)... 
May 15 23:51:14.408375 systemd[1]: Reloading... May 15 23:51:14.465529 zram_generator::config[2528]: No configuration found. May 15 23:51:14.582785 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 23:51:14.680774 systemd[1]: Reloading finished in 272 ms. May 15 23:51:14.702167 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:51:14.716707 systemd[1]: kubelet.service: Deactivated successfully. May 15 23:51:14.717067 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:51:14.717184 systemd[1]: kubelet.service: Consumed 1.066s CPU time, 130.3M memory peak. May 15 23:51:14.729685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:51:14.844796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:51:14.849411 (kubelet)[2570]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 23:51:14.884328 kubelet[2570]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 23:51:14.884328 kubelet[2570]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 23:51:14.884328 kubelet[2570]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 23:51:14.884700 kubelet[2570]: I0515 23:51:14.884375 2570 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 23:51:14.891686 kubelet[2570]: I0515 23:51:14.891641 2570 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 15 23:51:14.891686 kubelet[2570]: I0515 23:51:14.891672 2570 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 23:51:14.891930 kubelet[2570]: I0515 23:51:14.891900 2570 server.go:934] "Client rotation is on, will bootstrap in background" May 15 23:51:14.893850 kubelet[2570]: I0515 23:51:14.893811 2570 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 23:51:14.896003 kubelet[2570]: I0515 23:51:14.895981 2570 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 23:51:14.898911 kubelet[2570]: E0515 23:51:14.898831 2570 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 23:51:14.898911 kubelet[2570]: I0515 23:51:14.898866 2570 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 15 23:51:14.902895 kubelet[2570]: I0515 23:51:14.901852 2570 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 23:51:14.902895 kubelet[2570]: I0515 23:51:14.902001 2570 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 23:51:14.902895 kubelet[2570]: I0515 23:51:14.902095 2570 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 23:51:14.902895 kubelet[2570]: I0515 23:51:14.902116 2570 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 23:51:14.903127 kubelet[2570]: I0515 23:51:14.902391 2570 topology_manager.go:138] "Creating topology manager with none policy" May 15 23:51:14.903127 kubelet[2570]: I0515 23:51:14.902401 2570 container_manager_linux.go:300] "Creating device plugin manager" May 15 23:51:14.903127 kubelet[2570]: I0515 23:51:14.902485 2570 state_mem.go:36] "Initialized new in-memory state store" May 15 23:51:14.903127 kubelet[2570]: I0515 23:51:14.902616 2570 kubelet.go:408] "Attempting to sync node with API server" May 15 23:51:14.903127 kubelet[2570]: I0515 23:51:14.902636 2570 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 23:51:14.903127 kubelet[2570]: I0515 23:51:14.902655 2570 kubelet.go:314] "Adding apiserver pod source" May 15 23:51:14.903127 kubelet[2570]: I0515 23:51:14.902667 2570 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 23:51:14.903950 kubelet[2570]: I0515 23:51:14.903921 2570 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 15 23:51:14.904392 kubelet[2570]: I0515 23:51:14.904373 2570 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 23:51:14.904833 kubelet[2570]: I0515 23:51:14.904794 2570 server.go:1274] "Started kubelet" May 15 23:51:14.905155 kubelet[2570]: I0515 23:51:14.905129 2570 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 23:51:14.905254 kubelet[2570]: I0515 
23:51:14.905181 2570 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 23:51:14.905509 kubelet[2570]: I0515 23:51:14.905487 2570 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 23:51:14.906129 kubelet[2570]: I0515 23:51:14.906111 2570 server.go:449] "Adding debug handlers to kubelet server" May 15 23:51:14.912975 kubelet[2570]: I0515 23:51:14.912948 2570 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 23:51:14.916989 kubelet[2570]: I0515 23:51:14.916963 2570 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 23:51:14.918014 kubelet[2570]: E0515 23:51:14.917031 2570 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 23:51:14.918394 kubelet[2570]: E0515 23:51:14.918345 2570 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:51:14.918394 kubelet[2570]: I0515 23:51:14.918398 2570 volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 23:51:14.918551 kubelet[2570]: I0515 23:51:14.918539 2570 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 15 23:51:14.920254 kubelet[2570]: I0515 23:51:14.920168 2570 reconciler.go:26] "Reconciler: start to sync state" May 15 23:51:14.921483 kubelet[2570]: I0515 23:51:14.920478 2570 factory.go:221] Registration of the systemd container factory successfully May 15 23:51:14.921483 kubelet[2570]: I0515 23:51:14.920574 2570 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 23:51:14.923600 kubelet[2570]: I0515 23:51:14.923529 2570 factory.go:221] Registration of the containerd container factory successfully May 15 23:51:14.925839 kubelet[2570]: I0515 23:51:14.925769 2570 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 23:51:14.927010 kubelet[2570]: I0515 23:51:14.926976 2570 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 23:51:14.927010 kubelet[2570]: I0515 23:51:14.927003 2570 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 23:51:14.927107 kubelet[2570]: I0515 23:51:14.927020 2570 kubelet.go:2321] "Starting kubelet main sync loop" May 15 23:51:14.927107 kubelet[2570]: E0515 23:51:14.927062 2570 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 23:51:14.955965 kubelet[2570]: I0515 23:51:14.955873 2570 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 23:51:14.956513 kubelet[2570]: I0515 23:51:14.956096 2570 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 23:51:14.956513 kubelet[2570]: I0515 23:51:14.956122 2570 state_mem.go:36] "Initialized new in-memory state store" May 15 23:51:14.956513 kubelet[2570]: I0515 23:51:14.956272 2570 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 23:51:14.956513 kubelet[2570]: I0515 23:51:14.956284 2570 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 23:51:14.956513 kubelet[2570]: I0515 23:51:14.956302 2570 policy_none.go:49] "None policy: Start" May 15 23:51:14.957666 kubelet[2570]: I0515 23:51:14.957650 2570 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 23:51:14.958272 kubelet[2570]: I0515 23:51:14.957789 2570 state_mem.go:35] "Initializing new in-memory state store" May 15 23:51:14.958272 kubelet[2570]: I0515 23:51:14.957939 2570 state_mem.go:75] "Updated machine memory state" May 15 23:51:14.961813 kubelet[2570]: I0515 23:51:14.961786 2570 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 23:51:14.962159 kubelet[2570]: I0515 23:51:14.961977 2570 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 23:51:14.962159 kubelet[2570]: I0515 23:51:14.962006 2570 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 23:51:14.962250 kubelet[2570]: I0515 23:51:14.962183 2570 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 23:51:15.033460 kubelet[2570]: E0515 23:51:15.033386 2570 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 15 23:51:15.063819 kubelet[2570]: I0515 23:51:15.063785 2570 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 23:51:15.071187 kubelet[2570]: I0515 23:51:15.071154 2570 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 15 23:51:15.071287 kubelet[2570]: I0515 23:51:15.071227 2570 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 15 23:51:15.121371 kubelet[2570]: I0515 23:51:15.121337 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af951c6324c483c80428358b5b0ddb96-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"af951c6324c483c80428358b5b0ddb96\") " pod="kube-system/kube-apiserver-localhost" May 15 23:51:15.121657 kubelet[2570]: I0515 23:51:15.121476 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af951c6324c483c80428358b5b0ddb96-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"af951c6324c483c80428358b5b0ddb96\") " pod="kube-system/kube-apiserver-localhost" May 15 23:51:15.121657 kubelet[2570]: I0515 23:51:15.121504 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:51:15.121657 kubelet[2570]: I0515 23:51:15.121542 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:51:15.121657 kubelet[2570]: I0515 23:51:15.121560 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:51:15.121919 kubelet[2570]: I0515 23:51:15.121796 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 15 23:51:15.121919 kubelet[2570]: I0515 23:51:15.121839 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af951c6324c483c80428358b5b0ddb96-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"af951c6324c483c80428358b5b0ddb96\") " pod="kube-system/kube-apiserver-localhost" May 15 23:51:15.121919 kubelet[2570]: I0515 23:51:15.121859 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:51:15.121919 kubelet[2570]: I0515 23:51:15.121879 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:51:15.413908 sudo[2609]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 15 23:51:15.414187 sudo[2609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 15 23:51:15.845872 sudo[2609]: pam_unix(sudo:session): session closed for user root May 15 23:51:15.903684 kubelet[2570]: I0515 23:51:15.903640 2570 apiserver.go:52] "Watching apiserver" May 15 23:51:15.919563 kubelet[2570]: I0515 23:51:15.919510 2570 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 15 23:51:15.949309 kubelet[2570]: E0515 23:51:15.949259 2570 
kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 23:51:15.969139 kubelet[2570]: I0515 23:51:15.969066 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.969050028 podStartE2EDuration="2.969050028s" podCreationTimestamp="2025-05-15 23:51:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:51:15.963033285 +0000 UTC m=+1.110634740" watchObservedRunningTime="2025-05-15 23:51:15.969050028 +0000 UTC m=+1.116651483" May 15 23:51:15.975988 kubelet[2570]: I0515 23:51:15.975940 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.975918579 podStartE2EDuration="975.918579ms" podCreationTimestamp="2025-05-15 23:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:51:15.969780988 +0000 UTC m=+1.117382443" watchObservedRunningTime="2025-05-15 23:51:15.975918579 +0000 UTC m=+1.123519994" May 15 23:51:15.984665 kubelet[2570]: I0515 23:51:15.984616 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.984591165 podStartE2EDuration="984.591165ms" podCreationTimestamp="2025-05-15 23:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:51:15.976419478 +0000 UTC m=+1.124020932" watchObservedRunningTime="2025-05-15 23:51:15.984591165 +0000 UTC m=+1.132192620" May 15 23:51:17.659795 sudo[1671]: pam_unix(sudo:session): session closed for user root May 15 23:51:17.660949 sshd[1670]: Connection closed by 10.0.0.1 port 43728 May 15 23:51:17.661512 sshd-session[1667]: pam_unix(sshd:session): session closed for user core May 15 23:51:17.665236 systemd[1]: sshd@6-10.0.0.54:22-10.0.0.1:43728.service: Deactivated successfully. May 15 23:51:17.667211 systemd[1]: session-7.scope: Deactivated successfully. May 15 23:51:17.667407 systemd[1]: session-7.scope: Consumed 7.508s CPU time, 264.8M memory peak. May 15 23:51:17.668347 systemd-logind[1468]: Session 7 logged out. Waiting for processes to exit. May 15 23:51:17.669164 systemd-logind[1468]: Removed session 7. May 15 23:51:21.412666 kubelet[2570]: I0515 23:51:21.412621 2570 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 23:51:21.413162 containerd[1483]: time="2025-05-15T23:51:21.413092697Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 15 23:51:21.413401 kubelet[2570]: I0515 23:51:21.413354 2570 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 23:51:22.114140 kubelet[2570]: W0515 23:51:22.112458 2570 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 15 23:51:22.114140 kubelet[2570]: E0515 23:51:22.112511 2570 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 15 23:51:22.125635 systemd[1]: Created slice kubepods-besteffort-pod79234700_df9e_4a5a_93fe_6cd4a7c19afd.slice - libcontainer container kubepods-besteffort-pod79234700_df9e_4a5a_93fe_6cd4a7c19afd.slice. May 15 23:51:22.149902 systemd[1]: Created slice kubepods-burstable-podaa78e680_5fc0_4af7_879e_c6a15b20cc91.slice - libcontainer container kubepods-burstable-podaa78e680_5fc0_4af7_879e_c6a15b20cc91.slice. May 15 23:51:22.166845 kubelet[2570]: I0515 23:51:22.166795 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/79234700-df9e-4a5a-93fe-6cd4a7c19afd-kube-proxy\") pod \"kube-proxy-24zm8\" (UID: \"79234700-df9e-4a5a-93fe-6cd4a7c19afd\") " pod="kube-system/kube-proxy-24zm8" May 15 23:51:22.166845 kubelet[2570]: I0515 23:51:22.166839 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-hostproc\") pod \"cilium-bst2r\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " pod="kube-system/cilium-bst2r" May 15 23:51:22.167002 kubelet[2570]: I0515 23:51:22.166866 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aa78e680-5fc0-4af7-879e-c6a15b20cc91-clustermesh-secrets\") pod \"cilium-bst2r\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " pod="kube-system/cilium-bst2r" May 15 23:51:22.167002 kubelet[2570]: I0515 23:51:22.166883 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv9wh\" (UniqueName: \"kubernetes.io/projected/aa78e680-5fc0-4af7-879e-c6a15b20cc91-kube-api-access-mv9wh\") pod \"cilium-bst2r\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " pod="kube-system/cilium-bst2r" May 15 23:51:22.167002 kubelet[2570]: I0515 23:51:22.166903 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79234700-df9e-4a5a-93fe-6cd4a7c19afd-lib-modules\") pod \"kube-proxy-24zm8\" (UID: \"79234700-df9e-4a5a-93fe-6cd4a7c19afd\") " pod="kube-system/kube-proxy-24zm8" May 15 23:51:22.167002 kubelet[2570]: I0515 23:51:22.166917 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-host-proc-sys-net\") pod \"cilium-bst2r\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " pod="kube-system/cilium-bst2r" May 15 23:51:22.167002 kubelet[2570]: I0515 23:51:22.166932 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-lib-modules\") pod \"cilium-bst2r\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " pod="kube-system/cilium-bst2r" May 15 23:51:22.167121 kubelet[2570]: I0515 23:51:22.166949 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa78e680-5fc0-4af7-879e-c6a15b20cc91-cilium-config-path\") pod \"cilium-bst2r\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " pod="kube-system/cilium-bst2r" May 15 23:51:22.167121 kubelet[2570]: I0515 23:51:22.166971 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-cni-path\") pod \"cilium-bst2r\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " pod="kube-system/cilium-bst2r" May 15 23:51:22.167121 kubelet[2570]: I0515 23:51:22.166986 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-xtables-lock\") pod \"cilium-bst2r\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " pod="kube-system/cilium-bst2r" May 15 23:51:22.167121 kubelet[2570]: I0515 23:51:22.167004 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-cilium-cgroup\") pod \"cilium-bst2r\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " pod="kube-system/cilium-bst2r" May 15 23:51:22.167121 kubelet[2570]: I0515 23:51:22.167019 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-etc-cni-netd\") pod \"cilium-bst2r\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " pod="kube-system/cilium-bst2r" May 15 23:51:22.167121 kubelet[2570]: I0515 23:51:22.167063 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aa78e680-5fc0-4af7-879e-c6a15b20cc91-hubble-tls\") pod \"cilium-bst2r\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " pod="kube-system/cilium-bst2r" May 15 23:51:22.167258 kubelet[2570]: I0515 23:51:22.167121 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79234700-df9e-4a5a-93fe-6cd4a7c19afd-xtables-lock\") pod \"kube-proxy-24zm8\" (UID: \"79234700-df9e-4a5a-93fe-6cd4a7c19afd\") " pod="kube-system/kube-proxy-24zm8" May 15 23:51:22.167258 kubelet[2570]: I0515 23:51:22.167145 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcmph\" (UniqueName: \"kubernetes.io/projected/79234700-df9e-4a5a-93fe-6cd4a7c19afd-kube-api-access-mcmph\") pod \"kube-proxy-24zm8\" (UID: \"79234700-df9e-4a5a-93fe-6cd4a7c19afd\") " 
pod="kube-system/kube-proxy-24zm8" May 15 23:51:22.167258 kubelet[2570]: I0515 23:51:22.167172 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-cilium-run\") pod \"cilium-bst2r\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " pod="kube-system/cilium-bst2r" May 15 23:51:22.167258 kubelet[2570]: I0515 23:51:22.167188 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-bpf-maps\") pod \"cilium-bst2r\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " pod="kube-system/cilium-bst2r" May 15 23:51:22.167258 kubelet[2570]: I0515 23:51:22.167203 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-host-proc-sys-kernel\") pod \"cilium-bst2r\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " pod="kube-system/cilium-bst2r" May 15 23:51:22.466345 systemd[1]: Created slice kubepods-besteffort-pod400c2edf_1b93_4924_897c_78ad1195c06b.slice - libcontainer container kubepods-besteffort-pod400c2edf_1b93_4924_897c_78ad1195c06b.slice. May 15 23:51:22.469426 kubelet[2570]: I0515 23:51:22.469094 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kctwl\" (UniqueName: \"kubernetes.io/projected/400c2edf-1b93-4924-897c-78ad1195c06b-kube-api-access-kctwl\") pod \"cilium-operator-5d85765b45-mqk2j\" (UID: \"400c2edf-1b93-4924-897c-78ad1195c06b\") " pod="kube-system/cilium-operator-5d85765b45-mqk2j" May 15 23:51:22.469426 kubelet[2570]: I0515 23:51:22.469168 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/400c2edf-1b93-4924-897c-78ad1195c06b-cilium-config-path\") pod \"cilium-operator-5d85765b45-mqk2j\" (UID: \"400c2edf-1b93-4924-897c-78ad1195c06b\") " pod="kube-system/cilium-operator-5d85765b45-mqk2j" May 15 23:51:23.277373 kubelet[2570]: E0515 23:51:23.277335 2570 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 15 23:51:23.277373 kubelet[2570]: E0515 23:51:23.277369 2570 projected.go:194] Error preparing data for projected volume kube-api-access-mcmph for pod kube-system/kube-proxy-24zm8: failed to sync configmap cache: timed out waiting for the condition May 15 23:51:23.277560 kubelet[2570]: E0515 23:51:23.277479 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/79234700-df9e-4a5a-93fe-6cd4a7c19afd-kube-api-access-mcmph podName:79234700-df9e-4a5a-93fe-6cd4a7c19afd nodeName:}" failed. No retries permitted until 2025-05-15 23:51:23.777415286 +0000 UTC m=+8.925016741 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-mcmph" (UniqueName: "kubernetes.io/projected/79234700-df9e-4a5a-93fe-6cd4a7c19afd-kube-api-access-mcmph") pod "kube-proxy-24zm8" (UID: "79234700-df9e-4a5a-93fe-6cd4a7c19afd") : failed to sync configmap cache: timed out waiting for the condition May 15 23:51:23.279595 kubelet[2570]: E0515 23:51:23.279501 2570 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 15 23:51:23.279595 kubelet[2570]: E0515 23:51:23.279529 2570 projected.go:194] Error preparing data for projected volume kube-api-access-mv9wh for pod kube-system/cilium-bst2r: failed to sync configmap cache: timed out waiting for the condition May 15 23:51:23.279595 kubelet[2570]: E0515 23:51:23.279574 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aa78e680-5fc0-4af7-879e-c6a15b20cc91-kube-api-access-mv9wh podName:aa78e680-5fc0-4af7-879e-c6a15b20cc91 nodeName:}" failed. No retries permitted until 2025-05-15 23:51:23.779560991 +0000 UTC m=+8.927162406 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mv9wh" (UniqueName: "kubernetes.io/projected/aa78e680-5fc0-4af7-879e-c6a15b20cc91-kube-api-access-mv9wh") pod "cilium-bst2r" (UID: "aa78e680-5fc0-4af7-879e-c6a15b20cc91") : failed to sync configmap cache: timed out waiting for the condition May 15 23:51:23.370421 containerd[1483]: time="2025-05-15T23:51:23.370368128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-mqk2j,Uid:400c2edf-1b93-4924-897c-78ad1195c06b,Namespace:kube-system,Attempt:0,}" May 15 23:51:23.393969 containerd[1483]: time="2025-05-15T23:51:23.393767153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:51:23.393969 containerd[1483]: time="2025-05-15T23:51:23.393830118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:51:23.393969 containerd[1483]: time="2025-05-15T23:51:23.393855160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:51:23.394273 containerd[1483]: time="2025-05-15T23:51:23.393961449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:51:23.416607 systemd[1]: Started cri-containerd-4c9bdf900563de2d45a7c33e2d730fb3a8c2edb09883fbd53979c69c94dd8b9f.scope - libcontainer container 4c9bdf900563de2d45a7c33e2d730fb3a8c2edb09883fbd53979c69c94dd8b9f. 
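The MountVolume.SetUp failures above are not fatal: the projected kube-api-access volumes cannot be built until the kube-root-ca.crt ConfigMap has synced (the earlier RBAC warning explains the delay), so each operation is parked with durationBeforeRetry 500ms and retried. A sketch of that retry-with-growing-delay pattern; only the 500ms starting value comes from the log, while the doubling, cap, and attempt limit are illustrative assumptions:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// setUpWithRetry re-attempts a volume SetUp with a growing delay between failures,
// until it succeeds or the attempt budget is exhausted.
func setUpWithRetry(setUp func() error, initial, max time.Duration, maxAttempts int) error {
	delay := initial
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		err := setUp()
		if err == nil {
			return nil
		}
		fmt.Printf("attempt %d: %v; no retries permitted for %s\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
		if delay > max {
			delay = max
		}
	}
	return errors.New("giving up")
}

func main() {
	calls := 0
	err := setUpWithRetry(func() error {
		calls++
		if calls < 3 { // pretend the configmap cache syncs on the third attempt
			return errors.New("failed to sync configmap cache: timed out waiting for the condition")
		}
		return nil
	}, 500*time.Millisecond, 2*time.Minute, 10)
	fmt.Println("mounted:", err == nil)
}
```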
May 15 23:51:23.442138 containerd[1483]: time="2025-05-15T23:51:23.442077973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-mqk2j,Uid:400c2edf-1b93-4924-897c-78ad1195c06b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c9bdf900563de2d45a7c33e2d730fb3a8c2edb09883fbd53979c69c94dd8b9f\"" May 15 23:51:23.444779 containerd[1483]: time="2025-05-15T23:51:23.444749124Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 15 23:51:23.940768 containerd[1483]: time="2025-05-15T23:51:23.940713115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-24zm8,Uid:79234700-df9e-4a5a-93fe-6cd4a7c19afd,Namespace:kube-system,Attempt:0,}" May 15 23:51:23.955460 containerd[1483]: time="2025-05-15T23:51:23.955236451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bst2r,Uid:aa78e680-5fc0-4af7-879e-c6a15b20cc91,Namespace:kube-system,Attempt:0,}" May 15 23:51:23.960409 containerd[1483]: time="2025-05-15T23:51:23.959348807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:51:23.960409 containerd[1483]: time="2025-05-15T23:51:23.959397371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:51:23.960409 containerd[1483]: time="2025-05-15T23:51:23.959407492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:51:23.960409 containerd[1483]: time="2025-05-15T23:51:23.959494580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:51:23.974944 containerd[1483]: time="2025-05-15T23:51:23.974835947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:51:23.974944 containerd[1483]: time="2025-05-15T23:51:23.974890672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:51:23.976498 containerd[1483]: time="2025-05-15T23:51:23.975492844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:51:23.976498 containerd[1483]: time="2025-05-15T23:51:23.975791590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:51:23.988606 systemd[1]: Started cri-containerd-cc59fc4668d52920d21fa7c8cf25acc2cde374a3ba3e3aa4c1fba9e1f52b208c.scope - libcontainer container cc59fc4668d52920d21fa7c8cf25acc2cde374a3ba3e3aa4c1fba9e1f52b208c. May 15 23:51:23.992811 systemd[1]: Started cri-containerd-93761ce16782cf53574aa79b00a8dc939d3caac1c467c64f2bc10dd968e4a551.scope - libcontainer container 93761ce16782cf53574aa79b00a8dc939d3caac1c467c64f2bc10dd968e4a551. 
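The PullImage request above names the operator image by tag and digest at once (v1.12.5@sha256:…). A quick sketch of how such a reference decomposes (simplified string handling, not containerd's reference parser); the digest is what the pull is pinned to, which matches the later "Pulled image" entry recording an empty repo tag and only the digest:

```go
package main

import (
	"fmt"
	"strings"
)

// splitRef breaks an image reference of the form repo[:tag][@digest] into its parts.
// Simplified: it ignores the registry-port corner case (host:5000/...), which does
// not occur in the references logged here.
func splitRef(ref string) (repo, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		ref, digest = ref[:i], ref[i+1:]
	}
	if i := strings.LastIndex(ref, ":"); i >= 0 {
		ref, tag = ref[:i], ref[i+1:]
	}
	return ref, tag, digest
}

func main() {
	ref := "quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"
	repo, tag, digest := splitRef(ref)
	fmt.Println("repo:  ", repo)
	fmt.Println("tag:   ", tag)    // advisory once a digest is present
	fmt.Println("digest:", digest) // what the content is actually resolved by
}
```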
May 15 23:51:24.014938 containerd[1483]: time="2025-05-15T23:51:24.014680974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-24zm8,Uid:79234700-df9e-4a5a-93fe-6cd4a7c19afd,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc59fc4668d52920d21fa7c8cf25acc2cde374a3ba3e3aa4c1fba9e1f52b208c\"" May 15 23:51:24.015829 containerd[1483]: time="2025-05-15T23:51:24.015777904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bst2r,Uid:aa78e680-5fc0-4af7-879e-c6a15b20cc91,Namespace:kube-system,Attempt:0,} returns sandbox id \"93761ce16782cf53574aa79b00a8dc939d3caac1c467c64f2bc10dd968e4a551\"" May 15 23:51:24.017870 containerd[1483]: time="2025-05-15T23:51:24.017736865Z" level=info msg="CreateContainer within sandbox \"cc59fc4668d52920d21fa7c8cf25acc2cde374a3ba3e3aa4c1fba9e1f52b208c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 23:51:24.034644 containerd[1483]: time="2025-05-15T23:51:24.034606047Z" level=info msg="CreateContainer within sandbox \"cc59fc4668d52920d21fa7c8cf25acc2cde374a3ba3e3aa4c1fba9e1f52b208c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2594c8f2b827a598b7ab01ff07809e62880471d05b01ede7745e0f5741533c70\"" May 15 23:51:24.035853 containerd[1483]: time="2025-05-15T23:51:24.035265701Z" level=info msg="StartContainer for \"2594c8f2b827a598b7ab01ff07809e62880471d05b01ede7745e0f5741533c70\"" May 15 23:51:24.058605 systemd[1]: Started cri-containerd-2594c8f2b827a598b7ab01ff07809e62880471d05b01ede7745e0f5741533c70.scope - libcontainer container 2594c8f2b827a598b7ab01ff07809e62880471d05b01ede7745e0f5741533c70. May 15 23:51:24.082771 containerd[1483]: time="2025-05-15T23:51:24.082666426Z" level=info msg="StartContainer for \"2594c8f2b827a598b7ab01ff07809e62880471d05b01ede7745e0f5741533c70\" returns successfully" May 15 23:51:24.520367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2437955354.mount: Deactivated successfully. 
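The kube-proxy pod above walks through the same three CRI calls every pod in this log does: RunPodSandbox returns a sandbox ID, CreateContainer places a container inside that sandbox, and StartContainer runs it. A toy, in-memory model of that sequence (illustrative only; the real interface is the CRI gRPC RuntimeService that containerd serves):

```go
package main

import "fmt"

// Runtime captures the three calls the kubelet makes through the CRI for each pod
// in this log: sandbox first, then one container per entry in the pod spec.
type Runtime interface {
	RunPodSandbox(pod string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

// fakeRuntime is an in-memory stand-in so the sequence can be run anywhere.
type fakeRuntime struct{ next int }

func (f *fakeRuntime) id() string { f.next++; return fmt.Sprintf("%064x", f.next) }

func (f *fakeRuntime) RunPodSandbox(pod string) (string, error)        { return f.id(), nil }
func (f *fakeRuntime) CreateContainer(sb, name string) (string, error) { return f.id(), nil }
func (f *fakeRuntime) StartContainer(id string) error                  { return nil }

func main() {
	var rt Runtime = &fakeRuntime{}

	sb, _ := rt.RunPodSandbox("kube-system/kube-proxy-24zm8")
	ctr, _ := rt.CreateContainer(sb, "kube-proxy")
	if err := rt.StartContainer(ctr); err == nil {
		fmt.Printf("StartContainer for %q returns successfully (sandbox %s)\n", ctr, sb)
	}
}
```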
May 15 23:51:24.970173 kubelet[2570]: I0515 23:51:24.970023 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-24zm8" podStartSLOduration=2.970006547 podStartE2EDuration="2.970006547s" podCreationTimestamp="2025-05-15 23:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:51:24.969055669 +0000 UTC m=+10.116657124" watchObservedRunningTime="2025-05-15 23:51:24.970006547 +0000 UTC m=+10.117608002" May 15 23:51:26.105157 containerd[1483]: time="2025-05-15T23:51:26.104667312Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:51:26.106188 containerd[1483]: time="2025-05-15T23:51:26.106164263Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 15 23:51:26.107081 containerd[1483]: time="2025-05-15T23:51:26.107019166Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:51:26.108432 containerd[1483]: time="2025-05-15T23:51:26.108238215Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.663455249s" May 15 23:51:26.108432 containerd[1483]: time="2025-05-15T23:51:26.108279098Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 15 23:51:26.115600 containerd[1483]: time="2025-05-15T23:51:26.113884231Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 15 23:51:26.115600 containerd[1483]: time="2025-05-15T23:51:26.114997873Z" level=info msg="CreateContainer within sandbox \"4c9bdf900563de2d45a7c33e2d730fb3a8c2edb09883fbd53979c69c94dd8b9f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 15 23:51:26.135302 containerd[1483]: time="2025-05-15T23:51:26.135254205Z" level=info msg="CreateContainer within sandbox \"4c9bdf900563de2d45a7c33e2d730fb3a8c2edb09883fbd53979c69c94dd8b9f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"48f003653ca8c5af64b8920a7da38999a023d3a3b9f043a6030dff8e1d71b49e\"" May 15 23:51:26.135725 containerd[1483]: time="2025-05-15T23:51:26.135701678Z" level=info msg="StartContainer for \"48f003653ca8c5af64b8920a7da38999a023d3a3b9f043a6030dff8e1d71b49e\"" May 15 23:51:26.163599 systemd[1]: Started cri-containerd-48f003653ca8c5af64b8920a7da38999a023d3a3b9f043a6030dff8e1d71b49e.scope - libcontainer container 48f003653ca8c5af64b8920a7da38999a023d3a3b9f043a6030dff8e1d71b49e. 
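The "Pulled image" entry above reports both a payload size (17128551 bytes) and a wall-clock duration (2.663455249s), which is enough to back out an effective pull rate; a small worked calculation (the two numbers are the log's, the MB/s framing is just arithmetic):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const sizeBytes = 17128551                       // size "17128551" from the Pulled image entry
	elapsed, _ := time.ParseDuration("2.663455249s") // "... in 2.663455249s" from the same entry

	rate := float64(sizeBytes) / elapsed.Seconds() // bytes per second
	fmt.Printf("pulled %.1f MiB in %s ~ %.1f MB/s\n",
		float64(sizeBytes)/(1<<20), elapsed, rate/1e6)
}
```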
May 15 23:51:26.186263 containerd[1483]: time="2025-05-15T23:51:26.186137233Z" level=info msg="StartContainer for \"48f003653ca8c5af64b8920a7da38999a023d3a3b9f043a6030dff8e1d71b49e\" returns successfully" May 15 23:51:26.988220 kubelet[2570]: I0515 23:51:26.988158 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-mqk2j" podStartSLOduration=2.319896978 podStartE2EDuration="4.988135708s" podCreationTimestamp="2025-05-15 23:51:22 +0000 UTC" firstStartedPulling="2025-05-15 23:51:23.444087706 +0000 UTC m=+8.591689161" lastFinishedPulling="2025-05-15 23:51:26.112326436 +0000 UTC m=+11.259927891" observedRunningTime="2025-05-15 23:51:26.984717257 +0000 UTC m=+12.132318712" watchObservedRunningTime="2025-05-15 23:51:26.988135708 +0000 UTC m=+12.135737123" May 15 23:51:30.090011 update_engine[1472]: I20250515 23:51:30.089351 1472 update_attempter.cc:509] Updating boot flags... May 15 23:51:30.132501 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3008) May 15 23:51:30.203627 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3010) May 15 23:51:30.246491 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3010) May 15 23:51:30.920060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3312285922.mount: Deactivated successfully. May 15 23:51:37.380760 containerd[1483]: time="2025-05-15T23:51:37.380689527Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:51:37.381338 containerd[1483]: time="2025-05-15T23:51:37.381300713Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 15 23:51:37.382027 containerd[1483]: time="2025-05-15T23:51:37.381969582Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:51:37.383784 containerd[1483]: time="2025-05-15T23:51:37.383653414Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 11.26972082s" May 15 23:51:37.383784 containerd[1483]: time="2025-05-15T23:51:37.383700096Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 15 23:51:37.390307 containerd[1483]: time="2025-05-15T23:51:37.390263859Z" level=info msg="CreateContainer within sandbox \"93761ce16782cf53574aa79b00a8dc939d3caac1c467c64f2bc10dd968e4a551\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 23:51:37.418006 containerd[1483]: time="2025-05-15T23:51:37.417947971Z" level=info msg="CreateContainer within sandbox \"93761ce16782cf53574aa79b00a8dc939d3caac1c467c64f2bc10dd968e4a551\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"9e6efebe4071d9c35a878b502068f15ecf949154037cb1b792231b368f9a2300\"" May 15 23:51:37.418526 containerd[1483]: time="2025-05-15T23:51:37.418477834Z" level=info msg="StartContainer for \"9e6efebe4071d9c35a878b502068f15ecf949154037cb1b792231b368f9a2300\"" May 15 23:51:37.453645 systemd[1]: Started cri-containerd-9e6efebe4071d9c35a878b502068f15ecf949154037cb1b792231b368f9a2300.scope - libcontainer container 9e6efebe4071d9c35a878b502068f15ecf949154037cb1b792231b368f9a2300. May 15 23:51:37.475880 containerd[1483]: time="2025-05-15T23:51:37.474848381Z" level=info msg="StartContainer for \"9e6efebe4071d9c35a878b502068f15ecf949154037cb1b792231b368f9a2300\" returns successfully" May 15 23:51:37.541297 systemd[1]: cri-containerd-9e6efebe4071d9c35a878b502068f15ecf949154037cb1b792231b368f9a2300.scope: Deactivated successfully. May 15 23:51:37.541676 systemd[1]: cri-containerd-9e6efebe4071d9c35a878b502068f15ecf949154037cb1b792231b368f9a2300.scope: Consumed 70ms CPU time, 8.8M memory peak, 3.1M written to disk. May 15 23:51:37.730977 containerd[1483]: time="2025-05-15T23:51:37.725306605Z" level=info msg="shim disconnected" id=9e6efebe4071d9c35a878b502068f15ecf949154037cb1b792231b368f9a2300 namespace=k8s.io May 15 23:51:37.730977 containerd[1483]: time="2025-05-15T23:51:37.730897566Z" level=warning msg="cleaning up after shim disconnected" id=9e6efebe4071d9c35a878b502068f15ecf949154037cb1b792231b368f9a2300 namespace=k8s.io May 15 23:51:37.730977 containerd[1483]: time="2025-05-15T23:51:37.730911806Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:51:37.989991 containerd[1483]: time="2025-05-15T23:51:37.989816394Z" level=info msg="CreateContainer within sandbox \"93761ce16782cf53574aa79b00a8dc939d3caac1c467c64f2bc10dd968e4a551\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 23:51:38.007827 containerd[1483]: time="2025-05-15T23:51:38.007721033Z" level=info msg="CreateContainer within sandbox \"93761ce16782cf53574aa79b00a8dc939d3caac1c467c64f2bc10dd968e4a551\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4235476d46cdb292632d04fb2a884ab0a19b352bab4ae0190308e304a19f07b9\"" May 15 23:51:38.012866 containerd[1483]: time="2025-05-15T23:51:38.010319660Z" level=info msg="StartContainer for \"4235476d46cdb292632d04fb2a884ab0a19b352bab4ae0190308e304a19f07b9\"" May 15 23:51:38.049626 systemd[1]: Started cri-containerd-4235476d46cdb292632d04fb2a884ab0a19b352bab4ae0190308e304a19f07b9.scope - libcontainer container 4235476d46cdb292632d04fb2a884ab0a19b352bab4ae0190308e304a19f07b9. May 15 23:51:38.069398 containerd[1483]: time="2025-05-15T23:51:38.069360453Z" level=info msg="StartContainer for \"4235476d46cdb292632d04fb2a884ab0a19b352bab4ae0190308e304a19f07b9\" returns successfully" May 15 23:51:38.105733 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 23:51:38.105945 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 23:51:38.106191 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 15 23:51:38.115825 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 23:51:38.116810 systemd[1]: cri-containerd-4235476d46cdb292632d04fb2a884ab0a19b352bab4ae0190308e304a19f07b9.scope: Deactivated successfully. May 15 23:51:38.128543 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 15 23:51:38.152068 containerd[1483]: time="2025-05-15T23:51:38.151997538Z" level=info msg="shim disconnected" id=4235476d46cdb292632d04fb2a884ab0a19b352bab4ae0190308e304a19f07b9 namespace=k8s.io May 15 23:51:38.152068 containerd[1483]: time="2025-05-15T23:51:38.152063541Z" level=warning msg="cleaning up after shim disconnected" id=4235476d46cdb292632d04fb2a884ab0a19b352bab4ae0190308e304a19f07b9 namespace=k8s.io May 15 23:51:38.152068 containerd[1483]: time="2025-05-15T23:51:38.152076381Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:51:38.412888 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e6efebe4071d9c35a878b502068f15ecf949154037cb1b792231b368f9a2300-rootfs.mount: Deactivated successfully. May 15 23:51:38.992663 containerd[1483]: time="2025-05-15T23:51:38.992611577Z" level=info msg="CreateContainer within sandbox \"93761ce16782cf53574aa79b00a8dc939d3caac1c467c64f2bc10dd968e4a551\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 23:51:39.076945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3364358542.mount: Deactivated successfully. May 15 23:51:39.078716 containerd[1483]: time="2025-05-15T23:51:39.078668749Z" level=info msg="CreateContainer within sandbox \"93761ce16782cf53574aa79b00a8dc939d3caac1c467c64f2bc10dd968e4a551\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e7d71155dd01caf00638508676a664bee9037afdb6fe850e3ed1180803eb204a\"" May 15 23:51:39.079338 containerd[1483]: time="2025-05-15T23:51:39.079204051Z" level=info msg="StartContainer for \"e7d71155dd01caf00638508676a664bee9037afdb6fe850e3ed1180803eb204a\"" May 15 23:51:39.108615 systemd[1]: Started cri-containerd-e7d71155dd01caf00638508676a664bee9037afdb6fe850e3ed1180803eb204a.scope - libcontainer container e7d71155dd01caf00638508676a664bee9037afdb6fe850e3ed1180803eb204a. May 15 23:51:39.133917 containerd[1483]: time="2025-05-15T23:51:39.133810646Z" level=info msg="StartContainer for \"e7d71155dd01caf00638508676a664bee9037afdb6fe850e3ed1180803eb204a\" returns successfully" May 15 23:51:39.154796 systemd[1]: cri-containerd-e7d71155dd01caf00638508676a664bee9037afdb6fe850e3ed1180803eb204a.scope: Deactivated successfully. May 15 23:51:39.176189 containerd[1483]: time="2025-05-15T23:51:39.176117796Z" level=info msg="shim disconnected" id=e7d71155dd01caf00638508676a664bee9037afdb6fe850e3ed1180803eb204a namespace=k8s.io May 15 23:51:39.176189 containerd[1483]: time="2025-05-15T23:51:39.176187719Z" level=warning msg="cleaning up after shim disconnected" id=e7d71155dd01caf00638508676a664bee9037afdb6fe850e3ed1180803eb204a namespace=k8s.io May 15 23:51:39.176189 containerd[1483]: time="2025-05-15T23:51:39.176196799Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:51:39.412471 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7d71155dd01caf00638508676a664bee9037afdb6fe850e3ed1180803eb204a-rootfs.mount: Deactivated successfully. May 15 23:51:39.996545 containerd[1483]: time="2025-05-15T23:51:39.996510379Z" level=info msg="CreateContainer within sandbox \"93761ce16782cf53574aa79b00a8dc939d3caac1c467c64f2bc10dd968e4a551\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 23:51:40.010235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2717193084.mount: Deactivated successfully. 
May 15 23:51:40.014525 containerd[1483]: time="2025-05-15T23:51:40.014478787Z" level=info msg="CreateContainer within sandbox \"93761ce16782cf53574aa79b00a8dc939d3caac1c467c64f2bc10dd968e4a551\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f7d49ceec1a2915515fc9bf387ca5afa78c83349de13ed8c83971cba7c03f342\"" May 15 23:51:40.014993 containerd[1483]: time="2025-05-15T23:51:40.014972565Z" level=info msg="StartContainer for \"f7d49ceec1a2915515fc9bf387ca5afa78c83349de13ed8c83971cba7c03f342\"" May 15 23:51:40.040622 systemd[1]: Started cri-containerd-f7d49ceec1a2915515fc9bf387ca5afa78c83349de13ed8c83971cba7c03f342.scope - libcontainer container f7d49ceec1a2915515fc9bf387ca5afa78c83349de13ed8c83971cba7c03f342. May 15 23:51:40.058464 systemd[1]: cri-containerd-f7d49ceec1a2915515fc9bf387ca5afa78c83349de13ed8c83971cba7c03f342.scope: Deactivated successfully. May 15 23:51:40.059684 containerd[1483]: time="2025-05-15T23:51:40.059585294Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa78e680_5fc0_4af7_879e_c6a15b20cc91.slice/cri-containerd-f7d49ceec1a2915515fc9bf387ca5afa78c83349de13ed8c83971cba7c03f342.scope/memory.events\": no such file or directory" May 15 23:51:40.061936 containerd[1483]: time="2025-05-15T23:51:40.061892341Z" level=info msg="StartContainer for \"f7d49ceec1a2915515fc9bf387ca5afa78c83349de13ed8c83971cba7c03f342\" returns successfully" May 15 23:51:40.085336 containerd[1483]: time="2025-05-15T23:51:40.085136661Z" level=info msg="shim disconnected" id=f7d49ceec1a2915515fc9bf387ca5afa78c83349de13ed8c83971cba7c03f342 namespace=k8s.io May 15 23:51:40.085336 containerd[1483]: time="2025-05-15T23:51:40.085196263Z" level=warning msg="cleaning up after shim disconnected" id=f7d49ceec1a2915515fc9bf387ca5afa78c83349de13ed8c83971cba7c03f342 namespace=k8s.io May 15 23:51:40.085336 containerd[1483]: time="2025-05-15T23:51:40.085204663Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:51:40.412504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7d49ceec1a2915515fc9bf387ca5afa78c83349de13ed8c83971cba7c03f342-rootfs.mount: Deactivated successfully. May 15 23:51:41.000536 containerd[1483]: time="2025-05-15T23:51:41.000071408Z" level=info msg="CreateContainer within sandbox \"93761ce16782cf53574aa79b00a8dc939d3caac1c467c64f2bc10dd968e4a551\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 23:51:41.023250 containerd[1483]: time="2025-05-15T23:51:41.023207770Z" level=info msg="CreateContainer within sandbox \"93761ce16782cf53574aa79b00a8dc939d3caac1c467c64f2bc10dd968e4a551\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"488d7b0d90f766d2640cdea60bb9539e8812b6e014a8ae73294362ad4884e35f\"" May 15 23:51:41.023948 containerd[1483]: time="2025-05-15T23:51:41.023922836Z" level=info msg="StartContainer for \"488d7b0d90f766d2640cdea60bb9539e8812b6e014a8ae73294362ad4884e35f\"" May 15 23:51:41.070650 systemd[1]: Started cri-containerd-488d7b0d90f766d2640cdea60bb9539e8812b6e014a8ae73294362ad4884e35f.scope - libcontainer container 488d7b0d90f766d2640cdea60bb9539e8812b6e014a8ae73294362ad4884e35f. 
May 15 23:51:41.125276 containerd[1483]: time="2025-05-15T23:51:41.125226915Z" level=info msg="StartContainer for \"488d7b0d90f766d2640cdea60bb9539e8812b6e014a8ae73294362ad4884e35f\" returns successfully" May 15 23:51:41.319459 kubelet[2570]: I0515 23:51:41.318220 2570 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 15 23:51:41.366323 systemd[1]: Created slice kubepods-burstable-podca458401_b6a3_41c6_a143_e77eae440d54.slice - libcontainer container kubepods-burstable-podca458401_b6a3_41c6_a143_e77eae440d54.slice. May 15 23:51:41.376721 systemd[1]: Created slice kubepods-burstable-pod54aba514_b816_4567_9178_e18e1b995ba6.slice - libcontainer container kubepods-burstable-pod54aba514_b816_4567_9178_e18e1b995ba6.slice. May 15 23:51:41.500383 kubelet[2570]: I0515 23:51:41.497979 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j99z\" (UniqueName: \"kubernetes.io/projected/54aba514-b816-4567-9178-e18e1b995ba6-kube-api-access-7j99z\") pod \"coredns-7c65d6cfc9-k6prr\" (UID: \"54aba514-b816-4567-9178-e18e1b995ba6\") " pod="kube-system/coredns-7c65d6cfc9-k6prr" May 15 23:51:41.500383 kubelet[2570]: I0515 23:51:41.498032 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54aba514-b816-4567-9178-e18e1b995ba6-config-volume\") pod \"coredns-7c65d6cfc9-k6prr\" (UID: \"54aba514-b816-4567-9178-e18e1b995ba6\") " pod="kube-system/coredns-7c65d6cfc9-k6prr" May 15 23:51:41.500383 kubelet[2570]: I0515 23:51:41.498053 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca458401-b6a3-41c6-a143-e77eae440d54-config-volume\") pod \"coredns-7c65d6cfc9-jlbq7\" (UID: \"ca458401-b6a3-41c6-a143-e77eae440d54\") " pod="kube-system/coredns-7c65d6cfc9-jlbq7" May 15 23:51:41.500383 kubelet[2570]: I0515 23:51:41.498070 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9t8q\" (UniqueName: \"kubernetes.io/projected/ca458401-b6a3-41c6-a143-e77eae440d54-kube-api-access-f9t8q\") pod \"coredns-7c65d6cfc9-jlbq7\" (UID: \"ca458401-b6a3-41c6-a143-e77eae440d54\") " pod="kube-system/coredns-7c65d6cfc9-jlbq7" May 15 23:51:41.676384 containerd[1483]: time="2025-05-15T23:51:41.676273890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jlbq7,Uid:ca458401-b6a3-41c6-a143-e77eae440d54,Namespace:kube-system,Attempt:0,}" May 15 23:51:41.680372 containerd[1483]: time="2025-05-15T23:51:41.680329398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-k6prr,Uid:54aba514-b816-4567-9178-e18e1b995ba6,Namespace:kube-system,Attempt:0,}" May 15 23:51:41.807770 systemd[1]: Started sshd@7-10.0.0.54:22-10.0.0.1:40050.service - OpenSSH per-connection server daemon (10.0.0.1:40050). May 15 23:51:41.852844 sshd[3399]: Accepted publickey for core from 10.0.0.1 port 40050 ssh2: RSA SHA256:qI3Rqh4GgknoOSa4Ob8LNY71+Sp+5e1kGsNgL9KgYQ8 May 15 23:51:41.854265 sshd-session[3399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:51:41.859211 systemd-logind[1468]: New session 8 of user core. May 15 23:51:41.866618 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 15 23:51:42.002538 sshd[3436]: Connection closed by 10.0.0.1 port 40050 May 15 23:51:42.004620 sshd-session[3399]: pam_unix(sshd:session): session closed for user core May 15 23:51:42.011581 systemd[1]: sshd@7-10.0.0.54:22-10.0.0.1:40050.service: Deactivated successfully. May 15 23:51:42.013124 systemd[1]: session-8.scope: Deactivated successfully. May 15 23:51:42.015992 systemd-logind[1468]: Session 8 logged out. Waiting for processes to exit. May 15 23:51:42.017073 systemd-logind[1468]: Removed session 8. May 15 23:51:42.022243 kubelet[2570]: I0515 23:51:42.021544 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bst2r" podStartSLOduration=6.6534163920000005 podStartE2EDuration="20.021529682s" podCreationTimestamp="2025-05-15 23:51:22 +0000 UTC" firstStartedPulling="2025-05-15 23:51:24.017579812 +0000 UTC m=+9.165181267" lastFinishedPulling="2025-05-15 23:51:37.385693102 +0000 UTC m=+22.533294557" observedRunningTime="2025-05-15 23:51:42.021171389 +0000 UTC m=+27.168772844" watchObservedRunningTime="2025-05-15 23:51:42.021529682 +0000 UTC m=+27.169131097" May 15 23:51:43.477857 systemd-networkd[1410]: cilium_host: Link UP May 15 23:51:43.477969 systemd-networkd[1410]: cilium_net: Link UP May 15 23:51:43.477974 systemd-networkd[1410]: cilium_net: Gained carrier May 15 23:51:43.478845 systemd-networkd[1410]: cilium_host: Gained carrier May 15 23:51:43.566620 systemd-networkd[1410]: cilium_vxlan: Link UP May 15 23:51:43.566628 systemd-networkd[1410]: cilium_vxlan: Gained carrier May 15 23:51:43.870473 kernel: NET: Registered PF_ALG protocol family May 15 23:51:44.064591 systemd-networkd[1410]: cilium_host: Gained IPv6LL May 15 23:51:44.192600 systemd-networkd[1410]: cilium_net: Gained IPv6LL May 15 23:51:44.448422 systemd-networkd[1410]: lxc_health: Link UP May 15 23:51:44.450607 systemd-networkd[1410]: lxc_health: Gained carrier May 15 23:51:44.833488 kernel: eth0: renamed from tmp1ff5a May 15 23:51:44.839127 systemd-networkd[1410]: lxc01495f95004f: Link UP May 15 23:51:44.839540 systemd-networkd[1410]: lxc123191467ce3: Link UP May 15 23:51:44.840744 systemd-networkd[1410]: lxc01495f95004f: Gained carrier May 15 23:51:44.850470 kernel: eth0: renamed from tmpdcd98 May 15 23:51:44.853987 systemd-networkd[1410]: lxc123191467ce3: Gained carrier May 15 23:51:45.535578 systemd-networkd[1410]: cilium_vxlan: Gained IPv6LL May 15 23:51:45.599597 systemd-networkd[1410]: lxc_health: Gained IPv6LL May 15 23:51:46.623563 systemd-networkd[1410]: lxc123191467ce3: Gained IPv6LL May 15 23:51:46.879599 systemd-networkd[1410]: lxc01495f95004f: Gained IPv6LL May 15 23:51:47.029736 systemd[1]: Started sshd@8-10.0.0.54:22-10.0.0.1:53088.service - OpenSSH per-connection server daemon (10.0.0.1:53088). May 15 23:51:47.070355 sshd[3825]: Accepted publickey for core from 10.0.0.1 port 53088 ssh2: RSA SHA256:qI3Rqh4GgknoOSa4Ob8LNY71+Sp+5e1kGsNgL9KgYQ8 May 15 23:51:47.071731 sshd-session[3825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:51:47.078338 systemd-logind[1468]: New session 9 of user core. May 15 23:51:47.087806 systemd[1]: Started session-9.scope - Session 9 of User core. May 15 23:51:47.218730 sshd[3828]: Connection closed by 10.0.0.1 port 53088 May 15 23:51:47.218566 sshd-session[3825]: pam_unix(sshd:session): session closed for user core May 15 23:51:47.221325 systemd[1]: sshd@8-10.0.0.54:22-10.0.0.1:53088.service: Deactivated successfully. 
May 15 23:51:47.225259 systemd[1]: session-9.scope: Deactivated successfully. May 15 23:51:47.226790 systemd-logind[1468]: Session 9 logged out. Waiting for processes to exit. May 15 23:51:47.227997 systemd-logind[1468]: Removed session 9. May 15 23:51:48.375950 containerd[1483]: time="2025-05-15T23:51:48.375144538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:51:48.375950 containerd[1483]: time="2025-05-15T23:51:48.375194699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:51:48.375950 containerd[1483]: time="2025-05-15T23:51:48.375208819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:51:48.375950 containerd[1483]: time="2025-05-15T23:51:48.375281262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:51:48.377564 containerd[1483]: time="2025-05-15T23:51:48.377316119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:51:48.377564 containerd[1483]: time="2025-05-15T23:51:48.377382160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:51:48.377564 containerd[1483]: time="2025-05-15T23:51:48.377428842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:51:48.377564 containerd[1483]: time="2025-05-15T23:51:48.377526564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:51:48.401625 systemd[1]: Started cri-containerd-1ff5a981e7274ed87e3dde165abde2c1aa1ae1310e5b6e21077ebd59e6821625.scope - libcontainer container 1ff5a981e7274ed87e3dde165abde2c1aa1ae1310e5b6e21077ebd59e6821625. May 15 23:51:48.402744 systemd[1]: Started cri-containerd-dcd98cccb5a06f0454b7d025ed4a80673cdfe3bd61fe950d7d3afd12c0e3d20b.scope - libcontainer container dcd98cccb5a06f0454b7d025ed4a80673cdfe3bd61fe950d7d3afd12c0e3d20b. 
May 15 23:51:48.414781 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 23:51:48.415255 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 23:51:48.434523 containerd[1483]: time="2025-05-15T23:51:48.434474679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jlbq7,Uid:ca458401-b6a3-41c6-a143-e77eae440d54,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcd98cccb5a06f0454b7d025ed4a80673cdfe3bd61fe950d7d3afd12c0e3d20b\"" May 15 23:51:48.434653 containerd[1483]: time="2025-05-15T23:51:48.434556362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-k6prr,Uid:54aba514-b816-4567-9178-e18e1b995ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ff5a981e7274ed87e3dde165abde2c1aa1ae1310e5b6e21077ebd59e6821625\"" May 15 23:51:48.437321 containerd[1483]: time="2025-05-15T23:51:48.437280038Z" level=info msg="CreateContainer within sandbox \"1ff5a981e7274ed87e3dde165abde2c1aa1ae1310e5b6e21077ebd59e6821625\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 23:51:48.438836 containerd[1483]: time="2025-05-15T23:51:48.438743639Z" level=info msg="CreateContainer within sandbox \"dcd98cccb5a06f0454b7d025ed4a80673cdfe3bd61fe950d7d3afd12c0e3d20b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 23:51:48.452583 containerd[1483]: time="2025-05-15T23:51:48.452515825Z" level=info msg="CreateContainer within sandbox \"dcd98cccb5a06f0454b7d025ed4a80673cdfe3bd61fe950d7d3afd12c0e3d20b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"274a95daf09fc768abf6e046d4b5ba15769379bc4596391cbe01b93d8dd59a77\"" May 15 23:51:48.453454 containerd[1483]: time="2025-05-15T23:51:48.453156643Z" level=info msg="StartContainer for \"274a95daf09fc768abf6e046d4b5ba15769379bc4596391cbe01b93d8dd59a77\"" May 15 23:51:48.455636 containerd[1483]: time="2025-05-15T23:51:48.455378385Z" level=info msg="CreateContainer within sandbox \"1ff5a981e7274ed87e3dde165abde2c1aa1ae1310e5b6e21077ebd59e6821625\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3e3545ef7ee50b5fe339c4c4f1bbbe5b8ea3989348a6c43af195a59b5c885791\"" May 15 23:51:48.455814 containerd[1483]: time="2025-05-15T23:51:48.455791597Z" level=info msg="StartContainer for \"3e3545ef7ee50b5fe339c4c4f1bbbe5b8ea3989348a6c43af195a59b5c885791\"" May 15 23:51:48.488651 systemd[1]: Started cri-containerd-274a95daf09fc768abf6e046d4b5ba15769379bc4596391cbe01b93d8dd59a77.scope - libcontainer container 274a95daf09fc768abf6e046d4b5ba15769379bc4596391cbe01b93d8dd59a77. May 15 23:51:48.491596 systemd[1]: Started cri-containerd-3e3545ef7ee50b5fe339c4c4f1bbbe5b8ea3989348a6c43af195a59b5c885791.scope - libcontainer container 3e3545ef7ee50b5fe339c4c4f1bbbe5b8ea3989348a6c43af195a59b5c885791. 
May 15 23:51:48.525094 containerd[1483]: time="2025-05-15T23:51:48.525051216Z" level=info msg="StartContainer for \"3e3545ef7ee50b5fe339c4c4f1bbbe5b8ea3989348a6c43af195a59b5c885791\" returns successfully" May 15 23:51:48.525094 containerd[1483]: time="2025-05-15T23:51:48.525085657Z" level=info msg="StartContainer for \"274a95daf09fc768abf6e046d4b5ba15769379bc4596391cbe01b93d8dd59a77\" returns successfully" May 15 23:51:49.031307 kubelet[2570]: I0515 23:51:49.031242 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-k6prr" podStartSLOduration=27.031228487 podStartE2EDuration="27.031228487s" podCreationTimestamp="2025-05-15 23:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:51:49.031001881 +0000 UTC m=+34.178603336" watchObservedRunningTime="2025-05-15 23:51:49.031228487 +0000 UTC m=+34.178829942" May 15 23:51:49.054240 kubelet[2570]: I0515 23:51:49.053993 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-jlbq7" podStartSLOduration=27.053975464 podStartE2EDuration="27.053975464s" podCreationTimestamp="2025-05-15 23:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:51:49.040327214 +0000 UTC m=+34.187928709" watchObservedRunningTime="2025-05-15 23:51:49.053975464 +0000 UTC m=+34.201576919" May 15 23:51:49.382365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2258198439.mount: Deactivated successfully. May 15 23:51:52.230421 systemd[1]: Started sshd@9-10.0.0.54:22-10.0.0.1:53094.service - OpenSSH per-connection server daemon (10.0.0.1:53094). May 15 23:51:52.283740 sshd[4013]: Accepted publickey for core from 10.0.0.1 port 53094 ssh2: RSA SHA256:qI3Rqh4GgknoOSa4Ob8LNY71+Sp+5e1kGsNgL9KgYQ8 May 15 23:51:52.286857 sshd-session[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:51:52.294947 systemd-logind[1468]: New session 10 of user core. May 15 23:51:52.305600 systemd[1]: Started session-10.scope - Session 10 of User core. May 15 23:51:52.429055 sshd[4015]: Connection closed by 10.0.0.1 port 53094 May 15 23:51:52.429667 sshd-session[4013]: pam_unix(sshd:session): session closed for user core May 15 23:51:52.432901 systemd[1]: sshd@9-10.0.0.54:22-10.0.0.1:53094.service: Deactivated successfully. May 15 23:51:52.435292 systemd[1]: session-10.scope: Deactivated successfully. May 15 23:51:52.436228 systemd-logind[1468]: Session 10 logged out. Waiting for processes to exit. May 15 23:51:52.437160 systemd-logind[1468]: Removed session 10. May 15 23:51:57.443751 systemd[1]: Started sshd@10-10.0.0.54:22-10.0.0.1:44516.service - OpenSSH per-connection server daemon (10.0.0.1:44516). May 15 23:51:57.486450 sshd[4034]: Accepted publickey for core from 10.0.0.1 port 44516 ssh2: RSA SHA256:qI3Rqh4GgknoOSa4Ob8LNY71+Sp+5e1kGsNgL9KgYQ8 May 15 23:51:57.487754 sshd-session[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:51:57.492494 systemd-logind[1468]: New session 11 of user core. May 15 23:51:57.503692 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 15 23:51:57.617750 sshd[4036]: Connection closed by 10.0.0.1 port 44516 May 15 23:51:57.618104 sshd-session[4034]: pam_unix(sshd:session): session closed for user core May 15 23:51:57.628359 systemd[1]: sshd@10-10.0.0.54:22-10.0.0.1:44516.service: Deactivated successfully. May 15 23:51:57.631026 systemd[1]: session-11.scope: Deactivated successfully. May 15 23:51:57.633047 systemd-logind[1468]: Session 11 logged out. Waiting for processes to exit. May 15 23:51:57.649937 systemd[1]: Started sshd@11-10.0.0.54:22-10.0.0.1:44524.service - OpenSSH per-connection server daemon (10.0.0.1:44524). May 15 23:51:57.651284 systemd-logind[1468]: Removed session 11. May 15 23:51:57.700684 sshd[4049]: Accepted publickey for core from 10.0.0.1 port 44524 ssh2: RSA SHA256:qI3Rqh4GgknoOSa4Ob8LNY71+Sp+5e1kGsNgL9KgYQ8 May 15 23:51:57.705786 sshd-session[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:51:57.710522 systemd-logind[1468]: New session 12 of user core. May 15 23:51:57.728670 systemd[1]: Started session-12.scope - Session 12 of User core. May 15 23:51:57.889564 sshd[4052]: Connection closed by 10.0.0.1 port 44524 May 15 23:51:57.890100 sshd-session[4049]: pam_unix(sshd:session): session closed for user core May 15 23:51:57.903347 systemd[1]: sshd@11-10.0.0.54:22-10.0.0.1:44524.service: Deactivated successfully. May 15 23:51:57.908657 systemd[1]: session-12.scope: Deactivated successfully. May 15 23:51:57.911599 systemd-logind[1468]: Session 12 logged out. Waiting for processes to exit. May 15 23:51:57.916985 systemd[1]: Started sshd@12-10.0.0.54:22-10.0.0.1:44526.service - OpenSSH per-connection server daemon (10.0.0.1:44526). May 15 23:51:57.918237 systemd-logind[1468]: Removed session 12. May 15 23:51:57.963857 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 44526 ssh2: RSA SHA256:qI3Rqh4GgknoOSa4Ob8LNY71+Sp+5e1kGsNgL9KgYQ8 May 15 23:51:57.966041 sshd-session[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:51:57.974447 systemd-logind[1468]: New session 13 of user core. May 15 23:51:57.984720 systemd[1]: Started session-13.scope - Session 13 of User core. May 15 23:51:58.108957 sshd[4066]: Connection closed by 10.0.0.1 port 44526 May 15 23:51:58.109498 sshd-session[4063]: pam_unix(sshd:session): session closed for user core May 15 23:51:58.113174 systemd[1]: sshd@12-10.0.0.54:22-10.0.0.1:44526.service: Deactivated successfully. May 15 23:51:58.115258 systemd[1]: session-13.scope: Deactivated successfully. May 15 23:51:58.116198 systemd-logind[1468]: Session 13 logged out. Waiting for processes to exit. May 15 23:51:58.117029 systemd-logind[1468]: Removed session 13. May 15 23:52:03.128378 systemd[1]: Started sshd@13-10.0.0.54:22-10.0.0.1:49824.service - OpenSSH per-connection server daemon (10.0.0.1:49824). May 15 23:52:03.171421 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 49824 ssh2: RSA SHA256:qI3Rqh4GgknoOSa4Ob8LNY71+Sp+5e1kGsNgL9KgYQ8 May 15 23:52:03.173256 sshd-session[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:52:03.177901 systemd-logind[1468]: New session 14 of user core. May 15 23:52:03.183682 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 15 23:52:03.297477 sshd[4081]: Connection closed by 10.0.0.1 port 49824 May 15 23:52:03.297428 sshd-session[4079]: pam_unix(sshd:session): session closed for user core May 15 23:52:03.301315 systemd[1]: sshd@13-10.0.0.54:22-10.0.0.1:49824.service: Deactivated successfully. May 15 23:52:03.303362 systemd[1]: session-14.scope: Deactivated successfully. May 15 23:52:03.304104 systemd-logind[1468]: Session 14 logged out. Waiting for processes to exit. May 15 23:52:03.305165 systemd-logind[1468]: Removed session 14. May 15 23:52:08.309870 systemd[1]: Started sshd@14-10.0.0.54:22-10.0.0.1:49826.service - OpenSSH per-connection server daemon (10.0.0.1:49826). May 15 23:52:08.355456 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 49826 ssh2: RSA SHA256:qI3Rqh4GgknoOSa4Ob8LNY71+Sp+5e1kGsNgL9KgYQ8 May 15 23:52:08.357043 sshd-session[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:52:08.361476 systemd-logind[1468]: New session 15 of user core. May 15 23:52:08.375666 systemd[1]: Started session-15.scope - Session 15 of User core. May 15 23:52:08.487155 sshd[4096]: Connection closed by 10.0.0.1 port 49826 May 15 23:52:08.487429 sshd-session[4094]: pam_unix(sshd:session): session closed for user core May 15 23:52:08.497813 systemd[1]: sshd@14-10.0.0.54:22-10.0.0.1:49826.service: Deactivated successfully. May 15 23:52:08.500384 systemd[1]: session-15.scope: Deactivated successfully. May 15 23:52:08.502840 systemd-logind[1468]: Session 15 logged out. Waiting for processes to exit. May 15 23:52:08.508688 systemd[1]: Started sshd@15-10.0.0.54:22-10.0.0.1:49836.service - OpenSSH per-connection server daemon (10.0.0.1:49836). May 15 23:52:08.509969 systemd-logind[1468]: Removed session 15. May 15 23:52:08.545739 sshd[4108]: Accepted publickey for core from 10.0.0.1 port 49836 ssh2: RSA SHA256:qI3Rqh4GgknoOSa4Ob8LNY71+Sp+5e1kGsNgL9KgYQ8 May 15 23:52:08.546994 sshd-session[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:52:08.551494 systemd-logind[1468]: New session 16 of user core. May 15 23:52:08.555613 systemd[1]: Started session-16.scope - Session 16 of User core. May 15 23:52:08.761157 sshd[4111]: Connection closed by 10.0.0.1 port 49836 May 15 23:52:08.762003 sshd-session[4108]: pam_unix(sshd:session): session closed for user core May 15 23:52:08.769994 systemd[1]: sshd@15-10.0.0.54:22-10.0.0.1:49836.service: Deactivated successfully. May 15 23:52:08.771785 systemd[1]: session-16.scope: Deactivated successfully. May 15 23:52:08.772520 systemd-logind[1468]: Session 16 logged out. Waiting for processes to exit. May 15 23:52:08.781724 systemd[1]: Started sshd@16-10.0.0.54:22-10.0.0.1:49846.service - OpenSSH per-connection server daemon (10.0.0.1:49846). May 15 23:52:08.782757 systemd-logind[1468]: Removed session 16. May 15 23:52:08.827144 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 49846 ssh2: RSA SHA256:qI3Rqh4GgknoOSa4Ob8LNY71+Sp+5e1kGsNgL9KgYQ8 May 15 23:52:08.828423 sshd-session[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:52:08.832539 systemd-logind[1468]: New session 17 of user core. May 15 23:52:08.841582 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 15 23:52:10.169963 sshd[4124]: Connection closed by 10.0.0.1 port 49846 May 15 23:52:10.168624 sshd-session[4121]: pam_unix(sshd:session): session closed for user core May 15 23:52:10.189623 systemd[1]: Started sshd@17-10.0.0.54:22-10.0.0.1:49856.service - OpenSSH per-connection server daemon (10.0.0.1:49856). May 15 23:52:10.191084 systemd[1]: sshd@16-10.0.0.54:22-10.0.0.1:49846.service: Deactivated successfully. May 15 23:52:10.194858 systemd[1]: session-17.scope: Deactivated successfully. May 15 23:52:10.196282 systemd-logind[1468]: Session 17 logged out. Waiting for processes to exit. May 15 23:52:10.197356 systemd-logind[1468]: Removed session 17. May 15 23:52:10.227903 sshd[4140]: Accepted publickey for core from 10.0.0.1 port 49856 ssh2: RSA SHA256:qI3Rqh4GgknoOSa4Ob8LNY71+Sp+5e1kGsNgL9KgYQ8 May 15 23:52:10.229236 sshd-session[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:52:10.233694 systemd-logind[1468]: New session 18 of user core. May 15 23:52:10.239598 systemd[1]: Started session-18.scope - Session 18 of User core. May 15 23:52:10.456017 sshd[4145]: Connection closed by 10.0.0.1 port 49856 May 15 23:52:10.456519 sshd-session[4140]: pam_unix(sshd:session): session closed for user core May 15 23:52:10.469238 systemd[1]: sshd@17-10.0.0.54:22-10.0.0.1:49856.service: Deactivated successfully. May 15 23:52:10.470929 systemd[1]: session-18.scope: Deactivated successfully. May 15 23:52:10.474350 systemd-logind[1468]: Session 18 logged out. Waiting for processes to exit. May 15 23:52:10.484860 systemd[1]: Started sshd@18-10.0.0.54:22-10.0.0.1:49864.service - OpenSSH per-connection server daemon (10.0.0.1:49864). May 15 23:52:10.485906 systemd-logind[1468]: Removed session 18. May 15 23:52:10.522423 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 49864 ssh2: RSA SHA256:qI3Rqh4GgknoOSa4Ob8LNY71+Sp+5e1kGsNgL9KgYQ8 May 15 23:52:10.523680 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:52:10.527507 systemd-logind[1468]: New session 19 of user core. May 15 23:52:10.542641 systemd[1]: Started session-19.scope - Session 19 of User core. May 15 23:52:10.649456 sshd[4159]: Connection closed by 10.0.0.1 port 49864 May 15 23:52:10.649850 sshd-session[4156]: pam_unix(sshd:session): session closed for user core May 15 23:52:10.653004 systemd[1]: sshd@18-10.0.0.54:22-10.0.0.1:49864.service: Deactivated successfully. May 15 23:52:10.655731 systemd[1]: session-19.scope: Deactivated successfully. May 15 23:52:10.656930 systemd-logind[1468]: Session 19 logged out. Waiting for processes to exit. May 15 23:52:10.658268 systemd-logind[1468]: Removed session 19. May 15 23:52:15.662936 systemd[1]: Started sshd@19-10.0.0.54:22-10.0.0.1:59020.service - OpenSSH per-connection server daemon (10.0.0.1:59020). May 15 23:52:15.708118 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 59020 ssh2: RSA SHA256:qI3Rqh4GgknoOSa4Ob8LNY71+Sp+5e1kGsNgL9KgYQ8 May 15 23:52:15.708646 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:52:15.714397 systemd-logind[1468]: New session 20 of user core. May 15 23:52:15.721668 systemd[1]: Started session-20.scope - Session 20 of User core. 
May 15 23:52:15.836073 sshd[4179]: Connection closed by 10.0.0.1 port 59020 May 15 23:52:15.836418 sshd-session[4177]: pam_unix(sshd:session): session closed for user core May 15 23:52:15.839762 systemd[1]: sshd@19-10.0.0.54:22-10.0.0.1:59020.service: Deactivated successfully. May 15 23:52:15.843275 systemd[1]: session-20.scope: Deactivated successfully. May 15 23:52:15.844221 systemd-logind[1468]: Session 20 logged out. Waiting for processes to exit. May 15 23:52:15.845047 systemd-logind[1468]: Removed session 20. May 15 23:52:20.868059 systemd[1]: Started sshd@20-10.0.0.54:22-10.0.0.1:59026.service - OpenSSH per-connection server daemon (10.0.0.1:59026). May 15 23:52:20.909117 sshd[4193]: Accepted publickey for core from 10.0.0.1 port 59026 ssh2: RSA SHA256:qI3Rqh4GgknoOSa4Ob8LNY71+Sp+5e1kGsNgL9KgYQ8 May 15 23:52:20.910846 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:52:20.915100 systemd-logind[1468]: New session 21 of user core. May 15 23:52:20.926664 systemd[1]: Started session-21.scope - Session 21 of User core. May 15 23:52:21.038332 sshd[4195]: Connection closed by 10.0.0.1 port 59026 May 15 23:52:21.038380 sshd-session[4193]: pam_unix(sshd:session): session closed for user core May 15 23:52:21.042553 systemd[1]: sshd@20-10.0.0.54:22-10.0.0.1:59026.service: Deactivated successfully. May 15 23:52:21.045101 systemd[1]: session-21.scope: Deactivated successfully. May 15 23:52:21.047253 systemd-logind[1468]: Session 21 logged out. Waiting for processes to exit. May 15 23:52:21.048299 systemd-logind[1468]: Removed session 21. May 15 23:52:26.053107 systemd[1]: Started sshd@21-10.0.0.54:22-10.0.0.1:48286.service - OpenSSH per-connection server daemon (10.0.0.1:48286). May 15 23:52:26.106018 sshd[4211]: Accepted publickey for core from 10.0.0.1 port 48286 ssh2: RSA SHA256:qI3Rqh4GgknoOSa4Ob8LNY71+Sp+5e1kGsNgL9KgYQ8 May 15 23:52:26.107498 sshd-session[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:52:26.114721 systemd-logind[1468]: New session 22 of user core. May 15 23:52:26.123730 systemd[1]: Started session-22.scope - Session 22 of User core. May 15 23:52:26.260795 sshd[4213]: Connection closed by 10.0.0.1 port 48286 May 15 23:52:26.261642 sshd-session[4211]: pam_unix(sshd:session): session closed for user core May 15 23:52:26.276897 systemd[1]: sshd@21-10.0.0.54:22-10.0.0.1:48286.service: Deactivated successfully. May 15 23:52:26.279042 systemd[1]: session-22.scope: Deactivated successfully. May 15 23:52:26.282292 systemd-logind[1468]: Session 22 logged out. Waiting for processes to exit. May 15 23:52:26.298000 systemd[1]: Started sshd@22-10.0.0.54:22-10.0.0.1:48294.service - OpenSSH per-connection server daemon (10.0.0.1:48294). May 15 23:52:26.302913 systemd-logind[1468]: Removed session 22. May 15 23:52:26.350918 sshd[4225]: Accepted publickey for core from 10.0.0.1 port 48294 ssh2: RSA SHA256:qI3Rqh4GgknoOSa4Ob8LNY71+Sp+5e1kGsNgL9KgYQ8 May 15 23:52:26.350586 sshd-session[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:52:26.357223 systemd-logind[1468]: New session 23 of user core. May 15 23:52:26.370689 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 15 23:52:28.602143 containerd[1483]: time="2025-05-15T23:52:28.601997730Z" level=info msg="StopContainer for \"48f003653ca8c5af64b8920a7da38999a023d3a3b9f043a6030dff8e1d71b49e\" with timeout 30 (s)" May 15 23:52:28.602546 containerd[1483]: time="2025-05-15T23:52:28.602384571Z" level=info msg="Stop container \"48f003653ca8c5af64b8920a7da38999a023d3a3b9f043a6030dff8e1d71b49e\" with signal terminated" May 15 23:52:28.615669 systemd[1]: cri-containerd-48f003653ca8c5af64b8920a7da38999a023d3a3b9f043a6030dff8e1d71b49e.scope: Deactivated successfully. May 15 23:52:28.639975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48f003653ca8c5af64b8920a7da38999a023d3a3b9f043a6030dff8e1d71b49e-rootfs.mount: Deactivated successfully. May 15 23:52:28.644760 containerd[1483]: time="2025-05-15T23:52:28.644619105Z" level=info msg="StopContainer for \"488d7b0d90f766d2640cdea60bb9539e8812b6e014a8ae73294362ad4884e35f\" with timeout 2 (s)" May 15 23:52:28.645103 containerd[1483]: time="2025-05-15T23:52:28.645030706Z" level=info msg="Stop container \"488d7b0d90f766d2640cdea60bb9539e8812b6e014a8ae73294362ad4884e35f\" with signal terminated" May 15 23:52:28.647539 containerd[1483]: time="2025-05-15T23:52:28.647481269Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 23:52:28.650479 containerd[1483]: time="2025-05-15T23:52:28.650393713Z" level=info msg="shim disconnected" id=48f003653ca8c5af64b8920a7da38999a023d3a3b9f043a6030dff8e1d71b49e namespace=k8s.io May 15 23:52:28.650681 containerd[1483]: time="2025-05-15T23:52:28.650542753Z" level=warning msg="cleaning up after shim disconnected" id=48f003653ca8c5af64b8920a7da38999a023d3a3b9f043a6030dff8e1d71b49e namespace=k8s.io May 15 23:52:28.650681 containerd[1483]: time="2025-05-15T23:52:28.650554033Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:52:28.650906 systemd-networkd[1410]: lxc_health: Link DOWN May 15 23:52:28.650911 systemd-networkd[1410]: lxc_health: Lost carrier May 15 23:52:28.662497 systemd[1]: cri-containerd-488d7b0d90f766d2640cdea60bb9539e8812b6e014a8ae73294362ad4884e35f.scope: Deactivated successfully. May 15 23:52:28.663081 systemd[1]: cri-containerd-488d7b0d90f766d2640cdea60bb9539e8812b6e014a8ae73294362ad4884e35f.scope: Consumed 6.624s CPU time, 122.3M memory peak, 144K read from disk, 12.9M written to disk. May 15 23:52:28.685545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-488d7b0d90f766d2640cdea60bb9539e8812b6e014a8ae73294362ad4884e35f-rootfs.mount: Deactivated successfully. 
May 15 23:52:28.690389 containerd[1483]: time="2025-05-15T23:52:28.690318725Z" level=info msg="shim disconnected" id=488d7b0d90f766d2640cdea60bb9539e8812b6e014a8ae73294362ad4884e35f namespace=k8s.io May 15 23:52:28.690389 containerd[1483]: time="2025-05-15T23:52:28.690385485Z" level=warning msg="cleaning up after shim disconnected" id=488d7b0d90f766d2640cdea60bb9539e8812b6e014a8ae73294362ad4884e35f namespace=k8s.io May 15 23:52:28.690389 containerd[1483]: time="2025-05-15T23:52:28.690393925Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:52:28.703701 containerd[1483]: time="2025-05-15T23:52:28.703659422Z" level=info msg="StopContainer for \"48f003653ca8c5af64b8920a7da38999a023d3a3b9f043a6030dff8e1d71b49e\" returns successfully" May 15 23:52:28.704025 containerd[1483]: time="2025-05-15T23:52:28.703736342Z" level=info msg="StopContainer for \"488d7b0d90f766d2640cdea60bb9539e8812b6e014a8ae73294362ad4884e35f\" returns successfully" May 15 23:52:28.706788 containerd[1483]: time="2025-05-15T23:52:28.706699386Z" level=info msg="StopPodSandbox for \"4c9bdf900563de2d45a7c33e2d730fb3a8c2edb09883fbd53979c69c94dd8b9f\"" May 15 23:52:28.707057 containerd[1483]: time="2025-05-15T23:52:28.707004106Z" level=info msg="StopPodSandbox for \"93761ce16782cf53574aa79b00a8dc939d3caac1c467c64f2bc10dd968e4a551\"" May 15 23:52:28.708135 containerd[1483]: time="2025-05-15T23:52:28.707994707Z" level=info msg="Container to stop \"488d7b0d90f766d2640cdea60bb9539e8812b6e014a8ae73294362ad4884e35f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:52:28.708135 containerd[1483]: time="2025-05-15T23:52:28.708016027Z" level=info msg="Container to stop \"e7d71155dd01caf00638508676a664bee9037afdb6fe850e3ed1180803eb204a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:52:28.708135 containerd[1483]: time="2025-05-15T23:52:28.708025427Z" level=info msg="Container to stop \"f7d49ceec1a2915515fc9bf387ca5afa78c83349de13ed8c83971cba7c03f342\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:52:28.708135 containerd[1483]: time="2025-05-15T23:52:28.708034067Z" level=info msg="Container to stop \"9e6efebe4071d9c35a878b502068f15ecf949154037cb1b792231b368f9a2300\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:52:28.708135 containerd[1483]: time="2025-05-15T23:52:28.708042427Z" level=info msg="Container to stop \"4235476d46cdb292632d04fb2a884ab0a19b352bab4ae0190308e304a19f07b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:52:28.710432 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-93761ce16782cf53574aa79b00a8dc939d3caac1c467c64f2bc10dd968e4a551-shm.mount: Deactivated successfully. May 15 23:52:28.711945 containerd[1483]: time="2025-05-15T23:52:28.711895832Z" level=info msg="Container to stop \"48f003653ca8c5af64b8920a7da38999a023d3a3b9f043a6030dff8e1d71b49e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:52:28.714614 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4c9bdf900563de2d45a7c33e2d730fb3a8c2edb09883fbd53979c69c94dd8b9f-shm.mount: Deactivated successfully. May 15 23:52:28.715965 systemd[1]: cri-containerd-93761ce16782cf53574aa79b00a8dc939d3caac1c467c64f2bc10dd968e4a551.scope: Deactivated successfully. May 15 23:52:28.721008 systemd[1]: cri-containerd-4c9bdf900563de2d45a7c33e2d730fb3a8c2edb09883fbd53979c69c94dd8b9f.scope: Deactivated successfully. 
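[Illustrative aside, not part of the log] The teardown above is the reverse path: StopContainer with a grace period (logged as "with timeout 30 (s)"), then StopPodSandbox, after which the sandbox network is torn down and kubelet's reconciler unmounts the pod's volumes. A hedged CRI sketch of those two calls follows; the socket path is an assumption, and the container and sandbox ids are copied from the log for illustration.

```go
// Sketch only: the StopContainer / StopPodSandbox calls behind the
// "StopContainer ... with timeout 30 (s)" and "StopPodSandbox for ..." entries above.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	cri "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := cri.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Stop the cilium-operator container with a 30 second grace period.
	if _, err := rt.StopContainer(ctx, &cri.StopContainerRequest{
		ContainerId: "48f003653ca8c5af64b8920a7da38999a023d3a3b9f043a6030dff8e1d71b49e",
		Timeout:     30,
	}); err != nil {
		log.Fatal(err)
	}

	// Stopping the sandbox tears down its network namespace and pod IP.
	if _, err := rt.StopPodSandbox(ctx, &cri.StopPodSandboxRequest{
		PodSandboxId: "4c9bdf900563de2d45a7c33e2d730fb3a8c2edb09883fbd53979c69c94dd8b9f",
	}); err != nil {
		log.Fatal(err)
	}
	log.Println("container stopped and sandbox torn down")
}
```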
May 15 23:52:28.742654 containerd[1483]: time="2025-05-15T23:52:28.742586192Z" level=info msg="shim disconnected" id=93761ce16782cf53574aa79b00a8dc939d3caac1c467c64f2bc10dd968e4a551 namespace=k8s.io May 15 23:52:28.742654 containerd[1483]: time="2025-05-15T23:52:28.742645392Z" level=warning msg="cleaning up after shim disconnected" id=93761ce16782cf53574aa79b00a8dc939d3caac1c467c64f2bc10dd968e4a551 namespace=k8s.io May 15 23:52:28.742654 containerd[1483]: time="2025-05-15T23:52:28.742654112Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:52:28.748396 containerd[1483]: time="2025-05-15T23:52:28.748338480Z" level=info msg="shim disconnected" id=4c9bdf900563de2d45a7c33e2d730fb3a8c2edb09883fbd53979c69c94dd8b9f namespace=k8s.io May 15 23:52:28.748396 containerd[1483]: time="2025-05-15T23:52:28.748391600Z" level=warning msg="cleaning up after shim disconnected" id=4c9bdf900563de2d45a7c33e2d730fb3a8c2edb09883fbd53979c69c94dd8b9f namespace=k8s.io May 15 23:52:28.748396 containerd[1483]: time="2025-05-15T23:52:28.748401920Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:52:28.760822 containerd[1483]: time="2025-05-15T23:52:28.760760016Z" level=info msg="TearDown network for sandbox \"93761ce16782cf53574aa79b00a8dc939d3caac1c467c64f2bc10dd968e4a551\" successfully" May 15 23:52:28.760822 containerd[1483]: time="2025-05-15T23:52:28.760794176Z" level=info msg="StopPodSandbox for \"93761ce16782cf53574aa79b00a8dc939d3caac1c467c64f2bc10dd968e4a551\" returns successfully" May 15 23:52:28.768705 containerd[1483]: time="2025-05-15T23:52:28.768667426Z" level=info msg="TearDown network for sandbox \"4c9bdf900563de2d45a7c33e2d730fb3a8c2edb09883fbd53979c69c94dd8b9f\" successfully" May 15 23:52:28.768705 containerd[1483]: time="2025-05-15T23:52:28.768698346Z" level=info msg="StopPodSandbox for \"4c9bdf900563de2d45a7c33e2d730fb3a8c2edb09883fbd53979c69c94dd8b9f\" returns successfully" May 15 23:52:28.785654 kubelet[2570]: I0515 23:52:28.784921 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-hostproc\") pod \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " May 15 23:52:28.785654 kubelet[2570]: I0515 23:52:28.784958 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-host-proc-sys-net\") pod \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " May 15 23:52:28.785654 kubelet[2570]: I0515 23:52:28.784974 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-host-proc-sys-kernel\") pod \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " May 15 23:52:28.785654 kubelet[2570]: I0515 23:52:28.784990 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-xtables-lock\") pod \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " May 15 23:52:28.785654 kubelet[2570]: I0515 23:52:28.785010 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/aa78e680-5fc0-4af7-879e-c6a15b20cc91-hubble-tls\") pod \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " May 15 23:52:28.785654 kubelet[2570]: I0515 23:52:28.785028 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa78e680-5fc0-4af7-879e-c6a15b20cc91-cilium-config-path\") pod \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " May 15 23:52:28.786128 kubelet[2570]: I0515 23:52:28.785042 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-cni-path\") pod \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " May 15 23:52:28.786128 kubelet[2570]: I0515 23:52:28.785055 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-cilium-run\") pod \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " May 15 23:52:28.786128 kubelet[2570]: I0515 23:52:28.785079 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kctwl\" (UniqueName: \"kubernetes.io/projected/400c2edf-1b93-4924-897c-78ad1195c06b-kube-api-access-kctwl\") pod \"400c2edf-1b93-4924-897c-78ad1195c06b\" (UID: \"400c2edf-1b93-4924-897c-78ad1195c06b\") " May 15 23:52:28.786128 kubelet[2570]: I0515 23:52:28.785097 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv9wh\" (UniqueName: \"kubernetes.io/projected/aa78e680-5fc0-4af7-879e-c6a15b20cc91-kube-api-access-mv9wh\") pod \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " May 15 23:52:28.786128 kubelet[2570]: I0515 23:52:28.785111 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-lib-modules\") pod \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " May 15 23:52:28.786128 kubelet[2570]: I0515 23:52:28.785129 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aa78e680-5fc0-4af7-879e-c6a15b20cc91-clustermesh-secrets\") pod \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " May 15 23:52:28.786250 kubelet[2570]: I0515 23:52:28.785145 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/400c2edf-1b93-4924-897c-78ad1195c06b-cilium-config-path\") pod \"400c2edf-1b93-4924-897c-78ad1195c06b\" (UID: \"400c2edf-1b93-4924-897c-78ad1195c06b\") " May 15 23:52:28.786250 kubelet[2570]: I0515 23:52:28.785159 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-etc-cni-netd\") pod \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " May 15 23:52:28.786250 kubelet[2570]: I0515 23:52:28.785174 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-cilium-cgroup\") pod \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " May 15 23:52:28.786250 kubelet[2570]: I0515 23:52:28.785188 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-bpf-maps\") pod \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\" (UID: \"aa78e680-5fc0-4af7-879e-c6a15b20cc91\") " May 15 23:52:28.789489 kubelet[2570]: I0515 23:52:28.789219 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "aa78e680-5fc0-4af7-879e-c6a15b20cc91" (UID: "aa78e680-5fc0-4af7-879e-c6a15b20cc91"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:52:28.789489 kubelet[2570]: I0515 23:52:28.789251 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-hostproc" (OuterVolumeSpecName: "hostproc") pod "aa78e680-5fc0-4af7-879e-c6a15b20cc91" (UID: "aa78e680-5fc0-4af7-879e-c6a15b20cc91"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:52:28.789489 kubelet[2570]: I0515 23:52:28.789223 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "aa78e680-5fc0-4af7-879e-c6a15b20cc91" (UID: "aa78e680-5fc0-4af7-879e-c6a15b20cc91"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:52:28.790244 kubelet[2570]: I0515 23:52:28.789941 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "aa78e680-5fc0-4af7-879e-c6a15b20cc91" (UID: "aa78e680-5fc0-4af7-879e-c6a15b20cc91"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:52:28.790244 kubelet[2570]: I0515 23:52:28.789995 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "aa78e680-5fc0-4af7-879e-c6a15b20cc91" (UID: "aa78e680-5fc0-4af7-879e-c6a15b20cc91"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:52:28.790244 kubelet[2570]: I0515 23:52:28.790070 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-cni-path" (OuterVolumeSpecName: "cni-path") pod "aa78e680-5fc0-4af7-879e-c6a15b20cc91" (UID: "aa78e680-5fc0-4af7-879e-c6a15b20cc91"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:52:28.790244 kubelet[2570]: I0515 23:52:28.790094 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "aa78e680-5fc0-4af7-879e-c6a15b20cc91" (UID: "aa78e680-5fc0-4af7-879e-c6a15b20cc91"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:52:28.793408 kubelet[2570]: I0515 23:52:28.793372 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/400c2edf-1b93-4924-897c-78ad1195c06b-kube-api-access-kctwl" (OuterVolumeSpecName: "kube-api-access-kctwl") pod "400c2edf-1b93-4924-897c-78ad1195c06b" (UID: "400c2edf-1b93-4924-897c-78ad1195c06b"). InnerVolumeSpecName "kube-api-access-kctwl". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 23:52:28.793578 kubelet[2570]: I0515 23:52:28.793496 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "aa78e680-5fc0-4af7-879e-c6a15b20cc91" (UID: "aa78e680-5fc0-4af7-879e-c6a15b20cc91"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:52:28.793625 kubelet[2570]: I0515 23:52:28.793590 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "aa78e680-5fc0-4af7-879e-c6a15b20cc91" (UID: "aa78e680-5fc0-4af7-879e-c6a15b20cc91"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:52:28.793625 kubelet[2570]: I0515 23:52:28.793609 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "aa78e680-5fc0-4af7-879e-c6a15b20cc91" (UID: "aa78e680-5fc0-4af7-879e-c6a15b20cc91"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:52:28.793836 kubelet[2570]: I0515 23:52:28.793796 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa78e680-5fc0-4af7-879e-c6a15b20cc91-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "aa78e680-5fc0-4af7-879e-c6a15b20cc91" (UID: "aa78e680-5fc0-4af7-879e-c6a15b20cc91"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 23:52:28.794461 kubelet[2570]: I0515 23:52:28.794396 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa78e680-5fc0-4af7-879e-c6a15b20cc91-kube-api-access-mv9wh" (OuterVolumeSpecName: "kube-api-access-mv9wh") pod "aa78e680-5fc0-4af7-879e-c6a15b20cc91" (UID: "aa78e680-5fc0-4af7-879e-c6a15b20cc91"). InnerVolumeSpecName "kube-api-access-mv9wh". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 23:52:28.796514 kubelet[2570]: I0515 23:52:28.796478 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa78e680-5fc0-4af7-879e-c6a15b20cc91-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "aa78e680-5fc0-4af7-879e-c6a15b20cc91" (UID: "aa78e680-5fc0-4af7-879e-c6a15b20cc91"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 23:52:28.797468 kubelet[2570]: I0515 23:52:28.796589 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/400c2edf-1b93-4924-897c-78ad1195c06b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "400c2edf-1b93-4924-897c-78ad1195c06b" (UID: "400c2edf-1b93-4924-897c-78ad1195c06b"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 23:52:28.799429 kubelet[2570]: I0515 23:52:28.799374 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa78e680-5fc0-4af7-879e-c6a15b20cc91-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "aa78e680-5fc0-4af7-879e-c6a15b20cc91" (UID: "aa78e680-5fc0-4af7-879e-c6a15b20cc91"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 23:52:28.885670 kubelet[2570]: I0515 23:52:28.885544 2570 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 15 23:52:28.885670 kubelet[2570]: I0515 23:52:28.885581 2570 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 15 23:52:28.885670 kubelet[2570]: I0515 23:52:28.885591 2570 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 15 23:52:28.885670 kubelet[2570]: I0515 23:52:28.885602 2570 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/400c2edf-1b93-4924-897c-78ad1195c06b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 23:52:28.885670 kubelet[2570]: I0515 23:52:28.885614 2570 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-hostproc\") on node \"localhost\" DevicePath \"\"" May 15 23:52:28.885670 kubelet[2570]: I0515 23:52:28.885622 2570 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 15 23:52:28.885670 kubelet[2570]: I0515 23:52:28.885629 2570 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 15 23:52:28.885670 kubelet[2570]: I0515 23:52:28.885637 2570 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 15 23:52:28.885929 kubelet[2570]: I0515 23:52:28.885645 2570 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aa78e680-5fc0-4af7-879e-c6a15b20cc91-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 15 23:52:28.885929 kubelet[2570]: I0515 23:52:28.885652 2570 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa78e680-5fc0-4af7-879e-c6a15b20cc91-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 23:52:28.885929 kubelet[2570]: I0515 23:52:28.885661 2570 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-cni-path\") on node \"localhost\" DevicePath \"\"" May 15 23:52:28.885929 kubelet[2570]: I0515 
23:52:28.885670 2570 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-cilium-run\") on node \"localhost\" DevicePath \"\"" May 15 23:52:28.885929 kubelet[2570]: I0515 23:52:28.885678 2570 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kctwl\" (UniqueName: \"kubernetes.io/projected/400c2edf-1b93-4924-897c-78ad1195c06b-kube-api-access-kctwl\") on node \"localhost\" DevicePath \"\"" May 15 23:52:28.885929 kubelet[2570]: I0515 23:52:28.885687 2570 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv9wh\" (UniqueName: \"kubernetes.io/projected/aa78e680-5fc0-4af7-879e-c6a15b20cc91-kube-api-access-mv9wh\") on node \"localhost\" DevicePath \"\"" May 15 23:52:28.885929 kubelet[2570]: I0515 23:52:28.885694 2570 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa78e680-5fc0-4af7-879e-c6a15b20cc91-lib-modules\") on node \"localhost\" DevicePath \"\"" May 15 23:52:28.885929 kubelet[2570]: I0515 23:52:28.885702 2570 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aa78e680-5fc0-4af7-879e-c6a15b20cc91-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 15 23:52:28.934508 systemd[1]: Removed slice kubepods-besteffort-pod400c2edf_1b93_4924_897c_78ad1195c06b.slice - libcontainer container kubepods-besteffort-pod400c2edf_1b93_4924_897c_78ad1195c06b.slice. May 15 23:52:28.935662 systemd[1]: Removed slice kubepods-burstable-podaa78e680_5fc0_4af7_879e_c6a15b20cc91.slice - libcontainer container kubepods-burstable-podaa78e680_5fc0_4af7_879e_c6a15b20cc91.slice. May 15 23:52:28.935872 systemd[1]: kubepods-burstable-podaa78e680_5fc0_4af7_879e_c6a15b20cc91.slice: Consumed 6.792s CPU time, 122.6M memory peak, 152K read from disk, 16.1M written to disk. 
May 15 23:52:29.106937 kubelet[2570]: I0515 23:52:29.106909 2570 scope.go:117] "RemoveContainer" containerID="48f003653ca8c5af64b8920a7da38999a023d3a3b9f043a6030dff8e1d71b49e" May 15 23:52:29.109231 containerd[1483]: time="2025-05-15T23:52:29.109203622Z" level=info msg="RemoveContainer for \"48f003653ca8c5af64b8920a7da38999a023d3a3b9f043a6030dff8e1d71b49e\"" May 15 23:52:29.130145 containerd[1483]: time="2025-05-15T23:52:29.130107416Z" level=info msg="RemoveContainer for \"48f003653ca8c5af64b8920a7da38999a023d3a3b9f043a6030dff8e1d71b49e\" returns successfully" May 15 23:52:29.130575 kubelet[2570]: I0515 23:52:29.130552 2570 scope.go:117] "RemoveContainer" containerID="48f003653ca8c5af64b8920a7da38999a023d3a3b9f043a6030dff8e1d71b49e" May 15 23:52:29.131785 containerd[1483]: time="2025-05-15T23:52:29.131655139Z" level=error msg="ContainerStatus for \"48f003653ca8c5af64b8920a7da38999a023d3a3b9f043a6030dff8e1d71b49e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"48f003653ca8c5af64b8920a7da38999a023d3a3b9f043a6030dff8e1d71b49e\": not found" May 15 23:52:29.143223 kubelet[2570]: E0515 23:52:29.143003 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"48f003653ca8c5af64b8920a7da38999a023d3a3b9f043a6030dff8e1d71b49e\": not found" containerID="48f003653ca8c5af64b8920a7da38999a023d3a3b9f043a6030dff8e1d71b49e" May 15 23:52:29.143223 kubelet[2570]: I0515 23:52:29.143048 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"48f003653ca8c5af64b8920a7da38999a023d3a3b9f043a6030dff8e1d71b49e"} err="failed to get container status \"48f003653ca8c5af64b8920a7da38999a023d3a3b9f043a6030dff8e1d71b49e\": rpc error: code = NotFound desc = an error occurred when try to find container \"48f003653ca8c5af64b8920a7da38999a023d3a3b9f043a6030dff8e1d71b49e\": not found" May 15 23:52:29.143223 kubelet[2570]: I0515 23:52:29.143142 2570 scope.go:117] "RemoveContainer" containerID="488d7b0d90f766d2640cdea60bb9539e8812b6e014a8ae73294362ad4884e35f" May 15 23:52:29.145088 containerd[1483]: time="2025-05-15T23:52:29.145025041Z" level=info msg="RemoveContainer for \"488d7b0d90f766d2640cdea60bb9539e8812b6e014a8ae73294362ad4884e35f\"" May 15 23:52:29.147417 containerd[1483]: time="2025-05-15T23:52:29.147382164Z" level=info msg="RemoveContainer for \"488d7b0d90f766d2640cdea60bb9539e8812b6e014a8ae73294362ad4884e35f\" returns successfully" May 15 23:52:29.147639 kubelet[2570]: I0515 23:52:29.147588 2570 scope.go:117] "RemoveContainer" containerID="f7d49ceec1a2915515fc9bf387ca5afa78c83349de13ed8c83971cba7c03f342" May 15 23:52:29.148685 containerd[1483]: time="2025-05-15T23:52:29.148660807Z" level=info msg="RemoveContainer for \"f7d49ceec1a2915515fc9bf387ca5afa78c83349de13ed8c83971cba7c03f342\"" May 15 23:52:29.151798 containerd[1483]: time="2025-05-15T23:52:29.151762652Z" level=info msg="RemoveContainer for \"f7d49ceec1a2915515fc9bf387ca5afa78c83349de13ed8c83971cba7c03f342\" returns successfully" May 15 23:52:29.152288 kubelet[2570]: I0515 23:52:29.152240 2570 scope.go:117] "RemoveContainer" containerID="e7d71155dd01caf00638508676a664bee9037afdb6fe850e3ed1180803eb204a" May 15 23:52:29.153506 containerd[1483]: time="2025-05-15T23:52:29.153481014Z" level=info msg="RemoveContainer for \"e7d71155dd01caf00638508676a664bee9037afdb6fe850e3ed1180803eb204a\"" May 15 23:52:29.155530 containerd[1483]: time="2025-05-15T23:52:29.155497818Z" level=info 
msg="RemoveContainer for \"e7d71155dd01caf00638508676a664bee9037afdb6fe850e3ed1180803eb204a\" returns successfully" May 15 23:52:29.155757 kubelet[2570]: I0515 23:52:29.155676 2570 scope.go:117] "RemoveContainer" containerID="4235476d46cdb292632d04fb2a884ab0a19b352bab4ae0190308e304a19f07b9" May 15 23:52:29.156680 containerd[1483]: time="2025-05-15T23:52:29.156644020Z" level=info msg="RemoveContainer for \"4235476d46cdb292632d04fb2a884ab0a19b352bab4ae0190308e304a19f07b9\"" May 15 23:52:29.158718 containerd[1483]: time="2025-05-15T23:52:29.158685663Z" level=info msg="RemoveContainer for \"4235476d46cdb292632d04fb2a884ab0a19b352bab4ae0190308e304a19f07b9\" returns successfully" May 15 23:52:29.158882 kubelet[2570]: I0515 23:52:29.158833 2570 scope.go:117] "RemoveContainer" containerID="9e6efebe4071d9c35a878b502068f15ecf949154037cb1b792231b368f9a2300" May 15 23:52:29.159894 containerd[1483]: time="2025-05-15T23:52:29.159852225Z" level=info msg="RemoveContainer for \"9e6efebe4071d9c35a878b502068f15ecf949154037cb1b792231b368f9a2300\"" May 15 23:52:29.161919 containerd[1483]: time="2025-05-15T23:52:29.161884788Z" level=info msg="RemoveContainer for \"9e6efebe4071d9c35a878b502068f15ecf949154037cb1b792231b368f9a2300\" returns successfully" May 15 23:52:29.162150 kubelet[2570]: I0515 23:52:29.162071 2570 scope.go:117] "RemoveContainer" containerID="488d7b0d90f766d2640cdea60bb9539e8812b6e014a8ae73294362ad4884e35f" May 15 23:52:29.162280 containerd[1483]: time="2025-05-15T23:52:29.162237309Z" level=error msg="ContainerStatus for \"488d7b0d90f766d2640cdea60bb9539e8812b6e014a8ae73294362ad4884e35f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"488d7b0d90f766d2640cdea60bb9539e8812b6e014a8ae73294362ad4884e35f\": not found" May 15 23:52:29.162465 kubelet[2570]: E0515 23:52:29.162421 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"488d7b0d90f766d2640cdea60bb9539e8812b6e014a8ae73294362ad4884e35f\": not found" containerID="488d7b0d90f766d2640cdea60bb9539e8812b6e014a8ae73294362ad4884e35f" May 15 23:52:29.162653 kubelet[2570]: I0515 23:52:29.162552 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"488d7b0d90f766d2640cdea60bb9539e8812b6e014a8ae73294362ad4884e35f"} err="failed to get container status \"488d7b0d90f766d2640cdea60bb9539e8812b6e014a8ae73294362ad4884e35f\": rpc error: code = NotFound desc = an error occurred when try to find container \"488d7b0d90f766d2640cdea60bb9539e8812b6e014a8ae73294362ad4884e35f\": not found" May 15 23:52:29.162653 kubelet[2570]: I0515 23:52:29.162578 2570 scope.go:117] "RemoveContainer" containerID="f7d49ceec1a2915515fc9bf387ca5afa78c83349de13ed8c83971cba7c03f342" May 15 23:52:29.162794 containerd[1483]: time="2025-05-15T23:52:29.162738949Z" level=error msg="ContainerStatus for \"f7d49ceec1a2915515fc9bf387ca5afa78c83349de13ed8c83971cba7c03f342\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f7d49ceec1a2915515fc9bf387ca5afa78c83349de13ed8c83971cba7c03f342\": not found" May 15 23:52:29.163041 kubelet[2570]: E0515 23:52:29.162922 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f7d49ceec1a2915515fc9bf387ca5afa78c83349de13ed8c83971cba7c03f342\": not found" containerID="f7d49ceec1a2915515fc9bf387ca5afa78c83349de13ed8c83971cba7c03f342" 
May 15 23:52:29.163041 kubelet[2570]: I0515 23:52:29.162944 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f7d49ceec1a2915515fc9bf387ca5afa78c83349de13ed8c83971cba7c03f342"} err="failed to get container status \"f7d49ceec1a2915515fc9bf387ca5afa78c83349de13ed8c83971cba7c03f342\": rpc error: code = NotFound desc = an error occurred when try to find container \"f7d49ceec1a2915515fc9bf387ca5afa78c83349de13ed8c83971cba7c03f342\": not found" May 15 23:52:29.163041 kubelet[2570]: I0515 23:52:29.162988 2570 scope.go:117] "RemoveContainer" containerID="e7d71155dd01caf00638508676a664bee9037afdb6fe850e3ed1180803eb204a" May 15 23:52:29.163360 containerd[1483]: time="2025-05-15T23:52:29.163331310Z" level=error msg="ContainerStatus for \"e7d71155dd01caf00638508676a664bee9037afdb6fe850e3ed1180803eb204a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e7d71155dd01caf00638508676a664bee9037afdb6fe850e3ed1180803eb204a\": not found" May 15 23:52:29.163554 kubelet[2570]: E0515 23:52:29.163486 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e7d71155dd01caf00638508676a664bee9037afdb6fe850e3ed1180803eb204a\": not found" containerID="e7d71155dd01caf00638508676a664bee9037afdb6fe850e3ed1180803eb204a" May 15 23:52:29.163688 kubelet[2570]: I0515 23:52:29.163556 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e7d71155dd01caf00638508676a664bee9037afdb6fe850e3ed1180803eb204a"} err="failed to get container status \"e7d71155dd01caf00638508676a664bee9037afdb6fe850e3ed1180803eb204a\": rpc error: code = NotFound desc = an error occurred when try to find container \"e7d71155dd01caf00638508676a664bee9037afdb6fe850e3ed1180803eb204a\": not found" May 15 23:52:29.163688 kubelet[2570]: I0515 23:52:29.163574 2570 scope.go:117] "RemoveContainer" containerID="4235476d46cdb292632d04fb2a884ab0a19b352bab4ae0190308e304a19f07b9" May 15 23:52:29.163743 containerd[1483]: time="2025-05-15T23:52:29.163714671Z" level=error msg="ContainerStatus for \"4235476d46cdb292632d04fb2a884ab0a19b352bab4ae0190308e304a19f07b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4235476d46cdb292632d04fb2a884ab0a19b352bab4ae0190308e304a19f07b9\": not found" May 15 23:52:29.163995 kubelet[2570]: E0515 23:52:29.163816 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4235476d46cdb292632d04fb2a884ab0a19b352bab4ae0190308e304a19f07b9\": not found" containerID="4235476d46cdb292632d04fb2a884ab0a19b352bab4ae0190308e304a19f07b9" May 15 23:52:29.163995 kubelet[2570]: I0515 23:52:29.163850 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4235476d46cdb292632d04fb2a884ab0a19b352bab4ae0190308e304a19f07b9"} err="failed to get container status \"4235476d46cdb292632d04fb2a884ab0a19b352bab4ae0190308e304a19f07b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"4235476d46cdb292632d04fb2a884ab0a19b352bab4ae0190308e304a19f07b9\": not found" May 15 23:52:29.163995 kubelet[2570]: I0515 23:52:29.163882 2570 scope.go:117] "RemoveContainer" containerID="9e6efebe4071d9c35a878b502068f15ecf949154037cb1b792231b368f9a2300" May 15 23:52:29.164129 containerd[1483]: 
time="2025-05-15T23:52:29.164099592Z" level=error msg="ContainerStatus for \"9e6efebe4071d9c35a878b502068f15ecf949154037cb1b792231b368f9a2300\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9e6efebe4071d9c35a878b502068f15ecf949154037cb1b792231b368f9a2300\": not found" May 15 23:52:29.164217 kubelet[2570]: E0515 23:52:29.164200 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9e6efebe4071d9c35a878b502068f15ecf949154037cb1b792231b368f9a2300\": not found" containerID="9e6efebe4071d9c35a878b502068f15ecf949154037cb1b792231b368f9a2300" May 15 23:52:29.164257 kubelet[2570]: I0515 23:52:29.164222 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9e6efebe4071d9c35a878b502068f15ecf949154037cb1b792231b368f9a2300"} err="failed to get container status \"9e6efebe4071d9c35a878b502068f15ecf949154037cb1b792231b368f9a2300\": rpc error: code = NotFound desc = an error occurred when try to find container \"9e6efebe4071d9c35a878b502068f15ecf949154037cb1b792231b368f9a2300\": not found" May 15 23:52:29.624806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93761ce16782cf53574aa79b00a8dc939d3caac1c467c64f2bc10dd968e4a551-rootfs.mount: Deactivated successfully. May 15 23:52:29.624917 systemd[1]: var-lib-kubelet-pods-aa78e680\x2d5fc0\x2d4af7\x2d879e\x2dc6a15b20cc91-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmv9wh.mount: Deactivated successfully. May 15 23:52:29.624982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c9bdf900563de2d45a7c33e2d730fb3a8c2edb09883fbd53979c69c94dd8b9f-rootfs.mount: Deactivated successfully. May 15 23:52:29.625032 systemd[1]: var-lib-kubelet-pods-400c2edf\x2d1b93\x2d4924\x2d897c\x2d78ad1195c06b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkctwl.mount: Deactivated successfully. May 15 23:52:29.625092 systemd[1]: var-lib-kubelet-pods-aa78e680\x2d5fc0\x2d4af7\x2d879e\x2dc6a15b20cc91-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 15 23:52:29.625153 systemd[1]: var-lib-kubelet-pods-aa78e680\x2d5fc0\x2d4af7\x2d879e\x2dc6a15b20cc91-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 15 23:52:29.979688 kubelet[2570]: E0515 23:52:29.979578 2570 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 23:52:30.566649 sshd[4228]: Connection closed by 10.0.0.1 port 48294 May 15 23:52:30.566571 sshd-session[4225]: pam_unix(sshd:session): session closed for user core May 15 23:52:30.573902 systemd[1]: sshd@22-10.0.0.54:22-10.0.0.1:48294.service: Deactivated successfully. May 15 23:52:30.575412 systemd[1]: session-23.scope: Deactivated successfully. May 15 23:52:30.575645 systemd[1]: session-23.scope: Consumed 1.539s CPU time, 28.9M memory peak. May 15 23:52:30.576063 systemd-logind[1468]: Session 23 logged out. Waiting for processes to exit. May 15 23:52:30.591739 systemd[1]: Started sshd@23-10.0.0.54:22-10.0.0.1:48296.service - OpenSSH per-connection server daemon (10.0.0.1:48296). May 15 23:52:30.592759 systemd-logind[1468]: Removed session 23. 
May 15 23:52:30.628228 sshd[4384]: Accepted publickey for core from 10.0.0.1 port 48296 ssh2: RSA SHA256:qI3Rqh4GgknoOSa4Ob8LNY71+Sp+5e1kGsNgL9KgYQ8 May 15 23:52:30.629476 sshd-session[4384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:52:30.633607 systemd-logind[1468]: New session 24 of user core. May 15 23:52:30.643689 systemd[1]: Started session-24.scope - Session 24 of User core. May 15 23:52:30.931783 kubelet[2570]: I0515 23:52:30.930154 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="400c2edf-1b93-4924-897c-78ad1195c06b" path="/var/lib/kubelet/pods/400c2edf-1b93-4924-897c-78ad1195c06b/volumes" May 15 23:52:30.931783 kubelet[2570]: I0515 23:52:30.930568 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa78e680-5fc0-4af7-879e-c6a15b20cc91" path="/var/lib/kubelet/pods/aa78e680-5fc0-4af7-879e-c6a15b20cc91/volumes" May 15 23:52:31.882320 sshd[4387]: Connection closed by 10.0.0.1 port 48296 May 15 23:52:31.884015 sshd-session[4384]: pam_unix(sshd:session): session closed for user core May 15 23:52:31.896396 systemd[1]: sshd@23-10.0.0.54:22-10.0.0.1:48296.service: Deactivated successfully. May 15 23:52:31.898261 kubelet[2570]: E0515 23:52:31.898229 2570 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="400c2edf-1b93-4924-897c-78ad1195c06b" containerName="cilium-operator" May 15 23:52:31.898261 kubelet[2570]: E0515 23:52:31.898257 2570 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa78e680-5fc0-4af7-879e-c6a15b20cc91" containerName="mount-bpf-fs" May 15 23:52:31.898261 kubelet[2570]: E0515 23:52:31.898265 2570 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa78e680-5fc0-4af7-879e-c6a15b20cc91" containerName="clean-cilium-state" May 15 23:52:31.898800 kubelet[2570]: E0515 23:52:31.898271 2570 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa78e680-5fc0-4af7-879e-c6a15b20cc91" containerName="mount-cgroup" May 15 23:52:31.898800 kubelet[2570]: E0515 23:52:31.898288 2570 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa78e680-5fc0-4af7-879e-c6a15b20cc91" containerName="apply-sysctl-overwrites" May 15 23:52:31.898800 kubelet[2570]: E0515 23:52:31.898294 2570 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa78e680-5fc0-4af7-879e-c6a15b20cc91" containerName="cilium-agent" May 15 23:52:31.898800 kubelet[2570]: I0515 23:52:31.898317 2570 memory_manager.go:354] "RemoveStaleState removing state" podUID="400c2edf-1b93-4924-897c-78ad1195c06b" containerName="cilium-operator" May 15 23:52:31.898800 kubelet[2570]: I0515 23:52:31.898323 2570 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa78e680-5fc0-4af7-879e-c6a15b20cc91" containerName="cilium-agent" May 15 23:52:31.900381 systemd[1]: session-24.scope: Deactivated successfully. May 15 23:52:31.900615 systemd[1]: session-24.scope: Consumed 1.145s CPU time, 24.4M memory peak. 
May 15 23:52:31.903152 kubelet[2570]: I0515 23:52:31.902824 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbqnq\" (UniqueName: \"kubernetes.io/projected/563b0b64-e37e-418e-a9f5-005ffaad0dc2-kube-api-access-kbqnq\") pod \"cilium-v5pbh\" (UID: \"563b0b64-e37e-418e-a9f5-005ffaad0dc2\") " pod="kube-system/cilium-v5pbh" May 15 23:52:31.903152 kubelet[2570]: I0515 23:52:31.902874 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/563b0b64-e37e-418e-a9f5-005ffaad0dc2-lib-modules\") pod \"cilium-v5pbh\" (UID: \"563b0b64-e37e-418e-a9f5-005ffaad0dc2\") " pod="kube-system/cilium-v5pbh" May 15 23:52:31.903152 kubelet[2570]: I0515 23:52:31.902893 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/563b0b64-e37e-418e-a9f5-005ffaad0dc2-clustermesh-secrets\") pod \"cilium-v5pbh\" (UID: \"563b0b64-e37e-418e-a9f5-005ffaad0dc2\") " pod="kube-system/cilium-v5pbh" May 15 23:52:31.903152 kubelet[2570]: I0515 23:52:31.902909 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/563b0b64-e37e-418e-a9f5-005ffaad0dc2-cilium-config-path\") pod \"cilium-v5pbh\" (UID: \"563b0b64-e37e-418e-a9f5-005ffaad0dc2\") " pod="kube-system/cilium-v5pbh" May 15 23:52:31.903152 kubelet[2570]: I0515 23:52:31.902926 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/563b0b64-e37e-418e-a9f5-005ffaad0dc2-host-proc-sys-kernel\") pod \"cilium-v5pbh\" (UID: \"563b0b64-e37e-418e-a9f5-005ffaad0dc2\") " pod="kube-system/cilium-v5pbh" May 15 23:52:31.903359 kubelet[2570]: I0515 23:52:31.902944 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/563b0b64-e37e-418e-a9f5-005ffaad0dc2-hostproc\") pod \"cilium-v5pbh\" (UID: \"563b0b64-e37e-418e-a9f5-005ffaad0dc2\") " pod="kube-system/cilium-v5pbh" May 15 23:52:31.903359 kubelet[2570]: I0515 23:52:31.902959 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/563b0b64-e37e-418e-a9f5-005ffaad0dc2-host-proc-sys-net\") pod \"cilium-v5pbh\" (UID: \"563b0b64-e37e-418e-a9f5-005ffaad0dc2\") " pod="kube-system/cilium-v5pbh" May 15 23:52:31.903359 kubelet[2570]: I0515 23:52:31.902974 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/563b0b64-e37e-418e-a9f5-005ffaad0dc2-cilium-cgroup\") pod \"cilium-v5pbh\" (UID: \"563b0b64-e37e-418e-a9f5-005ffaad0dc2\") " pod="kube-system/cilium-v5pbh" May 15 23:52:31.903359 kubelet[2570]: I0515 23:52:31.902988 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/563b0b64-e37e-418e-a9f5-005ffaad0dc2-xtables-lock\") pod \"cilium-v5pbh\" (UID: \"563b0b64-e37e-418e-a9f5-005ffaad0dc2\") " pod="kube-system/cilium-v5pbh" May 15 23:52:31.903359 kubelet[2570]: I0515 23:52:31.903003 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/563b0b64-e37e-418e-a9f5-005ffaad0dc2-cilium-run\") pod \"cilium-v5pbh\" (UID: \"563b0b64-e37e-418e-a9f5-005ffaad0dc2\") " pod="kube-system/cilium-v5pbh" May 15 23:52:31.903359 kubelet[2570]: I0515 23:52:31.903019 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/563b0b64-e37e-418e-a9f5-005ffaad0dc2-bpf-maps\") pod \"cilium-v5pbh\" (UID: \"563b0b64-e37e-418e-a9f5-005ffaad0dc2\") " pod="kube-system/cilium-v5pbh" May 15 23:52:31.904006 kubelet[2570]: I0515 23:52:31.903032 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/563b0b64-e37e-418e-a9f5-005ffaad0dc2-etc-cni-netd\") pod \"cilium-v5pbh\" (UID: \"563b0b64-e37e-418e-a9f5-005ffaad0dc2\") " pod="kube-system/cilium-v5pbh" May 15 23:52:31.904006 kubelet[2570]: I0515 23:52:31.903045 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/563b0b64-e37e-418e-a9f5-005ffaad0dc2-cilium-ipsec-secrets\") pod \"cilium-v5pbh\" (UID: \"563b0b64-e37e-418e-a9f5-005ffaad0dc2\") " pod="kube-system/cilium-v5pbh" May 15 23:52:31.904006 kubelet[2570]: I0515 23:52:31.903059 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/563b0b64-e37e-418e-a9f5-005ffaad0dc2-cni-path\") pod \"cilium-v5pbh\" (UID: \"563b0b64-e37e-418e-a9f5-005ffaad0dc2\") " pod="kube-system/cilium-v5pbh" May 15 23:52:31.904006 kubelet[2570]: I0515 23:52:31.903073 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/563b0b64-e37e-418e-a9f5-005ffaad0dc2-hubble-tls\") pod \"cilium-v5pbh\" (UID: \"563b0b64-e37e-418e-a9f5-005ffaad0dc2\") " pod="kube-system/cilium-v5pbh" May 15 23:52:31.903975 systemd-logind[1468]: Session 24 logged out. Waiting for processes to exit. May 15 23:52:31.914749 systemd[1]: Started sshd@24-10.0.0.54:22-10.0.0.1:48304.service - OpenSSH per-connection server daemon (10.0.0.1:48304). May 15 23:52:31.919950 systemd-logind[1468]: Removed session 24. May 15 23:52:31.926707 systemd[1]: Created slice kubepods-burstable-pod563b0b64_e37e_418e_a9f5_005ffaad0dc2.slice - libcontainer container kubepods-burstable-pod563b0b64_e37e_418e_a9f5_005ffaad0dc2.slice. May 15 23:52:31.955713 sshd[4398]: Accepted publickey for core from 10.0.0.1 port 48304 ssh2: RSA SHA256:qI3Rqh4GgknoOSa4Ob8LNY71+Sp+5e1kGsNgL9KgYQ8 May 15 23:52:31.958478 sshd-session[4398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:52:31.967079 systemd-logind[1468]: New session 25 of user core. May 15 23:52:31.975634 systemd[1]: Started session-25.scope - Session 25 of User core. May 15 23:52:32.025089 sshd[4401]: Connection closed by 10.0.0.1 port 48304 May 15 23:52:32.026408 sshd-session[4398]: pam_unix(sshd:session): session closed for user core May 15 23:52:32.045311 systemd[1]: sshd@24-10.0.0.54:22-10.0.0.1:48304.service: Deactivated successfully. May 15 23:52:32.048371 systemd[1]: session-25.scope: Deactivated successfully. May 15 23:52:32.049489 systemd-logind[1468]: Session 25 logged out. Waiting for processes to exit. 
May 15 23:52:32.064753 systemd[1]: Started sshd@25-10.0.0.54:22-10.0.0.1:48314.service - OpenSSH per-connection server daemon (10.0.0.1:48314). May 15 23:52:32.065822 systemd-logind[1468]: Removed session 25. May 15 23:52:32.102318 sshd[4411]: Accepted publickey for core from 10.0.0.1 port 48314 ssh2: RSA SHA256:qI3Rqh4GgknoOSa4Ob8LNY71+Sp+5e1kGsNgL9KgYQ8 May 15 23:52:32.103803 sshd-session[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:52:32.108501 systemd-logind[1468]: New session 26 of user core. May 15 23:52:32.118591 systemd[1]: Started session-26.scope - Session 26 of User core. May 15 23:52:32.235495 containerd[1483]: time="2025-05-15T23:52:32.235059229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v5pbh,Uid:563b0b64-e37e-418e-a9f5-005ffaad0dc2,Namespace:kube-system,Attempt:0,}" May 15 23:52:32.252780 containerd[1483]: time="2025-05-15T23:52:32.252239473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:52:32.252780 containerd[1483]: time="2025-05-15T23:52:32.252307273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:52:32.252780 containerd[1483]: time="2025-05-15T23:52:32.252320993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:52:32.252931 containerd[1483]: time="2025-05-15T23:52:32.252780554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:52:32.267617 systemd[1]: Started cri-containerd-681a9f90fe59b289b5e1f5ba4535b77c33229ea1d76ed639171e2f00d1b1797b.scope - libcontainer container 681a9f90fe59b289b5e1f5ba4535b77c33229ea1d76ed639171e2f00d1b1797b. May 15 23:52:32.286068 containerd[1483]: time="2025-05-15T23:52:32.285981279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v5pbh,Uid:563b0b64-e37e-418e-a9f5-005ffaad0dc2,Namespace:kube-system,Attempt:0,} returns sandbox id \"681a9f90fe59b289b5e1f5ba4535b77c33229ea1d76ed639171e2f00d1b1797b\"" May 15 23:52:32.288705 containerd[1483]: time="2025-05-15T23:52:32.288611885Z" level=info msg="CreateContainer within sandbox \"681a9f90fe59b289b5e1f5ba4535b77c33229ea1d76ed639171e2f00d1b1797b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 23:52:32.298521 containerd[1483]: time="2025-05-15T23:52:32.298463231Z" level=info msg="CreateContainer within sandbox \"681a9f90fe59b289b5e1f5ba4535b77c33229ea1d76ed639171e2f00d1b1797b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1581cf692af7ca1c5cf495bfd78c3ea85bca8aa5ad19ddceee5a97523f7bb274\"" May 15 23:52:32.299019 containerd[1483]: time="2025-05-15T23:52:32.298991752Z" level=info msg="StartContainer for \"1581cf692af7ca1c5cf495bfd78c3ea85bca8aa5ad19ddceee5a97523f7bb274\"" May 15 23:52:32.324623 systemd[1]: Started cri-containerd-1581cf692af7ca1c5cf495bfd78c3ea85bca8aa5ad19ddceee5a97523f7bb274.scope - libcontainer container 1581cf692af7ca1c5cf495bfd78c3ea85bca8aa5ad19ddceee5a97523f7bb274. 
May 15 23:52:32.349720 containerd[1483]: time="2025-05-15T23:52:32.349677521Z" level=info msg="StartContainer for \"1581cf692af7ca1c5cf495bfd78c3ea85bca8aa5ad19ddceee5a97523f7bb274\" returns successfully" May 15 23:52:32.382416 systemd[1]: cri-containerd-1581cf692af7ca1c5cf495bfd78c3ea85bca8aa5ad19ddceee5a97523f7bb274.scope: Deactivated successfully. May 15 23:52:32.410784 containerd[1483]: time="2025-05-15T23:52:32.410720517Z" level=info msg="shim disconnected" id=1581cf692af7ca1c5cf495bfd78c3ea85bca8aa5ad19ddceee5a97523f7bb274 namespace=k8s.io May 15 23:52:32.410784 containerd[1483]: time="2025-05-15T23:52:32.410772238Z" level=warning msg="cleaning up after shim disconnected" id=1581cf692af7ca1c5cf495bfd78c3ea85bca8aa5ad19ddceee5a97523f7bb274 namespace=k8s.io May 15 23:52:32.410784 containerd[1483]: time="2025-05-15T23:52:32.410780718Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:52:33.130314 containerd[1483]: time="2025-05-15T23:52:33.130180311Z" level=info msg="CreateContainer within sandbox \"681a9f90fe59b289b5e1f5ba4535b77c33229ea1d76ed639171e2f00d1b1797b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 23:52:33.142569 containerd[1483]: time="2025-05-15T23:52:33.142505187Z" level=info msg="CreateContainer within sandbox \"681a9f90fe59b289b5e1f5ba4535b77c33229ea1d76ed639171e2f00d1b1797b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"680140c461a0d56acecec9395388e5e454e4ad7ac1ead19231d42f9a0f7f20b8\"" May 15 23:52:33.144467 containerd[1483]: time="2025-05-15T23:52:33.144023271Z" level=info msg="StartContainer for \"680140c461a0d56acecec9395388e5e454e4ad7ac1ead19231d42f9a0f7f20b8\"" May 15 23:52:33.174631 systemd[1]: Started cri-containerd-680140c461a0d56acecec9395388e5e454e4ad7ac1ead19231d42f9a0f7f20b8.scope - libcontainer container 680140c461a0d56acecec9395388e5e454e4ad7ac1ead19231d42f9a0f7f20b8. May 15 23:52:33.200464 containerd[1483]: time="2025-05-15T23:52:33.200397951Z" level=info msg="StartContainer for \"680140c461a0d56acecec9395388e5e454e4ad7ac1ead19231d42f9a0f7f20b8\" returns successfully" May 15 23:52:33.204384 systemd[1]: cri-containerd-680140c461a0d56acecec9395388e5e454e4ad7ac1ead19231d42f9a0f7f20b8.scope: Deactivated successfully. May 15 23:52:33.224550 containerd[1483]: time="2025-05-15T23:52:33.224469100Z" level=info msg="shim disconnected" id=680140c461a0d56acecec9395388e5e454e4ad7ac1ead19231d42f9a0f7f20b8 namespace=k8s.io May 15 23:52:33.224550 containerd[1483]: time="2025-05-15T23:52:33.224534940Z" level=warning msg="cleaning up after shim disconnected" id=680140c461a0d56acecec9395388e5e454e4ad7ac1ead19231d42f9a0f7f20b8 namespace=k8s.io May 15 23:52:33.224550 containerd[1483]: time="2025-05-15T23:52:33.224543740Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:52:34.008303 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-680140c461a0d56acecec9395388e5e454e4ad7ac1ead19231d42f9a0f7f20b8-rootfs.mount: Deactivated successfully. May 15 23:52:34.132980 containerd[1483]: time="2025-05-15T23:52:34.132929122Z" level=info msg="CreateContainer within sandbox \"681a9f90fe59b289b5e1f5ba4535b77c33229ea1d76ed639171e2f00d1b1797b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 23:52:34.148208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3960167347.mount: Deactivated successfully. 
May 15 23:52:34.154042 containerd[1483]: time="2025-05-15T23:52:34.153995308Z" level=info msg="CreateContainer within sandbox \"681a9f90fe59b289b5e1f5ba4535b77c33229ea1d76ed639171e2f00d1b1797b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"840fcfd4ac95fc733554d6f6524b649ef0d6fb7798bb09e23c805264160fd8c8\"" May 15 23:52:34.155600 containerd[1483]: time="2025-05-15T23:52:34.154666390Z" level=info msg="StartContainer for \"840fcfd4ac95fc733554d6f6524b649ef0d6fb7798bb09e23c805264160fd8c8\"" May 15 23:52:34.180674 systemd[1]: Started cri-containerd-840fcfd4ac95fc733554d6f6524b649ef0d6fb7798bb09e23c805264160fd8c8.scope - libcontainer container 840fcfd4ac95fc733554d6f6524b649ef0d6fb7798bb09e23c805264160fd8c8. May 15 23:52:34.204565 systemd[1]: cri-containerd-840fcfd4ac95fc733554d6f6524b649ef0d6fb7798bb09e23c805264160fd8c8.scope: Deactivated successfully. May 15 23:52:34.205147 containerd[1483]: time="2025-05-15T23:52:34.204810427Z" level=info msg="StartContainer for \"840fcfd4ac95fc733554d6f6524b649ef0d6fb7798bb09e23c805264160fd8c8\" returns successfully" May 15 23:52:34.235450 containerd[1483]: time="2025-05-15T23:52:34.235377922Z" level=info msg="shim disconnected" id=840fcfd4ac95fc733554d6f6524b649ef0d6fb7798bb09e23c805264160fd8c8 namespace=k8s.io May 15 23:52:34.235450 containerd[1483]: time="2025-05-15T23:52:34.235428922Z" level=warning msg="cleaning up after shim disconnected" id=840fcfd4ac95fc733554d6f6524b649ef0d6fb7798bb09e23c805264160fd8c8 namespace=k8s.io May 15 23:52:34.235450 containerd[1483]: time="2025-05-15T23:52:34.235447002Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:52:34.980796 kubelet[2570]: E0515 23:52:34.980755 2570 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 23:52:35.008379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-840fcfd4ac95fc733554d6f6524b649ef0d6fb7798bb09e23c805264160fd8c8-rootfs.mount: Deactivated successfully. May 15 23:52:35.136764 containerd[1483]: time="2025-05-15T23:52:35.136720897Z" level=info msg="CreateContainer within sandbox \"681a9f90fe59b289b5e1f5ba4535b77c33229ea1d76ed639171e2f00d1b1797b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 23:52:35.153773 containerd[1483]: time="2025-05-15T23:52:35.153693395Z" level=info msg="CreateContainer within sandbox \"681a9f90fe59b289b5e1f5ba4535b77c33229ea1d76ed639171e2f00d1b1797b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f59b7d8ef47c218c59c343fbe195111da4a79d01ff980ce51e402f13e5d8cb26\"" May 15 23:52:35.155248 containerd[1483]: time="2025-05-15T23:52:35.154320837Z" level=info msg="StartContainer for \"f59b7d8ef47c218c59c343fbe195111da4a79d01ff980ce51e402f13e5d8cb26\"" May 15 23:52:35.188659 systemd[1]: Started cri-containerd-f59b7d8ef47c218c59c343fbe195111da4a79d01ff980ce51e402f13e5d8cb26.scope - libcontainer container f59b7d8ef47c218c59c343fbe195111da4a79d01ff980ce51e402f13e5d8cb26. May 15 23:52:35.208603 systemd[1]: cri-containerd-f59b7d8ef47c218c59c343fbe195111da4a79d01ff980ce51e402f13e5d8cb26.scope: Deactivated successfully. 
May 15 23:52:35.212640 containerd[1483]: time="2025-05-15T23:52:35.212602195Z" level=info msg="StartContainer for \"f59b7d8ef47c218c59c343fbe195111da4a79d01ff980ce51e402f13e5d8cb26\" returns successfully" May 15 23:52:35.214668 containerd[1483]: time="2025-05-15T23:52:35.214565842Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod563b0b64_e37e_418e_a9f5_005ffaad0dc2.slice/cri-containerd-f59b7d8ef47c218c59c343fbe195111da4a79d01ff980ce51e402f13e5d8cb26.scope/memory.events\": no such file or directory" May 15 23:52:35.230304 containerd[1483]: time="2025-05-15T23:52:35.230251295Z" level=info msg="shim disconnected" id=f59b7d8ef47c218c59c343fbe195111da4a79d01ff980ce51e402f13e5d8cb26 namespace=k8s.io May 15 23:52:35.230474 containerd[1483]: time="2025-05-15T23:52:35.230307215Z" level=warning msg="cleaning up after shim disconnected" id=f59b7d8ef47c218c59c343fbe195111da4a79d01ff980ce51e402f13e5d8cb26 namespace=k8s.io May 15 23:52:35.230474 containerd[1483]: time="2025-05-15T23:52:35.230316495Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:52:36.008412 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f59b7d8ef47c218c59c343fbe195111da4a79d01ff980ce51e402f13e5d8cb26-rootfs.mount: Deactivated successfully. May 15 23:52:36.140485 containerd[1483]: time="2025-05-15T23:52:36.140415906Z" level=info msg="CreateContainer within sandbox \"681a9f90fe59b289b5e1f5ba4535b77c33229ea1d76ed639171e2f00d1b1797b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 23:52:36.159303 containerd[1483]: time="2025-05-15T23:52:36.159253175Z" level=info msg="CreateContainer within sandbox \"681a9f90fe59b289b5e1f5ba4535b77c33229ea1d76ed639171e2f00d1b1797b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"501fa19c143042eab5575f37fbe4a91ede593f59bef773a175912609f29f19b4\"" May 15 23:52:36.160077 containerd[1483]: time="2025-05-15T23:52:36.160013778Z" level=info msg="StartContainer for \"501fa19c143042eab5575f37fbe4a91ede593f59bef773a175912609f29f19b4\"" May 15 23:52:36.184617 systemd[1]: Started cri-containerd-501fa19c143042eab5575f37fbe4a91ede593f59bef773a175912609f29f19b4.scope - libcontainer container 501fa19c143042eab5575f37fbe4a91ede593f59bef773a175912609f29f19b4. 
May 15 23:52:36.209915 containerd[1483]: time="2025-05-15T23:52:36.209701040Z" level=info msg="StartContainer for \"501fa19c143042eab5575f37fbe4a91ede593f59bef773a175912609f29f19b4\" returns successfully" May 15 23:52:36.486461 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 15 23:52:37.104659 kubelet[2570]: I0515 23:52:37.104598 2570 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T23:52:37Z","lastTransitionTime":"2025-05-15T23:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 15 23:52:37.156637 kubelet[2570]: I0515 23:52:37.156536 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-v5pbh" podStartSLOduration=6.156519628 podStartE2EDuration="6.156519628s" podCreationTimestamp="2025-05-15 23:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:52:37.155455744 +0000 UTC m=+82.303057199" watchObservedRunningTime="2025-05-15 23:52:37.156519628 +0000 UTC m=+82.304121083" May 15 23:52:39.402064 systemd-networkd[1410]: lxc_health: Link UP May 15 23:52:39.413114 systemd-networkd[1410]: lxc_health: Gained carrier May 15 23:52:40.576623 systemd-networkd[1410]: lxc_health: Gained IPv6LL May 15 23:52:44.876333 sshd[4414]: Connection closed by 10.0.0.1 port 48314 May 15 23:52:44.876723 sshd-session[4411]: pam_unix(sshd:session): session closed for user core May 15 23:52:44.880268 systemd[1]: sshd@25-10.0.0.54:22-10.0.0.1:48314.service: Deactivated successfully. May 15 23:52:44.881941 systemd[1]: session-26.scope: Deactivated successfully. May 15 23:52:44.883580 systemd-logind[1468]: Session 26 logged out. Waiting for processes to exit. May 15 23:52:44.884674 systemd-logind[1468]: Removed session 26.