May 13 23:48:44.922598 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 13 23:48:44.922620 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 13 22:16:18 -00 2025
May 13 23:48:44.922641 kernel: KASLR enabled
May 13 23:48:44.922647 kernel: efi: EFI v2.7 by EDK II
May 13 23:48:44.922653 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb4ff018 ACPI 2.0=0xd93ef018 RNG=0xd93efa18 MEMRESERVE=0xd91e1f18
May 13 23:48:44.922659 kernel: random: crng init done
May 13 23:48:44.922666 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
May 13 23:48:44.922672 kernel: secureboot: Secure boot enabled
May 13 23:48:44.922677 kernel: ACPI: Early table checksum verification disabled
May 13 23:48:44.922684 kernel: ACPI: RSDP 0x00000000D93EF018 000024 (v02 BOCHS )
May 13 23:48:44.922691 kernel: ACPI: XSDT 0x00000000D93EFF18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 13 23:48:44.922697 kernel: ACPI: FACP 0x00000000D93EFB18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:48:44.922703 kernel: ACPI: DSDT 0x00000000D93ED018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:48:44.922710 kernel: ACPI: APIC 0x00000000D93EFC98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:48:44.922717 kernel: ACPI: PPTT 0x00000000D93EF098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:48:44.922725 kernel: ACPI: GTDT 0x00000000D93EF818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:48:44.922732 kernel: ACPI: MCFG 0x00000000D93EFA98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:48:44.922739 kernel: ACPI: SPCR 0x00000000D93EF918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:48:44.922745 kernel: ACPI: DBG2 0x00000000D93EF998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:48:44.922752 kernel: ACPI: IORT 0x00000000D93EF198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:48:44.922758 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 13 23:48:44.922764 kernel: NUMA: Failed to initialise from firmware
May 13 23:48:44.922770 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 13 23:48:44.922776 kernel: NUMA: NODE_DATA [mem 0xdc729800-0xdc72efff]
May 13 23:48:44.922783 kernel: Zone ranges:
May 13 23:48:44.922790 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 13 23:48:44.922796 kernel: DMA32 empty
May 13 23:48:44.922802 kernel: Normal empty
May 13 23:48:44.922808 kernel: Movable zone start for each node
May 13 23:48:44.922814 kernel: Early memory node ranges
May 13 23:48:44.922820 kernel: node 0: [mem 0x0000000040000000-0x00000000d93effff]
May 13 23:48:44.922827 kernel: node 0: [mem 0x00000000d93f0000-0x00000000d972ffff]
May 13 23:48:44.922833 kernel: node 0: [mem 0x00000000d9730000-0x00000000dcbfffff]
May 13 23:48:44.922839 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
May 13 23:48:44.922845 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 13 23:48:44.922852 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 13 23:48:44.922858 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 13 23:48:44.922866 kernel: psci: probing for conduit method from ACPI.
May 13 23:48:44.922872 kernel: psci: PSCIv1.1 detected in firmware.
May 13 23:48:44.922878 kernel: psci: Using standard PSCI v0.2 function IDs
May 13 23:48:44.922887 kernel: psci: Trusted OS migration not required
May 13 23:48:44.922894 kernel: psci: SMC Calling Convention v1.1
May 13 23:48:44.922901 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 13 23:48:44.922907 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 13 23:48:44.922916 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 13 23:48:44.922923 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 13 23:48:44.922929 kernel: Detected PIPT I-cache on CPU0
May 13 23:48:44.922936 kernel: CPU features: detected: GIC system register CPU interface
May 13 23:48:44.922943 kernel: CPU features: detected: Hardware dirty bit management
May 13 23:48:44.922950 kernel: CPU features: detected: Spectre-v4
May 13 23:48:44.922956 kernel: CPU features: detected: Spectre-BHB
May 13 23:48:44.922963 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 13 23:48:44.922970 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 13 23:48:44.922976 kernel: CPU features: detected: ARM erratum 1418040
May 13 23:48:44.922984 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 13 23:48:44.922991 kernel: alternatives: applying boot alternatives
May 13 23:48:44.922999 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5
May 13 23:48:44.923006 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 23:48:44.923012 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 23:48:44.923019 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 23:48:44.923026 kernel: Fallback order for Node 0: 0
May 13 23:48:44.923033 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 13 23:48:44.923039 kernel: Policy zone: DMA
May 13 23:48:44.923046 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 23:48:44.923054 kernel: software IO TLB: area num 4.
May 13 23:48:44.923061 kernel: software IO TLB: mapped [mem 0x00000000d2800000-0x00000000d6800000] (64MB)
May 13 23:48:44.923069 kernel: Memory: 2385752K/2572288K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38464K init, 897K bss, 186536K reserved, 0K cma-reserved)
May 13 23:48:44.923076 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 23:48:44.923083 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 23:48:44.923090 kernel: rcu: RCU event tracing is enabled.
May 13 23:48:44.923097 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 23:48:44.923104 kernel: Trampoline variant of Tasks RCU enabled.
May 13 23:48:44.923110 kernel: Tracing variant of Tasks RCU enabled.
May 13 23:48:44.923117 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 23:48:44.923124 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 23:48:44.923131 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 13 23:48:44.923139 kernel: GICv3: 256 SPIs implemented
May 13 23:48:44.923146 kernel: GICv3: 0 Extended SPIs implemented
May 13 23:48:44.923152 kernel: Root IRQ handler: gic_handle_irq
May 13 23:48:44.923159 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 13 23:48:44.923165 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 13 23:48:44.923172 kernel: ITS [mem 0x08080000-0x0809ffff]
May 13 23:48:44.923179 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 13 23:48:44.923186 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 13 23:48:44.923193 kernel: GICv3: using LPI property table @0x00000000400f0000
May 13 23:48:44.923199 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 13 23:48:44.923206 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 23:48:44.923214 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:48:44.923221 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 13 23:48:44.923228 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 13 23:48:44.923235 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 13 23:48:44.923241 kernel: arm-pv: using stolen time PV
May 13 23:48:44.923248 kernel: Console: colour dummy device 80x25
May 13 23:48:44.923255 kernel: ACPI: Core revision 20230628
May 13 23:48:44.923263 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 13 23:48:44.923269 kernel: pid_max: default: 32768 minimum: 301
May 13 23:48:44.923276 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 23:48:44.923295 kernel: landlock: Up and running.
May 13 23:48:44.923302 kernel: SELinux: Initializing.
May 13 23:48:44.923309 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 23:48:44.923316 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 23:48:44.923324 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 13 23:48:44.923331 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 23:48:44.923338 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 23:48:44.923345 kernel: rcu: Hierarchical SRCU implementation.
May 13 23:48:44.923352 kernel: rcu: Max phase no-delay instances is 400.
May 13 23:48:44.923361 kernel: Platform MSI: ITS@0x8080000 domain created
May 13 23:48:44.923368 kernel: PCI/MSI: ITS@0x8080000 domain created
May 13 23:48:44.923374 kernel: Remapping and enabling EFI services.
May 13 23:48:44.923381 kernel: smp: Bringing up secondary CPUs ...
May 13 23:48:44.923388 kernel: Detected PIPT I-cache on CPU1
May 13 23:48:44.923395 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 13 23:48:44.923402 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 13 23:48:44.923409 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:48:44.923415 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 13 23:48:44.923422 kernel: Detected PIPT I-cache on CPU2
May 13 23:48:44.923431 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 13 23:48:44.923438 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 13 23:48:44.923450 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:48:44.923459 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 13 23:48:44.923466 kernel: Detected PIPT I-cache on CPU3
May 13 23:48:44.923473 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 13 23:48:44.923480 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 13 23:48:44.923488 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:48:44.923495 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 13 23:48:44.923502 kernel: smp: Brought up 1 node, 4 CPUs
May 13 23:48:44.923509 kernel: SMP: Total of 4 processors activated.
May 13 23:48:44.923518 kernel: CPU features: detected: 32-bit EL0 Support
May 13 23:48:44.923525 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 13 23:48:44.923532 kernel: CPU features: detected: Common not Private translations
May 13 23:48:44.923540 kernel: CPU features: detected: CRC32 instructions
May 13 23:48:44.923547 kernel: CPU features: detected: Enhanced Virtualization Traps
May 13 23:48:44.923554 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 13 23:48:44.923562 kernel: CPU features: detected: LSE atomic instructions
May 13 23:48:44.923570 kernel: CPU features: detected: Privileged Access Never
May 13 23:48:44.923577 kernel: CPU features: detected: RAS Extension Support
May 13 23:48:44.923584 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 13 23:48:44.923591 kernel: CPU: All CPU(s) started at EL1
May 13 23:48:44.923598 kernel: alternatives: applying system-wide alternatives
May 13 23:48:44.923606 kernel: devtmpfs: initialized
May 13 23:48:44.923613 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 23:48:44.923620 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 23:48:44.923634 kernel: pinctrl core: initialized pinctrl subsystem
May 13 23:48:44.923642 kernel: SMBIOS 3.0.0 present.
May 13 23:48:44.923649 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 13 23:48:44.923657 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 23:48:44.923664 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 13 23:48:44.923672 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 13 23:48:44.923679 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 13 23:48:44.923687 kernel: audit: initializing netlink subsys (disabled)
May 13 23:48:44.923694 kernel: audit: type=2000 audit(0.022:1): state=initialized audit_enabled=0 res=1
May 13 23:48:44.923703 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 23:48:44.923711 kernel: cpuidle: using governor menu
May 13 23:48:44.923718 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 13 23:48:44.923725 kernel: ASID allocator initialised with 32768 entries
May 13 23:48:44.923732 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 23:48:44.923740 kernel: Serial: AMBA PL011 UART driver
May 13 23:48:44.923747 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 13 23:48:44.923754 kernel: Modules: 0 pages in range for non-PLT usage
May 13 23:48:44.923761 kernel: Modules: 509232 pages in range for PLT usage
May 13 23:48:44.923770 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 23:48:44.923778 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 13 23:48:44.923785 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 13 23:48:44.923792 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 13 23:48:44.923800 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 23:48:44.923807 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 13 23:48:44.923814 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 13 23:48:44.923821 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 13 23:48:44.923828 kernel: ACPI: Added _OSI(Module Device)
May 13 23:48:44.923837 kernel: ACPI: Added _OSI(Processor Device)
May 13 23:48:44.923844 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 23:48:44.923851 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 23:48:44.923858 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 23:48:44.923865 kernel: ACPI: Interpreter enabled
May 13 23:48:44.923873 kernel: ACPI: Using GIC for interrupt routing
May 13 23:48:44.923880 kernel: ACPI: MCFG table detected, 1 entries
May 13 23:48:44.923887 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 13 23:48:44.923894 kernel: printk: console [ttyAMA0] enabled
May 13 23:48:44.923903 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 23:48:44.924057 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 23:48:44.924137 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 13 23:48:44.924207 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 13 23:48:44.924273 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 13 23:48:44.924357 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 13 23:48:44.924367 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 13 23:48:44.924377 kernel: PCI host bridge to bus 0000:00
May 13 23:48:44.924454 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 13 23:48:44.924517 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 13 23:48:44.924579 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 13 23:48:44.924650 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 23:48:44.924736 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 13 23:48:44.924816 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 13 23:48:44.924893 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 13 23:48:44.924964 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 13 23:48:44.925032 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 23:48:44.925101 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 23:48:44.925176 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 13 23:48:44.925246 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 13 23:48:44.925341 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 13 23:48:44.925421 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 13 23:48:44.925479 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 13 23:48:44.925488 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 13 23:48:44.925496 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 13 23:48:44.925503 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 13 23:48:44.925510 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 13 23:48:44.925517 kernel: iommu: Default domain type: Translated
May 13 23:48:44.925524 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 13 23:48:44.925533 kernel: efivars: Registered efivars operations
May 13 23:48:44.925540 kernel: vgaarb: loaded
May 13 23:48:44.925547 kernel: clocksource: Switched to clocksource arch_sys_counter
May 13 23:48:44.925554 kernel: VFS: Disk quotas dquot_6.6.0
May 13 23:48:44.925562 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 23:48:44.925569 kernel: pnp: PnP ACPI init
May 13 23:48:44.925658 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 13 23:48:44.925669 kernel: pnp: PnP ACPI: found 1 devices
May 13 23:48:44.925683 kernel: NET: Registered PF_INET protocol family
May 13 23:48:44.925691 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 23:48:44.925698 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 23:48:44.925707 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 23:48:44.925718 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 23:48:44.925725 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 23:48:44.925732 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 23:48:44.925739 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 23:48:44.925747 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 23:48:44.925755 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 23:48:44.925763 kernel: PCI: CLS 0 bytes, default 64
May 13 23:48:44.925769 kernel: kvm [1]: HYP mode not available
May 13 23:48:44.925776 kernel: Initialise system trusted keyrings
May 13 23:48:44.925783 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 23:48:44.925790 kernel: Key type asymmetric registered
May 13 23:48:44.925797 kernel: Asymmetric key parser 'x509' registered
May 13 23:48:44.925804 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 13 23:48:44.925811 kernel: io scheduler mq-deadline registered
May 13 23:48:44.925820 kernel: io scheduler kyber registered
May 13 23:48:44.925827 kernel: io scheduler bfq registered
May 13 23:48:44.925834 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 13 23:48:44.925841 kernel: ACPI: button: Power Button [PWRB]
May 13 23:48:44.925849 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 13 23:48:44.925919 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 13 23:48:44.925929 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 23:48:44.925936 kernel: thunder_xcv, ver 1.0
May 13 23:48:44.925944 kernel: thunder_bgx, ver 1.0
May 13 23:48:44.925953 kernel: nicpf, ver 1.0
May 13 23:48:44.925960 kernel: nicvf, ver 1.0
May 13 23:48:44.926031 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 13 23:48:44.926094 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T23:48:44 UTC (1747180124)
May 13 23:48:44.926104 kernel: hid: raw HID events driver (C) Jiri Kosina
May 13 23:48:44.926111 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 13 23:48:44.926118 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 13 23:48:44.926125 kernel: watchdog: Hard watchdog permanently disabled
May 13 23:48:44.926134 kernel: NET: Registered PF_INET6 protocol family
May 13 23:48:44.926141 kernel: Segment Routing with IPv6
May 13 23:48:44.926149 kernel: In-situ OAM (IOAM) with IPv6
May 13 23:48:44.926156 kernel: NET: Registered PF_PACKET protocol family
May 13 23:48:44.926163 kernel: Key type dns_resolver registered
May 13 23:48:44.926169 kernel: registered taskstats version 1
May 13 23:48:44.926176 kernel: Loading compiled-in X.509 certificates
May 13 23:48:44.926183 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 568a15bbab977599d8f910f319ba50c03c8a57bd'
May 13 23:48:44.926190 kernel: Key type .fscrypt registered
May 13 23:48:44.926199 kernel: Key type fscrypt-provisioning registered
May 13 23:48:44.926205 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 23:48:44.926212 kernel: ima: Allocated hash algorithm: sha1
May 13 23:48:44.926219 kernel: ima: No architecture policies found
May 13 23:48:44.926226 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 13 23:48:44.926233 kernel: clk: Disabling unused clocks
May 13 23:48:44.926240 kernel: Freeing unused kernel memory: 38464K
May 13 23:48:44.926247 kernel: Run /init as init process
May 13 23:48:44.926254 kernel: with arguments:
May 13 23:48:44.926262 kernel: /init
May 13 23:48:44.926269 kernel: with environment:
May 13 23:48:44.926275 kernel: HOME=/
May 13 23:48:44.926290 kernel: TERM=linux
May 13 23:48:44.926298 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 23:48:44.926306 systemd[1]: Successfully made /usr/ read-only.
May 13 23:48:44.926316 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:48:44.926324 systemd[1]: Detected virtualization kvm.
May 13 23:48:44.926333 systemd[1]: Detected architecture arm64.
May 13 23:48:44.926340 systemd[1]: Running in initrd.
May 13 23:48:44.926348 systemd[1]: No hostname configured, using default hostname.
May 13 23:48:44.926356 systemd[1]: Hostname set to .
May 13 23:48:44.926363 systemd[1]: Initializing machine ID from VM UUID.
May 13 23:48:44.926370 systemd[1]: Queued start job for default target initrd.target.
May 13 23:48:44.926378 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:48:44.926385 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:48:44.926395 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 23:48:44.926403 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:48:44.926411 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 23:48:44.926419 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 23:48:44.926428 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 23:48:44.926436 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 23:48:44.926445 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:48:44.926453 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 23:48:44.926460 systemd[1]: Reached target paths.target - Path Units.
May 13 23:48:44.926468 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:48:44.926475 systemd[1]: Reached target swap.target - Swaps.
May 13 23:48:44.926483 systemd[1]: Reached target timers.target - Timer Units.
May 13 23:48:44.926491 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 23:48:44.926498 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:48:44.926506 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 23:48:44.926516 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 13 23:48:44.926524 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:48:44.926531 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:48:44.926539 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:48:44.926547 systemd[1]: Reached target sockets.target - Socket Units.
May 13 23:48:44.926555 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 23:48:44.926562 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:48:44.926570 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 23:48:44.926578 systemd[1]: Starting systemd-fsck-usr.service...
May 13 23:48:44.926587 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:48:44.926595 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:48:44.926603 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:48:44.926611 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 23:48:44.926619 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:48:44.926635 systemd[1]: Finished systemd-fsck-usr.service.
May 13 23:48:44.926643 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 23:48:44.926651 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:48:44.926682 systemd-journald[238]: Collecting audit messages is disabled.
May 13 23:48:44.926707 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:48:44.926716 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 23:48:44.926725 systemd-journald[238]: Journal started
May 13 23:48:44.926746 systemd-journald[238]: Runtime Journal (/run/log/journal/503627cd307c47b1af1c721fbafbad86) is 5.9M, max 47.3M, 41.4M free.
May 13 23:48:44.911795 systemd-modules-load[239]: Inserted module 'overlay'
May 13 23:48:44.935068 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 23:48:44.935086 kernel: Bridge firewalling registered
May 13 23:48:44.935095 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:48:44.931833 systemd-modules-load[239]: Inserted module 'br_netfilter'
May 13 23:48:44.937241 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:48:44.938638 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:48:44.942466 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:48:44.945996 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:48:44.948718 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 23:48:44.952490 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:48:44.954567 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 23:48:44.960904 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:48:44.964194 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:48:44.966569 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 23:48:44.973981 dracut-cmdline[273]: dracut-dracut-053
May 13 23:48:44.980265 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5
May 13 23:48:45.010607 systemd-resolved[281]: Positive Trust Anchors:
May 13 23:48:45.010634 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 23:48:45.010666 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 23:48:45.015539 systemd-resolved[281]: Defaulting to hostname 'linux'.
May 13 23:48:45.020410 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 23:48:45.021479 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 23:48:45.051323 kernel: SCSI subsystem initialized
May 13 23:48:45.055299 kernel: Loading iSCSI transport class v2.0-870.
May 13 23:48:45.063317 kernel: iscsi: registered transport (tcp)
May 13 23:48:45.077582 kernel: iscsi: registered transport (qla4xxx)
May 13 23:48:45.077614 kernel: QLogic iSCSI HBA Driver
May 13 23:48:45.118860 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 23:48:45.120879 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 23:48:45.150575 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 23:48:45.150636 kernel: device-mapper: uevent: version 1.0.3
May 13 23:48:45.151839 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 23:48:45.198304 kernel: raid6: neonx8 gen() 15783 MB/s
May 13 23:48:45.215301 kernel: raid6: neonx4 gen() 15578 MB/s
May 13 23:48:45.232300 kernel: raid6: neonx2 gen() 13249 MB/s
May 13 23:48:45.249300 kernel: raid6: neonx1 gen() 10145 MB/s
May 13 23:48:45.266302 kernel: raid6: int64x8 gen() 6599 MB/s
May 13 23:48:45.284303 kernel: raid6: int64x4 gen() 7557 MB/s
May 13 23:48:45.301303 kernel: raid6: int64x2 gen() 6067 MB/s
May 13 23:48:45.318813 kernel: raid6: int64x1 gen() 4999 MB/s
May 13 23:48:45.318832 kernel: raid6: using algorithm neonx8 gen() 15783 MB/s
May 13 23:48:45.336302 kernel: raid6: .... xor() 11946 MB/s, rmw enabled
May 13 23:48:45.336322 kernel: raid6: using neon recovery algorithm
May 13 23:48:45.341304 kernel: xor: measuring software checksum speed
May 13 23:48:45.342340 kernel: 8regs : 19566 MB/sec
May 13 23:48:45.342352 kernel: 32regs : 21687 MB/sec
May 13 23:48:45.343391 kernel: arm64_neon : 27946 MB/sec
May 13 23:48:45.343403 kernel: xor: using function: arm64_neon (27946 MB/sec)
May 13 23:48:45.393314 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 23:48:45.404267 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:48:45.408404 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:48:45.445261 systemd-udevd[462]: Using default interface naming scheme 'v255'.
May 13 23:48:45.449345 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:48:45.453442 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 23:48:45.481117 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
May 13 23:48:45.515886 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 23:48:45.519515 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:48:45.604308 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:48:45.608401 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 23:48:45.632187 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 23:48:45.633814 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:48:45.635351 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:48:45.636196 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:48:45.639576 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 23:48:45.660414 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 13 23:48:45.661126 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 23:48:45.665746 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:48:45.670951 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:48:45.671067 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:48:45.680046 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 23:48:45.680071 kernel: GPT:9289727 != 19775487
May 13 23:48:45.680081 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 23:48:45.680093 kernel: GPT:9289727 != 19775487
May 13 23:48:45.680102 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 23:48:45.680113 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:48:45.674005 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:48:45.674948 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:48:45.675099 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:48:45.680693 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:48:45.686513 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:48:45.709335 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (514)
May 13 23:48:45.712690 kernel: BTRFS: device fsid ee830c17-a93d-4109-bd12-3fec8ef6763d devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (526)
May 13 23:48:45.713358 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:48:45.721372 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 23:48:45.733604 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 23:48:45.741425 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 23:48:45.747533 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 23:48:45.748656 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 23:48:45.751646 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 23:48:45.754241 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:48:45.775066 disk-uuid[558]: Primary Header is updated.
May 13 23:48:45.775066 disk-uuid[558]: Secondary Entries is updated.
May 13 23:48:45.775066 disk-uuid[558]: Secondary Header is updated.
May 13 23:48:45.785323 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:48:45.785387 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:48:46.799313 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:48:46.801288 disk-uuid[563]: The operation has completed successfully.
May 13 23:48:46.840590 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 23:48:46.840703 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 23:48:46.864627 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 23:48:46.877732 sh[579]: Success
May 13 23:48:46.907421 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 13 23:48:46.940426 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 23:48:46.942234 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 23:48:46.952081 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 23:48:46.959515 kernel: BTRFS info (device dm-0): first mount of filesystem ee830c17-a93d-4109-bd12-3fec8ef6763d
May 13 23:48:46.959555 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 13 23:48:46.959566 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 23:48:46.960403 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 23:48:46.961578 kernel: BTRFS info (device dm-0): using free space tree
May 13 23:48:46.964973 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 23:48:46.966309 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 23:48:46.967166 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 23:48:46.969332 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 23:48:46.994360 kernel: BTRFS info (device vda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:48:46.995779 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 23:48:46.995825 kernel: BTRFS info (device vda6): using free space tree
May 13 23:48:46.998340 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 23:48:47.003324 kernel: BTRFS info (device vda6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:48:47.006177 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 23:48:47.008994 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 23:48:47.097148 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:48:47.100244 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:48:47.162349 systemd-networkd[769]: lo: Link UP
May 13 23:48:47.162356 systemd-networkd[769]: lo: Gained carrier
May 13 23:48:47.163165 systemd-networkd[769]: Enumeration completed
May 13 23:48:47.163353 ignition[670]: Ignition 2.20.0
May 13 23:48:47.163684 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:48:47.163360 ignition[670]: Stage: fetch-offline
May 13 23:48:47.163688 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:48:47.163396 ignition[670]: no configs at "/usr/lib/ignition/base.d"
May 13 23:48:47.164641 systemd-networkd[769]: eth0: Link UP
May 13 23:48:47.163404 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:48:47.164644 systemd-networkd[769]: eth0: Gained carrier
May 13 23:48:47.163644 ignition[670]: parsed url from cmdline: ""
May 13 23:48:47.164652 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:48:47.163647 ignition[670]: no config URL provided
May 13 23:48:47.166012 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 23:48:47.163651 ignition[670]: reading system config file "/usr/lib/ignition/user.ign"
May 13 23:48:47.169326 systemd[1]: Reached target network.target - Network.
May 13 23:48:47.163658 ignition[670]: no config at "/usr/lib/ignition/user.ign"
May 13 23:48:47.163682 ignition[670]: op(1): [started] loading QEMU firmware config module
May 13 23:48:47.163687 ignition[670]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 23:48:47.182355 systemd-networkd[769]: eth0: DHCPv4 address 10.0.0.85/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 23:48:47.173948 ignition[670]: op(1): [finished] loading QEMU firmware config module
May 13 23:48:47.218181 ignition[670]: parsing config with SHA512: c72deeecea4a325f29f578bc97fcd5a37f5b879a52351a94d213ba7fea5239a1cc203f996ecd0bf041f23679b2e55bed7931b8df269eaa5d2e82bd2062ec7075
May 13 23:48:47.224693 unknown[670]: fetched base config from "system"
May 13 23:48:47.224704 unknown[670]: fetched user config from "qemu"
May 13 23:48:47.225453 ignition[670]: fetch-offline: fetch-offline passed
May 13 23:48:47.226638 ignition[670]: Ignition finished successfully
May 13 23:48:47.229136 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:48:47.230645 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 23:48:47.231517 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 23:48:47.258811 ignition[778]: Ignition 2.20.0
May 13 23:48:47.258823 ignition[778]: Stage: kargs
May 13 23:48:47.258972 ignition[778]: no configs at "/usr/lib/ignition/base.d"
May 13 23:48:47.258982 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:48:47.259905 ignition[778]: kargs: kargs passed
May 13 23:48:47.259956 ignition[778]: Ignition finished successfully
May 13 23:48:47.262262 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 23:48:47.264465 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 23:48:47.287790 ignition[786]: Ignition 2.20.0
May 13 23:48:47.287801 ignition[786]: Stage: disks
May 13 23:48:47.287956 ignition[786]: no configs at "/usr/lib/ignition/base.d"
May 13 23:48:47.287975 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:48:47.288895 ignition[786]: disks: disks passed
May 13 23:48:47.290297 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 23:48:47.288940 ignition[786]: Ignition finished successfully
May 13 23:48:47.291439 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 23:48:47.292412 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 23:48:47.293895 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:48:47.295057 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 23:48:47.296535 systemd[1]: Reached target basic.target - Basic System.
May 13 23:48:47.298948 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 23:48:47.322126 systemd-fsck[797]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 13 23:48:47.326038 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 23:48:47.328100 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 23:48:47.393318 kernel: EXT4-fs (vda9): mounted filesystem 9f8d74e6-c079-469f-823a-18a62077a2c7 r/w with ordered data mode. Quota mode: none.
May 13 23:48:47.393522 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 23:48:47.394983 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 23:48:47.397235 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 23:48:47.399098 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 23:48:47.399982 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 23:48:47.400030 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 23:48:47.400057 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:48:47.413182 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 23:48:47.415430 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 23:48:47.423639 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (805)
May 13 23:48:47.423696 kernel: BTRFS info (device vda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:48:47.423707 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 23:48:47.423717 kernel: BTRFS info (device vda6): using free space tree
May 13 23:48:47.428426 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 23:48:47.429819 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 23:48:47.469278 initrd-setup-root[829]: cut: /sysroot/etc/passwd: No such file or directory
May 13 23:48:47.472641 initrd-setup-root[836]: cut: /sysroot/etc/group: No such file or directory
May 13 23:48:47.476176 initrd-setup-root[843]: cut: /sysroot/etc/shadow: No such file or directory
May 13 23:48:47.479612 initrd-setup-root[850]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 23:48:47.560517 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 23:48:47.562646 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 23:48:47.564037 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 23:48:47.585315 kernel: BTRFS info (device vda6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:48:47.602514 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 23:48:47.612480 ignition[918]: INFO : Ignition 2.20.0
May 13 23:48:47.612480 ignition[918]: INFO : Stage: mount
May 13 23:48:47.614625 ignition[918]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:48:47.614625 ignition[918]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:48:47.614625 ignition[918]: INFO : mount: mount passed
May 13 23:48:47.614625 ignition[918]: INFO : Ignition finished successfully
May 13 23:48:47.615384 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 23:48:47.617997 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 23:48:47.958852 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 23:48:47.960443 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 23:48:47.995713 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (932)
May 13 23:48:47.995757 kernel: BTRFS info (device vda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:48:47.995768 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 23:48:47.996420 kernel: BTRFS info (device vda6): using free space tree
May 13 23:48:47.999295 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 23:48:48.000203 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 23:48:48.023885 ignition[949]: INFO : Ignition 2.20.0
May 13 23:48:48.023885 ignition[949]: INFO : Stage: files
May 13 23:48:48.025250 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:48:48.025250 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:48:48.025250 ignition[949]: DEBUG : files: compiled without relabeling support, skipping
May 13 23:48:48.028248 ignition[949]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 23:48:48.028248 ignition[949]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 23:48:48.028248 ignition[949]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 23:48:48.028248 ignition[949]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 23:48:48.032152 ignition[949]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 23:48:48.032152 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 13 23:48:48.032152 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 13 23:48:48.028493 unknown[949]: wrote ssh authorized keys file for user: core
May 13 23:48:48.084390 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 23:48:48.648810 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 13 23:48:48.648810 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 23:48:48.652193 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 13 23:48:49.006170 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 13 23:48:49.032522 systemd-networkd[769]: eth0: Gained IPv6LL
May 13 23:48:49.071310 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 23:48:49.073120 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 13 23:48:49.073120 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 13 23:48:49.073120 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:48:49.073120 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:48:49.073120 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:48:49.073120 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:48:49.073120 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:48:49.073120 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:48:49.073120 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:48:49.073120 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:48:49.073120 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 13 23:48:49.073120 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 13 23:48:49.073120 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 13 23:48:49.073120 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
May 13 23:48:49.318429 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 13 23:48:49.700148 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 13 23:48:49.700148 ignition[949]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 13 23:48:49.703265 ignition[949]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:48:49.703265 ignition[949]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:48:49.703265 ignition[949]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 13 23:48:49.703265 ignition[949]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 13 23:48:49.703265 ignition[949]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 23:48:49.703265 ignition[949]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 23:48:49.703265 ignition[949]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 13 23:48:49.703265 ignition[949]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 13 23:48:49.716883 ignition[949]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 23:48:49.720474 ignition[949]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 23:48:49.722790 ignition[949]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 23:48:49.722790 ignition[949]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 13 23:48:49.722790 ignition[949]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 13 23:48:49.722790 ignition[949]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:48:49.722790 ignition[949]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:48:49.722790 ignition[949]: INFO : files: files passed
May 13 23:48:49.722790 ignition[949]: INFO : Ignition finished successfully
May 13 23:48:49.723269 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 23:48:49.725899 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 23:48:49.728450 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 23:48:49.746220 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 23:48:49.746327 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 23:48:49.749240 initrd-setup-root-after-ignition[977]: grep: /sysroot/oem/oem-release: No such file or directory
May 13 23:48:49.750360 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:48:49.750360 initrd-setup-root-after-ignition[980]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:48:49.752872 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:48:49.752372 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:48:49.756249 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 23:48:49.758604 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 23:48:49.809347 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 23:48:49.809514 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 23:48:49.811841 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 23:48:49.813072 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 23:48:49.814504 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 23:48:49.815340 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 23:48:49.841499 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:48:49.844001 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 23:48:49.868281 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 23:48:49.869246 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:48:49.871046 systemd[1]: Stopped target timers.target - Timer Units.
May 13 23:48:49.872384 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 23:48:49.872505 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:48:49.874432 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 23:48:49.875923 systemd[1]: Stopped target basic.target - Basic System.
May 13 23:48:49.877125 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 23:48:49.878458 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:48:49.879851 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 23:48:49.881267 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 23:48:49.882801 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:48:49.884182 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 23:48:49.885801 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 23:48:49.887022 systemd[1]: Stopped target swap.target - Swaps.
May 13 23:48:49.888178 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 23:48:49.888323 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:48:49.890168 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 23:48:49.891747 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:48:49.893210 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 23:48:49.896348 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:48:49.897317 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 23:48:49.897445 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 23:48:49.899938 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 23:48:49.900057 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:48:49.901614 systemd[1]: Stopped target paths.target - Path Units.
May 13 23:48:49.902801 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 23:48:49.906349 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:48:49.907355 systemd[1]: Stopped target slices.target - Slice Units.
May 13 23:48:49.909103 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 23:48:49.910383 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 23:48:49.910471 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 23:48:49.911839 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 23:48:49.911929 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:48:49.913157 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 23:48:49.913270 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:48:49.914640 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 23:48:49.914745 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 23:48:49.916630 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 23:48:49.917755 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 23:48:49.917875 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:48:49.920337 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 23:48:49.921533 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 23:48:49.921669 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:48:49.923122 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 23:48:49.923225 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 23:48:49.930166 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 23:48:49.931361 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 23:48:49.935549 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 23:48:49.937373 ignition[1005]: INFO : Ignition 2.20.0
May 13 23:48:49.937373 ignition[1005]: INFO : Stage: umount
May 13 23:48:49.939375 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:48:49.939375 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:48:49.939375 ignition[1005]: INFO : umount: umount passed
May 13 23:48:49.939375 ignition[1005]: INFO : Ignition finished successfully
May 13 23:48:49.940219 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 23:48:49.940389 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 23:48:49.942068 systemd[1]: Stopped target network.target - Network.
May 13 23:48:49.943016 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 23:48:49.943075 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 23:48:49.944342 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 23:48:49.944383 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 23:48:49.945678 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 23:48:49.945719 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 23:48:49.946993 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 23:48:49.947031 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 23:48:49.948296 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 23:48:49.949646 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 23:48:49.957551 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 23:48:49.957689 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 23:48:49.961442 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 13 23:48:49.961689 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 23:48:49.961794 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 23:48:49.964478 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 13 23:48:49.965040 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 23:48:49.965087 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:48:49.967112 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 23:48:49.968451 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 23:48:49.968509 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:48:49.969988 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 23:48:49.970029 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 23:48:49.972484 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 23:48:49.972529 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 23:48:49.974049 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 23:48:49.974089 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:48:49.976405 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:48:49.980223 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 23:48:49.980349 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 13 23:48:49.991421 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 23:48:49.991569 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 23:48:49.994042 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 23:48:49.994187 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:48:49.996032 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 23:48:49.996085 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 23:48:49.997506 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 23:48:49.997540 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:48:49.998869 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 23:48:49.998920 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:48:50.000939 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 23:48:50.000985 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 23:48:50.003107 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:48:50.003157 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:48:50.006216 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 23:48:50.007644 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 23:48:50.007696 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:48:50.010162 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:48:50.010201 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:48:50.013416 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 13 23:48:50.013469 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 23:48:50.016517 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 23:48:50.016613 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 23:48:50.018505 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 23:48:50.018602 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 23:48:50.021444 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 23:48:50.021552 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 23:48:50.023210 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 23:48:50.025243 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 23:48:50.052753 systemd[1]: Switching root.
May 13 23:48:50.084384 systemd-journald[238]: Journal stopped
May 13 23:48:50.895456 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
May 13 23:48:50.895516 kernel: SELinux: policy capability network_peer_controls=1
May 13 23:48:50.895529 kernel: SELinux: policy capability open_perms=1
May 13 23:48:50.895539 kernel: SELinux: policy capability extended_socket_class=1
May 13 23:48:50.895557 kernel: SELinux: policy capability always_check_network=0
May 13 23:48:50.895568 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 23:48:50.895578 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 23:48:50.895596 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 23:48:50.895606 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 23:48:50.895615 kernel: audit: type=1403 audit(1747180130.251:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 23:48:50.895630 systemd[1]: Successfully loaded SELinux policy in 37.277ms.
May 13 23:48:50.895647 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.923ms.
May 13 23:48:50.895658 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:48:50.895671 systemd[1]: Detected virtualization kvm.
May 13 23:48:50.895682 systemd[1]: Detected architecture arm64.
May 13 23:48:50.895692 systemd[1]: Detected first boot.
May 13 23:48:50.895708 systemd[1]: Initializing machine ID from VM UUID.
May 13 23:48:50.895719 zram_generator::config[1054]: No configuration found.
May 13 23:48:50.895730 kernel: NET: Registered PF_VSOCK protocol family
May 13 23:48:50.895740 systemd[1]: Populated /etc with preset unit settings.
May 13 23:48:50.895751 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 13 23:48:50.895762 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 23:48:50.895773 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 23:48:50.895783 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 23:48:50.895796 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 23:48:50.895807 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 23:48:50.895818 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 23:48:50.895829 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 23:48:50.895840 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 23:48:50.895851 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 23:48:50.895862 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 23:48:50.895873 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 23:48:50.895885 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:48:50.895896 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:48:50.895907 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 23:48:50.895918 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 23:48:50.895931 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 23:48:50.895943 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:48:50.895954 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 13 23:48:50.895965 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:48:50.895976 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 23:48:50.895988 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 23:48:50.895998 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 23:48:50.896009 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 23:48:50.896019 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:48:50.896030 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:48:50.896041 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:48:50.896052 systemd[1]: Reached target swap.target - Swaps.
May 13 23:48:50.896063 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 23:48:50.896075 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 23:48:50.896086 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 13 23:48:50.896096 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:48:50.896108 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:48:50.896119 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:48:50.896129 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 23:48:50.896139 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 23:48:50.896150 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 23:48:50.896160 systemd[1]: Mounting media.mount - External Media Directory...
May 13 23:48:50.896173 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 23:48:50.896184 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 23:48:50.896194 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 23:48:50.896206 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 23:48:50.896216 systemd[1]: Reached target machines.target - Containers.
May 13 23:48:50.896226 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 23:48:50.896237 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:48:50.896248 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:48:50.896259 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 23:48:50.896271 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:48:50.896402 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:48:50.896422 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:48:50.896434 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 23:48:50.896444 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:48:50.896455 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 23:48:50.896466 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 23:48:50.896477 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 23:48:50.896491 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 23:48:50.896502 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 23:48:50.896514 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:48:50.896525 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:48:50.896536 kernel: fuse: init (API version 7.39)
May 13 23:48:50.896552 kernel: loop: module loaded
May 13 23:48:50.896563 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:48:50.896575 kernel: ACPI: bus type drm_connector registered
May 13 23:48:50.896585 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 23:48:50.896597 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 23:48:50.896609 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 13 23:48:50.896620 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:48:50.896631 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 23:48:50.896641 systemd[1]: Stopped verity-setup.service.
May 13 23:48:50.896653 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 23:48:50.896667 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 23:48:50.896678 systemd[1]: Mounted media.mount - External Media Directory.
May 13 23:48:50.896689 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 23:48:50.896700 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 23:48:50.896711 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 23:48:50.896722 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:48:50.896733 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 23:48:50.896747 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 23:48:50.896758 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:48:50.896791 systemd-journald[1117]: Collecting audit messages is disabled.
May 13 23:48:50.896817 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:48:50.896828 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:48:50.896842 systemd-journald[1117]: Journal started
May 13 23:48:50.896864 systemd-journald[1117]: Runtime Journal (/run/log/journal/503627cd307c47b1af1c721fbafbad86) is 5.9M, max 47.3M, 41.4M free.
May 13 23:48:50.680186 systemd[1]: Queued start job for default target multi-user.target.
May 13 23:48:50.689236 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 13 23:48:50.689649 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 23:48:50.897964 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:48:50.900570 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:48:50.901399 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:48:50.901608 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:48:50.903032 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 23:48:50.903210 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 23:48:50.904454 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:48:50.904630 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:48:50.907312 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 23:48:50.908539 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:48:50.909713 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 23:48:50.911103 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 23:48:50.912467 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 13 23:48:50.925108 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 23:48:50.927905 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 23:48:50.929934 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 23:48:50.930936 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 23:48:50.930972 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:48:50.932699 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 13 23:48:50.943223 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 23:48:50.945246 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 23:48:50.946187 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:48:50.948402 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 23:48:50.951049 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 23:48:50.952161 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:48:50.953119 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 23:48:50.955250 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:48:50.956398 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:48:50.959465 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 23:48:50.964567 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 23:48:50.968307 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:48:50.969123 systemd-journald[1117]: Time spent on flushing to /var/log/journal/503627cd307c47b1af1c721fbafbad86 is 18.520ms for 871 entries.
May 13 23:48:50.969123 systemd-journald[1117]: System Journal (/var/log/journal/503627cd307c47b1af1c721fbafbad86) is 8M, max 195.6M, 187.6M free.
May 13 23:48:50.992678 systemd-journald[1117]: Received client request to flush runtime journal.
May 13 23:48:50.978607 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 23:48:50.979886 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 23:48:50.981389 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 23:48:50.982839 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 23:48:50.988663 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 23:48:50.994465 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 13 23:48:50.997307 kernel: loop0: detected capacity change from 0 to 126448
May 13 23:48:50.997474 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 13 23:48:51.005318 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 23:48:51.022310 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 23:48:51.024775 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:48:51.031829 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 13 23:48:51.042187 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 23:48:51.047426 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:48:51.050374 kernel: loop1: detected capacity change from 0 to 189592
May 13 23:48:51.063251 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 13 23:48:51.080311 kernel: loop2: detected capacity change from 0 to 103832
May 13 23:48:51.084415 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
May 13 23:48:51.084434 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
May 13 23:48:51.089707 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:48:51.120314 kernel: loop3: detected capacity change from 0 to 126448
May 13 23:48:51.126307 kernel: loop4: detected capacity change from 0 to 189592
May 13 23:48:51.135309 kernel: loop5: detected capacity change from 0 to 103832
May 13 23:48:51.140975 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 13 23:48:51.141429 (sd-merge)[1195]: Merged extensions into '/usr'.
May 13 23:48:51.144782 systemd[1]: Reload requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 23:48:51.144796 systemd[1]: Reloading...
May 13 23:48:51.197974 zram_generator::config[1222]: No configuration found.
May 13 23:48:51.304150 ldconfig[1166]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 23:48:51.311928 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:48:51.362366 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 23:48:51.362818 systemd[1]: Reloading finished in 217 ms.
May 13 23:48:51.380346 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 23:48:51.381621 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 23:48:51.397762 systemd[1]: Starting ensure-sysext.service...
May 13 23:48:51.399561 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 23:48:51.415471 systemd[1]: Reload requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)...
May 13 23:48:51.415488 systemd[1]: Reloading...
May 13 23:48:51.417715 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 23:48:51.418249 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 23:48:51.419062 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 23:48:51.419405 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
May 13 23:48:51.419559 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
May 13 23:48:51.422711 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:48:51.422840 systemd-tmpfiles[1258]: Skipping /boot
May 13 23:48:51.432734 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:48:51.432883 systemd-tmpfiles[1258]: Skipping /boot
May 13 23:48:51.467351 zram_generator::config[1290]: No configuration found.
May 13 23:48:51.549028 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:48:51.600245 systemd[1]: Reloading finished in 184 ms.
May 13 23:48:51.617325 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 23:48:51.623646 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:48:51.637099 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 23:48:51.639689 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 23:48:51.653253 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 23:48:51.659444 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 23:48:51.662149 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:48:51.665507 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 23:48:51.673500 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 23:48:51.677728 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:48:51.682459 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:48:51.687742 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:48:51.692152 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:48:51.693327 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:48:51.693464 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:48:51.696411 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 23:48:51.701462 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:48:51.701717 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:48:51.703596 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:48:51.703775 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:48:51.705629 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:48:51.705809 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:48:51.716944 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 23:48:51.720022 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 23:48:51.724751 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:48:51.726272 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:48:51.730000 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:48:51.737551 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:48:51.740524 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:48:51.740692 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:48:51.746460 systemd-udevd[1328]: Using default interface naming scheme 'v255'.
May 13 23:48:51.749560 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 23:48:51.750597 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 23:48:51.753449 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 23:48:51.755599 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:48:51.757395 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:48:51.758965 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:48:51.759134 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:48:51.760922 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:48:51.761091 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:48:51.766771 augenrules[1367]: No rules
May 13 23:48:51.767634 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 23:48:51.769333 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 23:48:51.770674 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 23:48:51.778029 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:48:51.779441 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:48:51.782583 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:48:51.790428 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:48:51.799497 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:48:51.800678 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:48:51.800730 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:48:51.800789 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 23:48:51.801100 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:48:51.802939 systemd[1]: Finished ensure-sysext.service.
May 13 23:48:51.804197 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:48:51.805579 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:48:51.807076 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:48:51.808329 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:48:51.811823 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:48:51.812008 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:48:51.822141 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:48:51.824369 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:48:51.830493 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:48:51.831391 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:48:51.831468 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:48:51.833200 systemd-resolved[1326]: Positive Trust Anchors:
May 13 23:48:51.833222 systemd-resolved[1326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 23:48:51.833253 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 23:48:51.835076 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 13 23:48:51.837983 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 13 23:48:51.843799 systemd-resolved[1326]: Defaulting to hostname 'linux'.
May 13 23:48:51.847594 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 23:48:51.849506 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 23:48:51.885338 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1397)
May 13 23:48:51.917245 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 23:48:51.918735 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 13 23:48:51.920046 systemd[1]: Reached target time-set.target - System Time Set.
May 13 23:48:51.922259 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 23:48:51.940497 systemd-networkd[1405]: lo: Link UP
May 13 23:48:51.940506 systemd-networkd[1405]: lo: Gained carrier
May 13 23:48:51.941562 systemd-networkd[1405]: Enumeration completed
May 13 23:48:51.941734 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 23:48:51.943117 systemd[1]: Reached target network.target - Network.
May 13 23:48:51.945832 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 13 23:48:51.948063 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 23:48:51.950400 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:48:51.950408 systemd-networkd[1405]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:48:51.951062 systemd-networkd[1405]: eth0: Link UP
May 13 23:48:51.951066 systemd-networkd[1405]: eth0: Gained carrier
May 13 23:48:51.951081 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:48:51.963949 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 23:48:51.968433 systemd-networkd[1405]: eth0: DHCPv4 address 10.0.0.85/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 23:48:51.969089 systemd-timesyncd[1406]: Network configuration changed, trying to establish connection.
May 13 23:48:52.442753 systemd-timesyncd[1406]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 13 23:48:52.442803 systemd-timesyncd[1406]: Initial clock synchronization to Tue 2025-05-13 23:48:52.442664 UTC.
May 13 23:48:52.442945 systemd-resolved[1326]: Clock change detected. Flushing caches.
May 13 23:48:52.448427 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 13 23:48:52.459532 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:48:52.474698 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 13 23:48:52.477480 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 13 23:48:52.498658 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 23:48:52.520648 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:48:52.534695 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 13 23:48:52.535946 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 23:48:52.536926 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 23:48:52.537944 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 13 23:48:52.539006 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 13 23:48:52.540248 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 13 23:48:52.541439 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 13 23:48:52.542486 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 13 23:48:52.543514 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 13 23:48:52.543552 systemd[1]: Reached target paths.target - Path Units.
May 13 23:48:52.544262 systemd[1]: Reached target timers.target - Timer Units.
May 13 23:48:52.546037 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 13 23:48:52.548391 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 13 23:48:52.551439 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 13 23:48:52.552742 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 13 23:48:52.553811 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 13 23:48:52.559175 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 13 23:48:52.560485 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 13 23:48:52.562639 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 13 23:48:52.564055 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 13 23:48:52.565065 systemd[1]: Reached target sockets.target - Socket Units.
May 13 23:48:52.565929 systemd[1]: Reached target basic.target - Basic System.
May 13 23:48:52.566751 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 13 23:48:52.566782 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 13 23:48:52.567835 systemd[1]: Starting containerd.service - containerd container runtime...
May 13 23:48:52.569664 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 13 23:48:52.572425 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 23:48:52.573459 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 13 23:48:52.576442 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 13 23:48:52.577819 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 13 23:48:52.579136 jq[1438]: false
May 13 23:48:52.579498 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 13 23:48:52.582458 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 13 23:48:52.590040 dbus-daemon[1437]: [system] SELinux support is enabled
May 13 23:48:52.593364 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 13 23:48:52.596766 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 13 23:48:52.601076 systemd[1]: Starting systemd-logind.service - User Login Management...
May 13 23:48:52.604442 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 13 23:48:52.604978 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 13 23:48:52.605661 systemd[1]: Starting update-engine.service - Update Engine...
May 13 23:48:52.609858 extend-filesystems[1439]: Found loop3
May 13 23:48:52.609858 extend-filesystems[1439]: Found loop4
May 13 23:48:52.609858 extend-filesystems[1439]: Found loop5
May 13 23:48:52.609858 extend-filesystems[1439]: Found vda
May 13 23:48:52.609858 extend-filesystems[1439]: Found vda1
May 13 23:48:52.609858 extend-filesystems[1439]: Found vda2
May 13 23:48:52.609858 extend-filesystems[1439]: Found vda3
May 13 23:48:52.609858 extend-filesystems[1439]: Found usr
May 13 23:48:52.609858 extend-filesystems[1439]: Found vda4
May 13 23:48:52.609858 extend-filesystems[1439]: Found vda6
May 13 23:48:52.609858 extend-filesystems[1439]: Found vda7
May 13 23:48:52.629367 extend-filesystems[1439]: Found vda9
May 13 23:48:52.629367 extend-filesystems[1439]: Checking size of /dev/vda9
May 13 23:48:52.612362 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 13 23:48:52.615098 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 13 23:48:52.620940 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 13 23:48:52.624655 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 13 23:48:52.631950 jq[1456]: true
May 13 23:48:52.624835 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 13 23:48:52.625093 systemd[1]: motdgen.service: Deactivated successfully.
May 13 23:48:52.625253 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 13 23:48:52.640634 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 13 23:48:52.640830 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 13 23:48:52.652945 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 13 23:48:52.654139 jq[1460]: true
May 13 23:48:52.659701 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 13 23:48:52.659755 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 13 23:48:52.662634 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 13 23:48:52.662661 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 13 23:48:52.665623 tar[1459]: linux-arm64/helm
May 13 23:48:52.671960 update_engine[1453]: I20250513 23:48:52.666566 1453 main.cc:92] Flatcar Update Engine starting
May 13 23:48:52.673965 systemd[1]: Started update-engine.service - Update Engine.
May 13 23:48:52.677422 update_engine[1453]: I20250513 23:48:52.674047 1453 update_check_scheduler.cc:74] Next update check in 4m22s
May 13 23:48:52.677648 extend-filesystems[1439]: Resized partition /dev/vda9
May 13 23:48:52.677938 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 13 23:48:52.693871 extend-filesystems[1476]: resize2fs 1.47.2 (1-Jan-2025)
May 13 23:48:52.706257 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1380)
May 13 23:48:52.706311 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 13 23:48:52.701628 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (Power Button)
May 13 23:48:52.706783 systemd-logind[1450]: New seat seat0.
May 13 23:48:52.708534 systemd[1]: Started systemd-logind.service - User Login Management.
May 13 23:48:52.790325 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 13 23:48:52.803661 locksmithd[1475]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 13 23:48:52.865288 extend-filesystems[1476]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 13 23:48:52.865288 extend-filesystems[1476]: old_desc_blocks = 1, new_desc_blocks = 1
May 13 23:48:52.865288 extend-filesystems[1476]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 13 23:48:52.868122 extend-filesystems[1439]: Resized filesystem in /dev/vda9
May 13 23:48:52.866476 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 13 23:48:52.866727 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 13 23:48:52.874316 bash[1491]: Updated "/home/core/.ssh/authorized_keys"
May 13 23:48:52.876072 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 13 23:48:52.878332 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 13 23:48:52.930629 containerd[1461]: time="2025-05-13T23:48:52Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 13 23:48:52.931598 containerd[1461]: time="2025-05-13T23:48:52.931558616Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
May 13 23:48:52.938204 sshd_keygen[1455]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 13 23:48:52.943763 containerd[1461]: time="2025-05-13T23:48:52.943713336Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.28µs"
May 13 23:48:52.943763 containerd[1461]: time="2025-05-13T23:48:52.943756336Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 13 23:48:52.943884 containerd[1461]: time="2025-05-13T23:48:52.943778936Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 13 23:48:52.943962 containerd[1461]: time="2025-05-13T23:48:52.943939416Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 13 23:48:52.944001 containerd[1461]: time="2025-05-13T23:48:52.943961616Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 13 23:48:52.944001 containerd[1461]: time="2025-05-13T23:48:52.943991736Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 13 23:48:52.944078 containerd[1461]: time="2025-05-13T23:48:52.944054216Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 13 23:48:52.944078 containerd[1461]: time="2025-05-13T23:48:52.944071936Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 23:48:52.944432 containerd[1461]: time="2025-05-13T23:48:52.944408416Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 23:48:52.944432 containerd[1461]: time="2025-05-13T23:48:52.944430496Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 23:48:52.944491 containerd[1461]: time="2025-05-13T23:48:52.944443336Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 23:48:52.944491 containerd[1461]: time="2025-05-13T23:48:52.944451576Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 13 23:48:52.944558 containerd[1461]: time="2025-05-13T23:48:52.944539056Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 13 23:48:52.944767 containerd[1461]: time="2025-05-13T23:48:52.944744936Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 23:48:52.944799 containerd[1461]: time="2025-05-13T23:48:52.944778936Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 23:48:52.944799 containerd[1461]: time="2025-05-13T23:48:52.944789096Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 13 23:48:52.944833 containerd[1461]: time="2025-05-13T23:48:52.944820136Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 13 23:48:52.945120 containerd[1461]: time="2025-05-13T23:48:52.945100456Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 13 23:48:52.945184 containerd[1461]: time="2025-05-13T23:48:52.945168136Z" level=info msg="metadata content store policy set" policy=shared
May 13 23:48:52.948349 containerd[1461]: time="2025-05-13T23:48:52.948309616Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 13 23:48:52.948445 containerd[1461]: time="2025-05-13T23:48:52.948377696Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 13 23:48:52.948445 containerd[1461]: time="2025-05-13T23:48:52.948394976Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 13 23:48:52.948445 containerd[1461]: time="2025-05-13T23:48:52.948408496Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 13 23:48:52.948497 containerd[1461]: time="2025-05-13T23:48:52.948452776Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 13 23:48:52.948497 containerd[1461]: time="2025-05-13T23:48:52.948469656Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 13 23:48:52.948497 containerd[1461]: time="2025-05-13T23:48:52.948482576Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 13 23:48:52.948563 containerd[1461]: time="2025-05-13T23:48:52.948496656Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 13 23:48:52.948563 containerd[1461]: time="2025-05-13T23:48:52.948508256Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 13 23:48:52.948563 containerd[1461]: time="2025-05-13T23:48:52.948531176Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 13 23:48:52.948563 containerd[1461]: time="2025-05-13T23:48:52.948544416Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 13 23:48:52.948563 containerd[1461]: time="2025-05-13T23:48:52.948557136Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 13 23:48:52.948721 containerd[1461]: time="2025-05-13T23:48:52.948697776Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 13 23:48:52.948751 containerd[1461]: time="2025-05-13T23:48:52.948725056Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 13 23:48:52.948751 containerd[1461]: time="2025-05-13T23:48:52.948742616Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 13 23:48:52.948802 containerd[1461]: time="2025-05-13T23:48:52.948755816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 13 23:48:52.948802 containerd[1461]: time="2025-05-13T23:48:52.948768416Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 13 23:48:52.948836 containerd[1461]: time="2025-05-13T23:48:52.948803376Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 13 23:48:52.948836 containerd[1461]: time="2025-05-13T23:48:52.948818296Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 13 23:48:52.948836 containerd[1461]: time="2025-05-13T23:48:52.948830736Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 13 23:48:52.948900 containerd[1461]: time="2025-05-13T23:48:52.948843736Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 13 23:48:52.948900 containerd[1461]: time="2025-05-13T23:48:52.948855736Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 13 23:48:52.948900 containerd[1461]: time="2025-05-13T23:48:52.948866496Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 13 23:48:52.949344 containerd[1461]: time="2025-05-13T23:48:52.949324496Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 13 23:48:52.949388 containerd[1461]: time="2025-05-13T23:48:52.949346776Z" level=info msg="Start snapshots syncer"
May 13 23:48:52.949388 containerd[1461]: time="2025-05-13T23:48:52.949384176Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 13 23:48:52.950419 containerd[1461]: time="2025-05-13T23:48:52.950368696Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 13 23:48:52.950550 containerd[1461]: time="2025-05-13T23:48:52.950460256Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 13 23:48:52.951038 containerd[1461]: time="2025-05-13T23:48:52.951005656Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 13 23:48:52.951183 containerd[1461]: time="2025-05-13T23:48:52.951161616Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 13 23:48:52.951215 containerd[1461]: time="2025-05-13T23:48:52.951197896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 13 23:48:52.951234 containerd[1461]: time="2025-05-13T23:48:52.951215536Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 13 23:48:52.951234 containerd[1461]: time="2025-05-13T23:48:52.951227976Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 13 23:48:52.951281 containerd[1461]: time="2025-05-13T23:48:52.951245736Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 13 23:48:52.951281 containerd[1461]: time="2025-05-13T23:48:52.951261536Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 13 23:48:52.951320 containerd[1461]: time="2025-05-13T23:48:52.951297896Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 13 23:48:52.951347 containerd[1461]: time="2025-05-13T23:48:52.951333256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 13 23:48:52.951366 containerd[1461]: time="2025-05-13T23:48:52.951351976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 13 23:48:52.951384 containerd[1461]: time="2025-05-13T23:48:52.951366416Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 13 23:48:52.951565 containerd[1461]: time="2025-05-13T23:48:52.951408536Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 13 23:48:52.951565 containerd[1461]: time="2025-05-13T23:48:52.951432816Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 13 23:48:52.951565 containerd[1461]: time="2025-05-13T23:48:52.951446216Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 13 23:48:52.951565 containerd[1461]: time="2025-05-13T23:48:52.951460016Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 13 23:48:52.951565 containerd[1461]: time="2025-05-13T23:48:52.951468656Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 13 23:48:52.951565 containerd[1461]: time="2025-05-13T23:48:52.951482416Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 13 23:48:52.951565 containerd[1461]: time="2025-05-13T23:48:52.951498256Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 13 23:48:52.951696 containerd[1461]: time="2025-05-13T23:48:52.951589536Z" level=info msg="runtime interface created"
May 13 23:48:52.951696 containerd[1461]: time="2025-05-13T23:48:52.951599696Z" level=info msg="created NRI interface"
May 13 23:48:52.951696 containerd[1461]: time="2025-05-13T23:48:52.951609096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 13 23:48:52.951696 containerd[1461]: time="2025-05-13T23:48:52.951627776Z" level=info msg="Connect containerd service"
May 13 23:48:52.951696 containerd[1461]: time="2025-05-13T23:48:52.951661336Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 13 23:48:52.952705 containerd[1461]: time="2025-05-13T23:48:52.952600816Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 23:48:52.959186 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 13 23:48:52.965606 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 13 23:48:52.989606 systemd[1]: issuegen.service: Deactivated successfully.
May 13 23:48:52.989866 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 13 23:48:52.993956 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 13 23:48:53.015426 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 13 23:48:53.021902 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 13 23:48:53.025701 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 13 23:48:53.028087 systemd[1]: Reached target getty.target - Login Prompts.
May 13 23:48:53.103312 containerd[1461]: time="2025-05-13T23:48:53.101523256Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 13 23:48:53.103312 containerd[1461]: time="2025-05-13T23:48:53.101595256Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 13 23:48:53.103312 containerd[1461]: time="2025-05-13T23:48:53.101627216Z" level=info msg="Start subscribing containerd event"
May 13 23:48:53.103312 containerd[1461]: time="2025-05-13T23:48:53.101665976Z" level=info msg="Start recovering state"
May 13 23:48:53.103312 containerd[1461]: time="2025-05-13T23:48:53.101751936Z" level=info msg="Start event monitor"
May 13 23:48:53.103312 containerd[1461]: time="2025-05-13T23:48:53.101769536Z" level=info msg="Start cni network conf syncer for default"
May 13 23:48:53.103312 containerd[1461]: time="2025-05-13T23:48:53.101778256Z" level=info msg="Start streaming server"
May 13 23:48:53.103312 containerd[1461]: time="2025-05-13T23:48:53.101786736Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 13 23:48:53.103312 containerd[1461]: time="2025-05-13T23:48:53.101793096Z" level=info msg="runtime interface starting up..."
May 13 23:48:53.103312 containerd[1461]: time="2025-05-13T23:48:53.101799416Z" level=info msg="starting plugins..."
May 13 23:48:53.103312 containerd[1461]: time="2025-05-13T23:48:53.101814496Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 13 23:48:53.103312 containerd[1461]: time="2025-05-13T23:48:53.101941096Z" level=info msg="containerd successfully booted in 0.171980s"
May 13 23:48:53.102075 systemd[1]: Started containerd.service - containerd container runtime.
May 13 23:48:53.179181 tar[1459]: linux-arm64/LICENSE
May 13 23:48:53.179181 tar[1459]: linux-arm64/README.md
May 13 23:48:53.204104 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 13 23:48:53.600527 systemd-networkd[1405]: eth0: Gained IPv6LL
May 13 23:48:53.605330 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 13 23:48:53.607141 systemd[1]: Reached target network-online.target - Network is Online.
May 13 23:48:53.610250 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 13 23:48:53.613009 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:48:53.615298 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 13 23:48:53.652544 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 13 23:48:53.656006 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 13 23:48:53.656484 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 13 23:48:53.660067 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 13 23:48:54.371266 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:48:54.374460 systemd[1]: Reached target multi-user.target - Multi-User System.
May 13 23:48:54.379329 systemd[1]: Startup finished in 603ms (kernel) + 5.538s (initrd) + 3.693s (userspace) = 9.835s.
May 13 23:48:54.390924 (kubelet)[1563]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 23:48:55.260495 kubelet[1563]: E0513 23:48:55.260430 1563 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:48:55.263303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:48:55.263461 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:48:55.263955 systemd[1]: kubelet.service: Consumed 1.092s CPU time, 234.4M memory peak.
May 13 23:48:57.820511 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 13 23:48:57.821888 systemd[1]: Started sshd@0-10.0.0.85:22-10.0.0.1:46726.service - OpenSSH per-connection server daemon (10.0.0.1:46726).
May 13 23:48:57.909392 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 46726 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:48:57.911376 sshd-session[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:57.918037 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 13 23:48:57.919025 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 13 23:48:57.923825 systemd-logind[1450]: New session 1 of user core.
May 13 23:48:57.940106 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 13 23:48:57.944568 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 13 23:48:57.984468 (systemd)[1581]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 13 23:48:57.987168 systemd-logind[1450]: New session c1 of user core.
May 13 23:48:58.099970 systemd[1581]: Queued start job for default target default.target.
May 13 23:48:58.112293 systemd[1581]: Created slice app.slice - User Application Slice.
May 13 23:48:58.112322 systemd[1581]: Reached target paths.target - Paths.
May 13 23:48:58.112358 systemd[1581]: Reached target timers.target - Timers.
May 13 23:48:58.113597 systemd[1581]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 13 23:48:58.123052 systemd[1581]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 13 23:48:58.123121 systemd[1581]: Reached target sockets.target - Sockets.
May 13 23:48:58.123162 systemd[1581]: Reached target basic.target - Basic System.
May 13 23:48:58.123191 systemd[1581]: Reached target default.target - Main User Target.
May 13 23:48:58.123216 systemd[1581]: Startup finished in 129ms.
May 13 23:48:58.123390 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 23:48:58.124684 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 23:48:58.189788 systemd[1]: Started sshd@1-10.0.0.85:22-10.0.0.1:46742.service - OpenSSH per-connection server daemon (10.0.0.1:46742). May 13 23:48:58.249013 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 46742 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:48:58.250391 sshd-session[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:58.254644 systemd-logind[1450]: New session 2 of user core. May 13 23:48:58.268488 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 23:48:58.321347 sshd[1594]: Connection closed by 10.0.0.1 port 46742 May 13 23:48:58.321185 sshd-session[1592]: pam_unix(sshd:session): session closed for user core May 13 23:48:58.341713 systemd[1]: sshd@1-10.0.0.85:22-10.0.0.1:46742.service: Deactivated successfully. May 13 23:48:58.343079 systemd[1]: session-2.scope: Deactivated successfully. May 13 23:48:58.343809 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. May 13 23:48:58.345483 systemd[1]: Started sshd@2-10.0.0.85:22-10.0.0.1:46758.service - OpenSSH per-connection server daemon (10.0.0.1:46758). May 13 23:48:58.346228 systemd-logind[1450]: Removed session 2. May 13 23:48:58.390560 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 46758 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:48:58.391875 sshd-session[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:58.396226 systemd-logind[1450]: New session 3 of user core. May 13 23:48:58.408448 systemd[1]: Started session-3.scope - Session 3 of User core. 
May 13 23:48:58.457039 sshd[1602]: Connection closed by 10.0.0.1 port 46758 May 13 23:48:58.457514 sshd-session[1599]: pam_unix(sshd:session): session closed for user core May 13 23:48:58.467224 systemd[1]: sshd@2-10.0.0.85:22-10.0.0.1:46758.service: Deactivated successfully. May 13 23:48:58.468661 systemd[1]: session-3.scope: Deactivated successfully. May 13 23:48:58.470413 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit. May 13 23:48:58.471433 systemd[1]: Started sshd@3-10.0.0.85:22-10.0.0.1:46764.service - OpenSSH per-connection server daemon (10.0.0.1:46764). May 13 23:48:58.472550 systemd-logind[1450]: Removed session 3. May 13 23:48:58.523308 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 46764 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:48:58.524576 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:58.528626 systemd-logind[1450]: New session 4 of user core. May 13 23:48:58.539422 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 23:48:58.590778 sshd[1610]: Connection closed by 10.0.0.1 port 46764 May 13 23:48:58.591168 sshd-session[1607]: pam_unix(sshd:session): session closed for user core May 13 23:48:58.601249 systemd[1]: sshd@3-10.0.0.85:22-10.0.0.1:46764.service: Deactivated successfully. May 13 23:48:58.603711 systemd[1]: session-4.scope: Deactivated successfully. May 13 23:48:58.604894 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. May 13 23:48:58.605999 systemd[1]: Started sshd@4-10.0.0.85:22-10.0.0.1:46778.service - OpenSSH per-connection server daemon (10.0.0.1:46778). May 13 23:48:58.606740 systemd-logind[1450]: Removed session 4. 
May 13 23:48:58.654430 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 46778 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:48:58.655715 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:58.659527 systemd-logind[1450]: New session 5 of user core. May 13 23:48:58.675469 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 23:48:58.735261 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 23:48:58.735918 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:48:58.750224 sudo[1619]: pam_unix(sudo:session): session closed for user root May 13 23:48:58.751789 sshd[1618]: Connection closed by 10.0.0.1 port 46778 May 13 23:48:58.752373 sshd-session[1615]: pam_unix(sshd:session): session closed for user core May 13 23:48:58.771582 systemd[1]: sshd@4-10.0.0.85:22-10.0.0.1:46778.service: Deactivated successfully. May 13 23:48:58.773105 systemd[1]: session-5.scope: Deactivated successfully. May 13 23:48:58.773907 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. May 13 23:48:58.775704 systemd[1]: Started sshd@5-10.0.0.85:22-10.0.0.1:46784.service - OpenSSH per-connection server daemon (10.0.0.1:46784). May 13 23:48:58.777538 systemd-logind[1450]: Removed session 5. May 13 23:48:58.825427 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 46784 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:48:58.826877 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:58.832166 systemd-logind[1450]: New session 6 of user core. May 13 23:48:58.844510 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 13 23:48:58.900491 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 23:48:58.900763 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:48:58.904504 sudo[1629]: pam_unix(sudo:session): session closed for user root May 13 23:48:58.911542 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 13 23:48:58.911842 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:48:58.921598 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:48:58.960692 augenrules[1651]: No rules May 13 23:48:58.962126 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:48:58.962412 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:48:58.964548 sudo[1628]: pam_unix(sudo:session): session closed for user root May 13 23:48:58.967428 sshd[1627]: Connection closed by 10.0.0.1 port 46784 May 13 23:48:58.967847 sshd-session[1624]: pam_unix(sshd:session): session closed for user core May 13 23:48:58.980290 systemd[1]: sshd@5-10.0.0.85:22-10.0.0.1:46784.service: Deactivated successfully. May 13 23:48:58.982499 systemd[1]: session-6.scope: Deactivated successfully. May 13 23:48:58.983241 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. May 13 23:48:58.985587 systemd[1]: Started sshd@6-10.0.0.85:22-10.0.0.1:46794.service - OpenSSH per-connection server daemon (10.0.0.1:46794). May 13 23:48:58.986508 systemd-logind[1450]: Removed session 6. May 13 23:48:59.037854 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 46794 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:48:59.039101 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:59.043458 systemd-logind[1450]: New session 7 of user core. 
May 13 23:48:59.054461 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 23:48:59.105220 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 23:48:59.105846 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:48:59.552508 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 23:48:59.568629 (dockerd)[1683]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 23:48:59.837074 dockerd[1683]: time="2025-05-13T23:48:59.836870136Z" level=info msg="Starting up" May 13 23:48:59.838450 dockerd[1683]: time="2025-05-13T23:48:59.838409576Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 13 23:48:59.929646 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3655836388-merged.mount: Deactivated successfully. May 13 23:48:59.952847 dockerd[1683]: time="2025-05-13T23:48:59.952798176Z" level=info msg="Loading containers: start." May 13 23:49:00.134309 kernel: Initializing XFRM netlink socket May 13 23:49:00.194935 systemd-networkd[1405]: docker0: Link UP May 13 23:49:00.256793 dockerd[1683]: time="2025-05-13T23:49:00.256737416Z" level=info msg="Loading containers: done." 
May 13 23:49:00.271796 dockerd[1683]: time="2025-05-13T23:49:00.271735216Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 23:49:00.271979 dockerd[1683]: time="2025-05-13T23:49:00.271836736Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 13 23:49:00.272031 dockerd[1683]: time="2025-05-13T23:49:00.272012816Z" level=info msg="Daemon has completed initialization" May 13 23:49:00.310013 dockerd[1683]: time="2025-05-13T23:49:00.309929776Z" level=info msg="API listen on /run/docker.sock" May 13 23:49:00.310116 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 23:49:00.963023 containerd[1461]: time="2025-05-13T23:49:00.962940456Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 13 23:49:01.628882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1052332962.mount: Deactivated successfully. 
May 13 23:49:02.493355 containerd[1461]: time="2025-05-13T23:49:02.493302776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:02.493965 containerd[1461]: time="2025-05-13T23:49:02.493904896Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554610" May 13 23:49:02.497805 containerd[1461]: time="2025-05-13T23:49:02.497763096Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:02.500916 containerd[1461]: time="2025-05-13T23:49:02.500875456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:02.502158 containerd[1461]: time="2025-05-13T23:49:02.502105496Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 1.53912352s" May 13 23:49:02.502158 containerd[1461]: time="2025-05-13T23:49:02.502147896Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" May 13 23:49:02.502900 containerd[1461]: time="2025-05-13T23:49:02.502870376Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 13 23:49:03.485155 containerd[1461]: time="2025-05-13T23:49:03.485101176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:03.486723 containerd[1461]: time="2025-05-13T23:49:03.486508256Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458980" May 13 23:49:03.490286 containerd[1461]: time="2025-05-13T23:49:03.489996176Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:03.492282 containerd[1461]: time="2025-05-13T23:49:03.492198736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:03.493428 containerd[1461]: time="2025-05-13T23:49:03.493391096Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 990.4706ms" May 13 23:49:03.493783 containerd[1461]: time="2025-05-13T23:49:03.493527896Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" May 13 23:49:03.494393 containerd[1461]: time="2025-05-13T23:49:03.494359136Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 13 23:49:04.497102 containerd[1461]: time="2025-05-13T23:49:04.497031976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:04.498616 containerd[1461]: time="2025-05-13T23:49:04.498563336Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125815" May 13 23:49:04.499587 containerd[1461]: time="2025-05-13T23:49:04.499526056Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:04.501897 containerd[1461]: time="2025-05-13T23:49:04.501853256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:04.503629 containerd[1461]: time="2025-05-13T23:49:04.503596376Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.00919048s" May 13 23:49:04.503689 containerd[1461]: time="2025-05-13T23:49:04.503630616Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" May 13 23:49:04.504234 containerd[1461]: time="2025-05-13T23:49:04.504045416Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 13 23:49:05.395093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2912342676.mount: Deactivated successfully. May 13 23:49:05.397015 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 23:49:05.399440 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:49:05.524940 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 23:49:05.529015 (kubelet)[1966]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:49:05.570359 kubelet[1966]: E0513 23:49:05.570300 1966 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:49:05.573358 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:49:05.573492 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:49:05.575436 systemd[1]: kubelet.service: Consumed 140ms CPU time, 97.1M memory peak. May 13 23:49:05.875077 containerd[1461]: time="2025-05-13T23:49:05.874721376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:05.876029 containerd[1461]: time="2025-05-13T23:49:05.875975816Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871919" May 13 23:49:05.876832 containerd[1461]: time="2025-05-13T23:49:05.876783216Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:05.879330 containerd[1461]: time="2025-05-13T23:49:05.879243136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:05.879684 containerd[1461]: time="2025-05-13T23:49:05.879646136Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id 
\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.37556936s" May 13 23:49:05.879684 containerd[1461]: time="2025-05-13T23:49:05.879679936Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 13 23:49:05.880227 containerd[1461]: time="2025-05-13T23:49:05.880184296Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 23:49:06.362985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2351648686.mount: Deactivated successfully. May 13 23:49:07.059168 containerd[1461]: time="2025-05-13T23:49:07.059112896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:07.060264 containerd[1461]: time="2025-05-13T23:49:07.060007256Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" May 13 23:49:07.061438 containerd[1461]: time="2025-05-13T23:49:07.061401096Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:07.064485 containerd[1461]: time="2025-05-13T23:49:07.064448016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:07.065572 containerd[1461]: time="2025-05-13T23:49:07.065462376Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.18523828s" May 13 23:49:07.065572 containerd[1461]: time="2025-05-13T23:49:07.065505256Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 13 23:49:07.065938 containerd[1461]: time="2025-05-13T23:49:07.065907656Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 23:49:07.532542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount824872082.mount: Deactivated successfully. May 13 23:49:07.538215 containerd[1461]: time="2025-05-13T23:49:07.537480216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:49:07.538215 containerd[1461]: time="2025-05-13T23:49:07.538167496Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 13 23:49:07.538845 containerd[1461]: time="2025-05-13T23:49:07.538818416Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:49:07.540857 containerd[1461]: time="2025-05-13T23:49:07.540819816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:49:07.542026 containerd[1461]: time="2025-05-13T23:49:07.541660536Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 475.71964ms" May 13 23:49:07.542132 containerd[1461]: time="2025-05-13T23:49:07.542116696Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 13 23:49:07.542872 containerd[1461]: time="2025-05-13T23:49:07.542854376Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 13 23:49:08.069100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3764602024.mount: Deactivated successfully. May 13 23:49:09.504896 containerd[1461]: time="2025-05-13T23:49:09.504839576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:09.506570 containerd[1461]: time="2025-05-13T23:49:09.506520496Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" May 13 23:49:09.507249 containerd[1461]: time="2025-05-13T23:49:09.507215496Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:09.510263 containerd[1461]: time="2025-05-13T23:49:09.510223176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:09.511775 containerd[1461]: time="2025-05-13T23:49:09.511547936Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest 
\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.96855864s" May 13 23:49:09.511775 containerd[1461]: time="2025-05-13T23:49:09.511585976Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 13 23:49:13.382190 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:49:13.382485 systemd[1]: kubelet.service: Consumed 140ms CPU time, 97.1M memory peak. May 13 23:49:13.384660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:49:13.412017 systemd[1]: Reload requested from client PID 2107 ('systemctl') (unit session-7.scope)... May 13 23:49:13.412034 systemd[1]: Reloading... May 13 23:49:13.495463 zram_generator::config[2149]: No configuration found. May 13 23:49:13.602089 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:49:13.675818 systemd[1]: Reloading finished in 263 ms. May 13 23:49:13.728804 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 23:49:13.728879 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 23:49:13.729144 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:49:13.729194 systemd[1]: kubelet.service: Consumed 85ms CPU time, 82.3M memory peak. May 13 23:49:13.731460 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:49:13.838198 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 23:49:13.842626 (kubelet)[2197]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:49:13.877429 kubelet[2197]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:49:13.877429 kubelet[2197]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 23:49:13.877429 kubelet[2197]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:49:13.877796 kubelet[2197]: I0513 23:49:13.877661 2197 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:49:14.456365 kubelet[2197]: I0513 23:49:14.456319 2197 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 23:49:14.456365 kubelet[2197]: I0513 23:49:14.456352 2197 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:49:14.456623 kubelet[2197]: I0513 23:49:14.456597 2197 server.go:929] "Client rotation is on, will bootstrap in background" May 13 23:49:14.490847 kubelet[2197]: E0513 23:49:14.490803 2197 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.85:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" May 13 23:49:14.491702 kubelet[2197]: I0513 
23:49:14.491675 2197 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:49:14.509015 kubelet[2197]: I0513 23:49:14.508978 2197 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 23:49:14.513182 kubelet[2197]: I0513 23:49:14.513140 2197 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 23:49:14.576791 kubelet[2197]: I0513 23:49:14.576743 2197 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 23:49:14.577035 kubelet[2197]: I0513 23:49:14.576946 2197 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:49:14.577185 kubelet[2197]: I0513 23:49:14.576979 2197 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":
{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 23:49:14.577370 kubelet[2197]: I0513 23:49:14.577355 2197 topology_manager.go:138] "Creating topology manager with none policy" May 13 23:49:14.577370 kubelet[2197]: I0513 23:49:14.577371 2197 container_manager_linux.go:300] "Creating device plugin manager" May 13 23:49:14.577598 kubelet[2197]: I0513 23:49:14.577572 2197 state_mem.go:36] "Initialized new in-memory state store" May 13 23:49:14.583380 kubelet[2197]: I0513 23:49:14.583338 2197 kubelet.go:408] "Attempting to sync node with API server" May 13 23:49:14.583501 kubelet[2197]: I0513 23:49:14.583383 2197 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:49:14.583580 kubelet[2197]: I0513 23:49:14.583557 2197 kubelet.go:314] "Adding apiserver pod source" May 13 23:49:14.583580 kubelet[2197]: I0513 23:49:14.583573 2197 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:49:14.586931 kubelet[2197]: W0513 23:49:14.586791 2197 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused May 13 23:49:14.586931 kubelet[2197]: E0513 23:49:14.586863 2197 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" May 13 23:49:14.587465 kubelet[2197]: W0513 23:49:14.587395 2197 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused May 13 23:49:14.587465 kubelet[2197]: E0513 23:49:14.587445 2197 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" May 13 23:49:14.590609 kubelet[2197]: I0513 23:49:14.590572 2197 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:49:14.592699 kubelet[2197]: I0513 23:49:14.592665 2197 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:49:14.596035 kubelet[2197]: W0513 23:49:14.596001 2197 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 13 23:49:14.596936 kubelet[2197]: I0513 23:49:14.596912 2197 server.go:1269] "Started kubelet"
May 13 23:49:14.597177 kubelet[2197]: I0513 23:49:14.597003 2197 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 13 23:49:14.597669 kubelet[2197]: I0513 23:49:14.597606 2197 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 23:49:14.597971 kubelet[2197]: I0513 23:49:14.597946 2197 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 23:49:14.598864 kubelet[2197]: I0513 23:49:14.598354 2197 server.go:460] "Adding debug handlers to kubelet server"
May 13 23:49:14.600472 kubelet[2197]: I0513 23:49:14.600448 2197 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 23:49:14.600645 kubelet[2197]: I0513 23:49:14.600583 2197 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 13 23:49:14.602024 kubelet[2197]: E0513 23:49:14.601940 2197 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 23:49:14.602260 kubelet[2197]: E0513 23:49:14.602176 2197 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:49:14.602430 kubelet[2197]: I0513 23:49:14.602374 2197 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 13 23:49:14.602471 kubelet[2197]: I0513 23:49:14.602451 2197 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 13 23:49:14.603163 kubelet[2197]: I0513 23:49:14.602506 2197 reconciler.go:26] "Reconciler: start to sync state"
May 13 23:49:14.603163 kubelet[2197]: W0513 23:49:14.602844 2197 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
May 13 23:49:14.603163 kubelet[2197]: E0513 23:49:14.602894 2197 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError"
May 13 23:49:14.603163 kubelet[2197]: I0513 23:49:14.602944 2197 factory.go:221] Registration of the systemd container factory successfully
May 13 23:49:14.603163 kubelet[2197]: I0513 23:49:14.603039 2197 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 23:49:14.603163 kubelet[2197]: E0513 23:49:14.603090 2197 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="200ms"
May 13 23:49:14.603369 kubelet[2197]: E0513 23:49:14.601721 2197 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.85:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.85:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f3b17f95bf278 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 23:49:14.596881016 +0000 UTC m=+0.750976881,LastTimestamp:2025-05-13 23:49:14.596881016 +0000 UTC m=+0.750976881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 13 23:49:14.604443 kubelet[2197]: I0513 23:49:14.604418 2197 factory.go:221] Registration of the containerd container factory successfully
May 13 23:49:14.616773 kubelet[2197]: I0513 23:49:14.616748 2197 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 13 23:49:14.617173 kubelet[2197]: I0513 23:49:14.616906 2197 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 13 23:49:14.617173 kubelet[2197]: I0513 23:49:14.616926 2197 state_mem.go:36] "Initialized new in-memory state store"
May 13 23:49:14.619082 kubelet[2197]: I0513 23:49:14.619058 2197 policy_none.go:49] "None policy: Start"
May 13 23:49:14.620495 kubelet[2197]: I0513 23:49:14.620442 2197 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 23:49:14.620882 kubelet[2197]: I0513 23:49:14.620862 2197 memory_manager.go:170] "Starting memorymanager" policy="None"
May 13 23:49:14.620956 kubelet[2197]: I0513 23:49:14.620894 2197 state_mem.go:35] "Initializing new in-memory state store"
May 13 23:49:14.622091 kubelet[2197]: I0513 23:49:14.622058 2197 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 23:49:14.622091 kubelet[2197]: I0513 23:49:14.622087 2197 status_manager.go:217] "Starting to sync pod status with apiserver"
May 13 23:49:14.622401 kubelet[2197]: I0513 23:49:14.622114 2197 kubelet.go:2321] "Starting kubelet main sync loop"
May 13 23:49:14.622401 kubelet[2197]: E0513 23:49:14.622167 2197 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 23:49:14.623089 kubelet[2197]: W0513 23:49:14.623017 2197 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
May 13 23:49:14.623165 kubelet[2197]: E0513 23:49:14.623085 2197 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError"
May 13 23:49:14.627317 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 13 23:49:14.649689 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 13 23:49:14.652616 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 13 23:49:14.662175 kubelet[2197]: I0513 23:49:14.662137 2197 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 23:49:14.662402 kubelet[2197]: I0513 23:49:14.662378 2197 eviction_manager.go:189] "Eviction manager: starting control loop"
May 13 23:49:14.662604 kubelet[2197]: I0513 23:49:14.662397 2197 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 13 23:49:14.663212 kubelet[2197]: I0513 23:49:14.663187 2197 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 23:49:14.664210 kubelet[2197]: E0513 23:49:14.664183 2197 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 13 23:49:14.734867 systemd[1]: Created slice kubepods-burstable-pod2165538040078efc46f3b69ef721391c.slice - libcontainer container kubepods-burstable-pod2165538040078efc46f3b69ef721391c.slice.
May 13 23:49:14.762384 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice.
May 13 23:49:14.763815 kubelet[2197]: I0513 23:49:14.763753 2197 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 13 23:49:14.764260 kubelet[2197]: E0513 23:49:14.764212 2197 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost"
May 13 23:49:14.768199 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice.
May 13 23:49:14.808909 kubelet[2197]: E0513 23:49:14.808849 2197 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="400ms"
May 13 23:49:14.909366 kubelet[2197]: I0513 23:49:14.909294 2197 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2165538040078efc46f3b69ef721391c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2165538040078efc46f3b69ef721391c\") " pod="kube-system/kube-apiserver-localhost"
May 13 23:49:14.909366 kubelet[2197]: I0513 23:49:14.909338 2197 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 23:49:14.909366 kubelet[2197]: I0513 23:49:14.909362 2197 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 23:49:14.909366 kubelet[2197]: I0513 23:49:14.909377 2197 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 23:49:14.909366 kubelet[2197]: I0513 23:49:14.909394 2197 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost"
May 13 23:49:14.909838 kubelet[2197]: I0513 23:49:14.909410 2197 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2165538040078efc46f3b69ef721391c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2165538040078efc46f3b69ef721391c\") " pod="kube-system/kube-apiserver-localhost"
May 13 23:49:14.909838 kubelet[2197]: I0513 23:49:14.909425 2197 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2165538040078efc46f3b69ef721391c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2165538040078efc46f3b69ef721391c\") " pod="kube-system/kube-apiserver-localhost"
May 13 23:49:14.909838 kubelet[2197]: I0513 23:49:14.909441 2197 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 23:49:14.909838 kubelet[2197]: I0513 23:49:14.909467 2197 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 23:49:14.965592 kubelet[2197]: I0513 23:49:14.965564 2197 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 13 23:49:14.965950 kubelet[2197]: E0513 23:49:14.965908 2197 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost"
May 13 23:49:15.060781 containerd[1461]: time="2025-05-13T23:49:15.060471016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2165538040078efc46f3b69ef721391c,Namespace:kube-system,Attempt:0,}"
May 13 23:49:15.066155 containerd[1461]: time="2025-05-13T23:49:15.066112736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}"
May 13 23:49:15.071089 containerd[1461]: time="2025-05-13T23:49:15.071045856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}"
May 13 23:49:15.131480 containerd[1461]: time="2025-05-13T23:49:15.131436056Z" level=info msg="connecting to shim fb9d9631f8ceae5d3b17742b31d407af196a33265c5dc96fe2090d974bfb841f" address="unix:///run/containerd/s/2e6be13bd329886d3c5bca010e0a4bf0838932af347d0ba534815501d79cfd8b" namespace=k8s.io protocol=ttrpc version=3
May 13 23:49:15.147846 containerd[1461]: time="2025-05-13T23:49:15.147802616Z" level=info msg="connecting to shim 6ca3f3ea90d8e7668d31ef3fb176ca010b17ff7e932b225ebd7c26c35de20b24" address="unix:///run/containerd/s/78f2416c1efd576479aacb55bd43c905b289df882317e552e17764458d41100f" namespace=k8s.io protocol=ttrpc version=3
May 13 23:49:15.153088 containerd[1461]: time="2025-05-13T23:49:15.153040056Z" level=info msg="connecting to shim 7e70fd6a0e79603bf6b68aebf6f9b1dab5ef5176cdad66fc7480d8d9f1849aa5" address="unix:///run/containerd/s/bf7f40b5084646394ea652fe28f5f1056c8f8013ad10d47869d2cd36488c11b7" namespace=k8s.io protocol=ttrpc version=3
May 13 23:49:15.165579 systemd[1]: Started cri-containerd-fb9d9631f8ceae5d3b17742b31d407af196a33265c5dc96fe2090d974bfb841f.scope - libcontainer container fb9d9631f8ceae5d3b17742b31d407af196a33265c5dc96fe2090d974bfb841f.
May 13 23:49:15.181478 systemd[1]: Started cri-containerd-6ca3f3ea90d8e7668d31ef3fb176ca010b17ff7e932b225ebd7c26c35de20b24.scope - libcontainer container 6ca3f3ea90d8e7668d31ef3fb176ca010b17ff7e932b225ebd7c26c35de20b24.
May 13 23:49:15.183246 systemd[1]: Started cri-containerd-7e70fd6a0e79603bf6b68aebf6f9b1dab5ef5176cdad66fc7480d8d9f1849aa5.scope - libcontainer container 7e70fd6a0e79603bf6b68aebf6f9b1dab5ef5176cdad66fc7480d8d9f1849aa5.
May 13 23:49:15.210370 kubelet[2197]: E0513 23:49:15.209697 2197 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="800ms"
May 13 23:49:15.221016 containerd[1461]: time="2025-05-13T23:49:15.220970576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2165538040078efc46f3b69ef721391c,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb9d9631f8ceae5d3b17742b31d407af196a33265c5dc96fe2090d974bfb841f\""
May 13 23:49:15.226629 containerd[1461]: time="2025-05-13T23:49:15.226586096Z" level=info msg="CreateContainer within sandbox \"fb9d9631f8ceae5d3b17742b31d407af196a33265c5dc96fe2090d974bfb841f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 13 23:49:15.228224 containerd[1461]: time="2025-05-13T23:49:15.228173776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e70fd6a0e79603bf6b68aebf6f9b1dab5ef5176cdad66fc7480d8d9f1849aa5\""
May 13 23:49:15.232660 containerd[1461]: time="2025-05-13T23:49:15.232500936Z" level=info msg="CreateContainer within sandbox \"7e70fd6a0e79603bf6b68aebf6f9b1dab5ef5176cdad66fc7480d8d9f1849aa5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 13 23:49:15.236323 containerd[1461]: time="2025-05-13T23:49:15.236239296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ca3f3ea90d8e7668d31ef3fb176ca010b17ff7e932b225ebd7c26c35de20b24\""
May 13 23:49:15.239133 containerd[1461]: time="2025-05-13T23:49:15.239099176Z" level=info msg="Container 203603c62e19f2223a54540c214531fe2f335cda4415f668ad73c85ab683c60d: CDI devices from CRI Config.CDIDevices: []"
May 13 23:49:15.239563 containerd[1461]: time="2025-05-13T23:49:15.239520816Z" level=info msg="CreateContainer within sandbox \"6ca3f3ea90d8e7668d31ef3fb176ca010b17ff7e932b225ebd7c26c35de20b24\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 13 23:49:15.253471 containerd[1461]: time="2025-05-13T23:49:15.253424776Z" level=info msg="CreateContainer within sandbox \"fb9d9631f8ceae5d3b17742b31d407af196a33265c5dc96fe2090d974bfb841f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"203603c62e19f2223a54540c214531fe2f335cda4415f668ad73c85ab683c60d\""
May 13 23:49:15.254410 containerd[1461]: time="2025-05-13T23:49:15.254380816Z" level=info msg="StartContainer for \"203603c62e19f2223a54540c214531fe2f335cda4415f668ad73c85ab683c60d\""
May 13 23:49:15.254621 containerd[1461]: time="2025-05-13T23:49:15.254410736Z" level=info msg="Container 9a4b4aa1f3e6c0fa036b492d9e7605b99894d42827b0448d8a7b9e59ebdb23b4: CDI devices from CRI Config.CDIDevices: []"
May 13 23:49:15.256640 containerd[1461]: time="2025-05-13T23:49:15.256569936Z" level=info msg="connecting to shim 203603c62e19f2223a54540c214531fe2f335cda4415f668ad73c85ab683c60d" address="unix:///run/containerd/s/2e6be13bd329886d3c5bca010e0a4bf0838932af347d0ba534815501d79cfd8b" protocol=ttrpc version=3
May 13 23:49:15.259655 containerd[1461]: time="2025-05-13T23:49:15.259608096Z" level=info msg="Container 3bfeb99b377ee7aac544626058044a016289e843cffd9c17af87d0c61766202c: CDI devices from CRI Config.CDIDevices: []"
May 13 23:49:15.264934 containerd[1461]: time="2025-05-13T23:49:15.264824776Z" level=info msg="CreateContainer within sandbox \"7e70fd6a0e79603bf6b68aebf6f9b1dab5ef5176cdad66fc7480d8d9f1849aa5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9a4b4aa1f3e6c0fa036b492d9e7605b99894d42827b0448d8a7b9e59ebdb23b4\""
May 13 23:49:15.265443 containerd[1461]: time="2025-05-13T23:49:15.265410456Z" level=info msg="StartContainer for \"9a4b4aa1f3e6c0fa036b492d9e7605b99894d42827b0448d8a7b9e59ebdb23b4\""
May 13 23:49:15.266698 containerd[1461]: time="2025-05-13T23:49:15.266666896Z" level=info msg="connecting to shim 9a4b4aa1f3e6c0fa036b492d9e7605b99894d42827b0448d8a7b9e59ebdb23b4" address="unix:///run/containerd/s/bf7f40b5084646394ea652fe28f5f1056c8f8013ad10d47869d2cd36488c11b7" protocol=ttrpc version=3
May 13 23:49:15.269358 containerd[1461]: time="2025-05-13T23:49:15.269319616Z" level=info msg="CreateContainer within sandbox \"6ca3f3ea90d8e7668d31ef3fb176ca010b17ff7e932b225ebd7c26c35de20b24\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3bfeb99b377ee7aac544626058044a016289e843cffd9c17af87d0c61766202c\""
May 13 23:49:15.269854 containerd[1461]: time="2025-05-13T23:49:15.269826816Z" level=info msg="StartContainer for \"3bfeb99b377ee7aac544626058044a016289e843cffd9c17af87d0c61766202c\""
May 13 23:49:15.270954 containerd[1461]: time="2025-05-13T23:49:15.270925976Z" level=info msg="connecting to shim 3bfeb99b377ee7aac544626058044a016289e843cffd9c17af87d0c61766202c" address="unix:///run/containerd/s/78f2416c1efd576479aacb55bd43c905b289df882317e552e17764458d41100f" protocol=ttrpc version=3
May 13 23:49:15.282501 systemd[1]: Started cri-containerd-203603c62e19f2223a54540c214531fe2f335cda4415f668ad73c85ab683c60d.scope - libcontainer container 203603c62e19f2223a54540c214531fe2f335cda4415f668ad73c85ab683c60d.
May 13 23:49:15.286397 systemd[1]: Started cri-containerd-9a4b4aa1f3e6c0fa036b492d9e7605b99894d42827b0448d8a7b9e59ebdb23b4.scope - libcontainer container 9a4b4aa1f3e6c0fa036b492d9e7605b99894d42827b0448d8a7b9e59ebdb23b4.
May 13 23:49:15.291526 systemd[1]: Started cri-containerd-3bfeb99b377ee7aac544626058044a016289e843cffd9c17af87d0c61766202c.scope - libcontainer container 3bfeb99b377ee7aac544626058044a016289e843cffd9c17af87d0c61766202c.
May 13 23:49:15.351060 containerd[1461]: time="2025-05-13T23:49:15.350816976Z" level=info msg="StartContainer for \"203603c62e19f2223a54540c214531fe2f335cda4415f668ad73c85ab683c60d\" returns successfully"
May 13 23:49:15.369284 kubelet[2197]: I0513 23:49:15.369216 2197 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 13 23:49:15.369575 kubelet[2197]: E0513 23:49:15.369544 2197 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost"
May 13 23:49:15.370537 containerd[1461]: time="2025-05-13T23:49:15.370501416Z" level=info msg="StartContainer for \"9a4b4aa1f3e6c0fa036b492d9e7605b99894d42827b0448d8a7b9e59ebdb23b4\" returns successfully"
May 13 23:49:15.377987 containerd[1461]: time="2025-05-13T23:49:15.377949496Z" level=info msg="StartContainer for \"3bfeb99b377ee7aac544626058044a016289e843cffd9c17af87d0c61766202c\" returns successfully"
May 13 23:49:15.405204 kubelet[2197]: W0513 23:49:15.405061 2197 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
May 13 23:49:15.406928 kubelet[2197]: E0513 23:49:15.406727 2197 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError"
May 13 23:49:15.453000 kubelet[2197]: W0513 23:49:15.452617 2197 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
May 13 23:49:15.453000 kubelet[2197]: E0513 23:49:15.452671 2197 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError"
May 13 23:49:15.474597 kubelet[2197]: E0513 23:49:15.474481 2197 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.85:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.85:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f3b17f95bf278 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 23:49:14.596881016 +0000 UTC m=+0.750976881,LastTimestamp:2025-05-13 23:49:14.596881016 +0000 UTC m=+0.750976881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 13 23:49:15.488115 kubelet[2197]: W0513 23:49:15.487987 2197 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
May 13 23:49:15.488115 kubelet[2197]: E0513 23:49:15.488059 2197 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError"
May 13 23:49:16.171634 kubelet[2197]: I0513 23:49:16.171457 2197 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 13 23:49:17.366139 kubelet[2197]: E0513 23:49:17.366085 2197 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 13 23:49:17.481029 kubelet[2197]: I0513 23:49:17.480981 2197 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 13 23:49:17.481029 kubelet[2197]: E0513 23:49:17.481025 2197 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
May 13 23:49:17.588687 kubelet[2197]: I0513 23:49:17.588567 2197 apiserver.go:52] "Watching apiserver"
May 13 23:49:17.602684 kubelet[2197]: I0513 23:49:17.602648 2197 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
May 13 23:49:19.832961 systemd[1]: Reload requested from client PID 2466 ('systemctl') (unit session-7.scope)...
May 13 23:49:19.832976 systemd[1]: Reloading...
May 13 23:49:19.910096 zram_generator::config[2510]: No configuration found.
May 13 23:49:20.013615 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:49:20.108716 systemd[1]: Reloading finished in 275 ms.
May 13 23:49:20.135312 kubelet[2197]: I0513 23:49:20.133661 2197 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 23:49:20.133864 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:49:20.144570 systemd[1]: kubelet.service: Deactivated successfully.
May 13 23:49:20.144868 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:49:20.144950 systemd[1]: kubelet.service: Consumed 1.056s CPU time, 119.6M memory peak.
May 13 23:49:20.147520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:49:20.283802 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:49:20.289009 (kubelet)[2552]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 23:49:20.346174 kubelet[2552]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 23:49:20.346174 kubelet[2552]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 13 23:49:20.346174 kubelet[2552]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 23:49:20.346606 kubelet[2552]: I0513 23:49:20.346225 2552 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 23:49:20.351925 kubelet[2552]: I0513 23:49:20.351869 2552 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 13 23:49:20.351925 kubelet[2552]: I0513 23:49:20.351916 2552 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 23:49:20.352165 kubelet[2552]: I0513 23:49:20.352135 2552 server.go:929] "Client rotation is on, will bootstrap in background"
May 13 23:49:20.353594 kubelet[2552]: I0513 23:49:20.353569 2552 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 13 23:49:20.356332 kubelet[2552]: I0513 23:49:20.356158 2552 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 23:49:20.360359 kubelet[2552]: I0513 23:49:20.360208 2552 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 13 23:49:20.365227 kubelet[2552]: I0513 23:49:20.363134 2552 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 23:49:20.365227 kubelet[2552]: I0513 23:49:20.363303 2552 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 13 23:49:20.365227 kubelet[2552]: I0513 23:49:20.363411 2552 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 23:49:20.365227 kubelet[2552]: I0513 23:49:20.363432 2552 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 13 23:49:20.365481 kubelet[2552]: I0513 23:49:20.363694 2552 topology_manager.go:138] "Creating topology manager with none policy"
May 13 23:49:20.365481 kubelet[2552]: I0513 23:49:20.363704 2552 container_manager_linux.go:300] "Creating device plugin manager"
May 13 23:49:20.365481 kubelet[2552]: I0513 23:49:20.363738 2552 state_mem.go:36] "Initialized new in-memory state store"
May 13 23:49:20.365481 kubelet[2552]: I0513 23:49:20.363842 2552 kubelet.go:408] "Attempting to sync node with API server"
May 13 23:49:20.365481 kubelet[2552]: I0513 23:49:20.363854 2552 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 23:49:20.365481 kubelet[2552]: I0513 23:49:20.363882 2552 kubelet.go:314] "Adding apiserver pod source"
May 13 23:49:20.365481 kubelet[2552]: I0513 23:49:20.363898 2552 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 23:49:20.366290 kubelet[2552]: I0513 23:49:20.366108 2552 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 13 23:49:20.371439 kubelet[2552]: I0513 23:49:20.366816 2552 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 23:49:20.371439 kubelet[2552]: I0513 23:49:20.367642 2552 server.go:1269] "Started kubelet"
May 13 23:49:20.371439 kubelet[2552]: I0513 23:49:20.367859 2552 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 13 23:49:20.371439 kubelet[2552]: I0513 23:49:20.367959 2552 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 23:49:20.371439 kubelet[2552]: I0513 23:49:20.369183 2552 server.go:460] "Adding debug handlers to kubelet server"
May 13 23:49:20.372041 kubelet[2552]: I0513 23:49:20.372016 2552 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 23:49:20.372806 kubelet[2552]: I0513 23:49:20.372770 2552 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 23:49:20.379309 kubelet[2552]: I0513 23:49:20.377617 2552 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 13 23:49:20.379309 kubelet[2552]: I0513 23:49:20.379124 2552 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 13 23:49:20.379309 kubelet[2552]: I0513 23:49:20.379294 2552 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 13 23:49:20.379473 kubelet[2552]: I0513 23:49:20.379452 2552 reconciler.go:26] "Reconciler: start to sync state"
May 13 23:49:20.382691 kubelet[2552]: I0513 23:49:20.381834 2552 factory.go:221] Registration of the systemd container factory successfully
May 13 23:49:20.382691 kubelet[2552]: I0513 23:49:20.381946 2552 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 23:49:20.382691 kubelet[2552]: E0513 23:49:20.382469 2552 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:49:20.384292 kubelet[2552]: E0513 23:49:20.384153 2552 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 23:49:20.386846 kubelet[2552]: I0513 23:49:20.386818 2552 factory.go:221] Registration of the containerd container factory successfully
May 13 23:49:20.393331 kubelet[2552]: I0513 23:49:20.393262 2552 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 23:49:20.394855 kubelet[2552]: I0513 23:49:20.394825 2552 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 23:49:20.394855 kubelet[2552]: I0513 23:49:20.394852 2552 status_manager.go:217] "Starting to sync pod status with apiserver"
May 13 23:49:20.394987 kubelet[2552]: I0513 23:49:20.394882 2552 kubelet.go:2321] "Starting kubelet main sync loop"
May 13 23:49:20.394987 kubelet[2552]: E0513 23:49:20.394923 2552 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 23:49:20.424124 kubelet[2552]: I0513 23:49:20.424097 2552 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 13 23:49:20.424333 kubelet[2552]: I0513 23:49:20.424316 2552 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 13 23:49:20.424407 kubelet[2552]: I0513 23:49:20.424397 2552 state_mem.go:36] "Initialized new in-memory state store"
May 13 23:49:20.424602 kubelet[2552]: I0513 23:49:20.424585 2552 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 13 23:49:20.424681 kubelet[2552]: I0513 23:49:20.424657 2552 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 13 23:49:20.424728 kubelet[2552]: I0513 23:49:20.424720 2552 policy_none.go:49] "None policy: Start"
May 13 23:49:20.425587 kubelet[2552]: I0513 23:49:20.425562 2552 memory_manager.go:170] "Starting memorymanager" policy="None"
May 13 23:49:20.425587 kubelet[2552]: I0513 23:49:20.425596 2552 state_mem.go:35] "Initializing new in-memory state store"
May 13 23:49:20.425765 kubelet[2552]: I0513 23:49:20.425748 2552 state_mem.go:75] "Updated machine memory state"
May 13 23:49:20.430301 kubelet[2552]: I0513 23:49:20.430258 2552 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 23:49:20.430652 kubelet[2552]: I0513 23:49:20.430471 2552 eviction_manager.go:189] "Eviction manager: starting control loop"
May 13 23:49:20.430652 kubelet[2552]: I0513 23:49:20.430489 2552 container_log_manager.go:189]
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:49:20.430736 kubelet[2552]: I0513 23:49:20.430677 2552 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:49:20.500748 kubelet[2552]: E0513 23:49:20.500712 2552 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 23:49:20.533533 kubelet[2552]: I0513 23:49:20.533502 2552 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 23:49:20.543145 kubelet[2552]: I0513 23:49:20.542479 2552 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 13 23:49:20.543145 kubelet[2552]: I0513 23:49:20.542554 2552 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 13 23:49:20.580079 kubelet[2552]: I0513 23:49:20.580032 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:49:20.580192 kubelet[2552]: I0513 23:49:20.580101 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:49:20.580192 kubelet[2552]: I0513 23:49:20.580173 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " 
pod="kube-system/kube-controller-manager-localhost" May 13 23:49:20.580267 kubelet[2552]: I0513 23:49:20.580196 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:49:20.580267 kubelet[2552]: I0513 23:49:20.580250 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 13 23:49:20.580326 kubelet[2552]: I0513 23:49:20.580266 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2165538040078efc46f3b69ef721391c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2165538040078efc46f3b69ef721391c\") " pod="kube-system/kube-apiserver-localhost" May 13 23:49:20.580326 kubelet[2552]: I0513 23:49:20.580316 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2165538040078efc46f3b69ef721391c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2165538040078efc46f3b69ef721391c\") " pod="kube-system/kube-apiserver-localhost" May 13 23:49:20.580367 kubelet[2552]: I0513 23:49:20.580332 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2165538040078efc46f3b69ef721391c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2165538040078efc46f3b69ef721391c\") " 
pod="kube-system/kube-apiserver-localhost" May 13 23:49:20.581020 kubelet[2552]: I0513 23:49:20.580994 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:49:20.850137 sudo[2584]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 13 23:49:20.850488 sudo[2584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 13 23:49:21.310379 sudo[2584]: pam_unix(sudo:session): session closed for user root May 13 23:49:21.365017 kubelet[2552]: I0513 23:49:21.364962 2552 apiserver.go:52] "Watching apiserver" May 13 23:49:21.380233 kubelet[2552]: I0513 23:49:21.380190 2552 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 23:49:21.416473 kubelet[2552]: E0513 23:49:21.416407 2552 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 23:49:21.432387 kubelet[2552]: I0513 23:49:21.432329 2552 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.432314176 podStartE2EDuration="1.432314176s" podCreationTimestamp="2025-05-13 23:49:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:49:21.431855696 +0000 UTC m=+1.139375961" watchObservedRunningTime="2025-05-13 23:49:21.432314176 +0000 UTC m=+1.139834481" May 13 23:49:21.456632 kubelet[2552]: I0513 23:49:21.456574 2552 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" 
podStartSLOduration=2.456555416 podStartE2EDuration="2.456555416s" podCreationTimestamp="2025-05-13 23:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:49:21.456458256 +0000 UTC m=+1.163978561" watchObservedRunningTime="2025-05-13 23:49:21.456555416 +0000 UTC m=+1.164075721" May 13 23:49:21.456961 kubelet[2552]: I0513 23:49:21.456685 2552 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.4566813760000001 podStartE2EDuration="1.456681376s" podCreationTimestamp="2025-05-13 23:49:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:49:21.448071136 +0000 UTC m=+1.155591481" watchObservedRunningTime="2025-05-13 23:49:21.456681376 +0000 UTC m=+1.164201681" May 13 23:49:23.271666 sudo[1663]: pam_unix(sudo:session): session closed for user root May 13 23:49:23.273442 sshd[1662]: Connection closed by 10.0.0.1 port 46794 May 13 23:49:23.274065 sshd-session[1659]: pam_unix(sshd:session): session closed for user core May 13 23:49:23.277743 systemd[1]: sshd@6-10.0.0.85:22-10.0.0.1:46794.service: Deactivated successfully. May 13 23:49:23.279715 systemd[1]: session-7.scope: Deactivated successfully. May 13 23:49:23.279898 systemd[1]: session-7.scope: Consumed 6.509s CPU time, 262.1M memory peak. May 13 23:49:23.280877 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit. May 13 23:49:23.281762 systemd-logind[1450]: Removed session 7. 
May 13 23:49:25.169156 kubelet[2552]: I0513 23:49:25.169012 2552 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 23:49:25.169667 kubelet[2552]: I0513 23:49:25.169511 2552 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 23:49:25.169701 containerd[1461]: time="2025-05-13T23:49:25.169333293Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 23:49:25.579507 systemd[1]: Created slice kubepods-besteffort-pod106af599_63b6_4fb8_b918_12d7880d4cbf.slice - libcontainer container kubepods-besteffort-pod106af599_63b6_4fb8_b918_12d7880d4cbf.slice. May 13 23:49:25.593337 systemd[1]: Created slice kubepods-burstable-pod84930ac2_c2b2_4a58_a8f2_948cf6a63376.slice - libcontainer container kubepods-burstable-pod84930ac2_c2b2_4a58_a8f2_948cf6a63376.slice. May 13 23:49:25.616618 kubelet[2552]: I0513 23:49:25.616577 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/84930ac2-c2b2-4a58-a8f2-948cf6a63376-cilium-config-path\") pod \"cilium-cp9p5\" (UID: \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " pod="kube-system/cilium-cp9p5" May 13 23:49:25.617403 kubelet[2552]: I0513 23:49:25.617365 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbrlg\" (UniqueName: \"kubernetes.io/projected/106af599-63b6-4fb8-b918-12d7880d4cbf-kube-api-access-tbrlg\") pod \"kube-proxy-kq789\" (UID: \"106af599-63b6-4fb8-b918-12d7880d4cbf\") " pod="kube-system/kube-proxy-kq789" May 13 23:49:25.617470 kubelet[2552]: I0513 23:49:25.617407 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-cilium-run\") pod \"cilium-cp9p5\" (UID: 
\"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " pod="kube-system/cilium-cp9p5" May 13 23:49:25.617470 kubelet[2552]: I0513 23:49:25.617426 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/84930ac2-c2b2-4a58-a8f2-948cf6a63376-clustermesh-secrets\") pod \"cilium-cp9p5\" (UID: \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " pod="kube-system/cilium-cp9p5" May 13 23:49:25.617470 kubelet[2552]: I0513 23:49:25.617443 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-etc-cni-netd\") pod \"cilium-cp9p5\" (UID: \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " pod="kube-system/cilium-cp9p5" May 13 23:49:25.617470 kubelet[2552]: I0513 23:49:25.617459 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-hostproc\") pod \"cilium-cp9p5\" (UID: \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " pod="kube-system/cilium-cp9p5" May 13 23:49:25.617577 kubelet[2552]: I0513 23:49:25.617474 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-cni-path\") pod \"cilium-cp9p5\" (UID: \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " pod="kube-system/cilium-cp9p5" May 13 23:49:25.617577 kubelet[2552]: I0513 23:49:25.617508 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-lib-modules\") pod \"cilium-cp9p5\" (UID: \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " pod="kube-system/cilium-cp9p5" May 13 23:49:25.617577 kubelet[2552]: I0513 23:49:25.617541 2552 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-xtables-lock\") pod \"cilium-cp9p5\" (UID: \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " pod="kube-system/cilium-cp9p5" May 13 23:49:25.617577 kubelet[2552]: I0513 23:49:25.617557 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/106af599-63b6-4fb8-b918-12d7880d4cbf-kube-proxy\") pod \"kube-proxy-kq789\" (UID: \"106af599-63b6-4fb8-b918-12d7880d4cbf\") " pod="kube-system/kube-proxy-kq789" May 13 23:49:25.617577 kubelet[2552]: I0513 23:49:25.617573 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-bpf-maps\") pod \"cilium-cp9p5\" (UID: \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " pod="kube-system/cilium-cp9p5" May 13 23:49:25.617736 kubelet[2552]: I0513 23:49:25.617594 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-host-proc-sys-net\") pod \"cilium-cp9p5\" (UID: \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " pod="kube-system/cilium-cp9p5" May 13 23:49:25.617736 kubelet[2552]: I0513 23:49:25.617610 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-host-proc-sys-kernel\") pod \"cilium-cp9p5\" (UID: \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " pod="kube-system/cilium-cp9p5" May 13 23:49:25.617736 kubelet[2552]: I0513 23:49:25.617625 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/106af599-63b6-4fb8-b918-12d7880d4cbf-xtables-lock\") pod \"kube-proxy-kq789\" (UID: \"106af599-63b6-4fb8-b918-12d7880d4cbf\") " pod="kube-system/kube-proxy-kq789" May 13 23:49:25.617736 kubelet[2552]: I0513 23:49:25.617641 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-cilium-cgroup\") pod \"cilium-cp9p5\" (UID: \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " pod="kube-system/cilium-cp9p5" May 13 23:49:25.617736 kubelet[2552]: I0513 23:49:25.617675 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/84930ac2-c2b2-4a58-a8f2-948cf6a63376-hubble-tls\") pod \"cilium-cp9p5\" (UID: \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " pod="kube-system/cilium-cp9p5" May 13 23:49:25.617736 kubelet[2552]: I0513 23:49:25.617733 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/106af599-63b6-4fb8-b918-12d7880d4cbf-lib-modules\") pod \"kube-proxy-kq789\" (UID: \"106af599-63b6-4fb8-b918-12d7880d4cbf\") " pod="kube-system/kube-proxy-kq789" May 13 23:49:25.617857 kubelet[2552]: I0513 23:49:25.617757 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhr5s\" (UniqueName: \"kubernetes.io/projected/84930ac2-c2b2-4a58-a8f2-948cf6a63376-kube-api-access-nhr5s\") pod \"cilium-cp9p5\" (UID: \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " pod="kube-system/cilium-cp9p5" May 13 23:49:25.730478 kubelet[2552]: E0513 23:49:25.730432 2552 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 13 23:49:25.730478 kubelet[2552]: E0513 23:49:25.730472 2552 projected.go:194] Error preparing data for 
projected volume kube-api-access-tbrlg for pod kube-system/kube-proxy-kq789: configmap "kube-root-ca.crt" not found May 13 23:49:25.730696 kubelet[2552]: E0513 23:49:25.730435 2552 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 13 23:49:25.730696 kubelet[2552]: E0513 23:49:25.730558 2552 projected.go:194] Error preparing data for projected volume kube-api-access-nhr5s for pod kube-system/cilium-cp9p5: configmap "kube-root-ca.crt" not found May 13 23:49:25.730696 kubelet[2552]: E0513 23:49:25.730539 2552 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106af599-63b6-4fb8-b918-12d7880d4cbf-kube-api-access-tbrlg podName:106af599-63b6-4fb8-b918-12d7880d4cbf nodeName:}" failed. No retries permitted until 2025-05-13 23:49:26.230513774 +0000 UTC m=+5.938034079 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tbrlg" (UniqueName: "kubernetes.io/projected/106af599-63b6-4fb8-b918-12d7880d4cbf-kube-api-access-tbrlg") pod "kube-proxy-kq789" (UID: "106af599-63b6-4fb8-b918-12d7880d4cbf") : configmap "kube-root-ca.crt" not found May 13 23:49:25.730696 kubelet[2552]: E0513 23:49:25.730607 2552 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/84930ac2-c2b2-4a58-a8f2-948cf6a63376-kube-api-access-nhr5s podName:84930ac2-c2b2-4a58-a8f2-948cf6a63376 nodeName:}" failed. No retries permitted until 2025-05-13 23:49:26.230594656 +0000 UTC m=+5.938114961 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nhr5s" (UniqueName: "kubernetes.io/projected/84930ac2-c2b2-4a58-a8f2-948cf6a63376-kube-api-access-nhr5s") pod "cilium-cp9p5" (UID: "84930ac2-c2b2-4a58-a8f2-948cf6a63376") : configmap "kube-root-ca.crt" not found May 13 23:49:26.266786 systemd[1]: Created slice kubepods-besteffort-pod0951d21a_8460_4eb7_8698_25535eaa5485.slice - libcontainer container kubepods-besteffort-pod0951d21a_8460_4eb7_8698_25535eaa5485.slice. May 13 23:49:26.324396 kubelet[2552]: I0513 23:49:26.324326 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6gj6\" (UniqueName: \"kubernetes.io/projected/0951d21a-8460-4eb7-8698-25535eaa5485-kube-api-access-d6gj6\") pod \"cilium-operator-5d85765b45-vc7dg\" (UID: \"0951d21a-8460-4eb7-8698-25535eaa5485\") " pod="kube-system/cilium-operator-5d85765b45-vc7dg" May 13 23:49:26.324396 kubelet[2552]: I0513 23:49:26.324388 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0951d21a-8460-4eb7-8698-25535eaa5485-cilium-config-path\") pod \"cilium-operator-5d85765b45-vc7dg\" (UID: \"0951d21a-8460-4eb7-8698-25535eaa5485\") " pod="kube-system/cilium-operator-5d85765b45-vc7dg" May 13 23:49:26.489624 containerd[1461]: time="2025-05-13T23:49:26.489572034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kq789,Uid:106af599-63b6-4fb8-b918-12d7880d4cbf,Namespace:kube-system,Attempt:0,}" May 13 23:49:26.501214 containerd[1461]: time="2025-05-13T23:49:26.501170715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cp9p5,Uid:84930ac2-c2b2-4a58-a8f2-948cf6a63376,Namespace:kube-system,Attempt:0,}" May 13 23:49:26.510409 containerd[1461]: time="2025-05-13T23:49:26.510362185Z" level=info msg="connecting to shim d1e4bf0985cf74cc903792f66457b79e215905ba70e36325beadd3e27eacd754" 
address="unix:///run/containerd/s/b375795fb6b8725dd63850614fe75eaaa8ac95b6ce83267152f14806704ce8dd" namespace=k8s.io protocol=ttrpc version=3 May 13 23:49:26.527615 containerd[1461]: time="2025-05-13T23:49:26.527325577Z" level=info msg="connecting to shim 5dfc8861b447ba84ab1955b9bab288f1ba6e85f5c504916a4ab0ffb1ee677213" address="unix:///run/containerd/s/3ee82e2db696a8242712284c0a1d0f5594e252e74c541aa9c8525e2d9ca1f41e" namespace=k8s.io protocol=ttrpc version=3 May 13 23:49:26.535525 systemd[1]: Started cri-containerd-d1e4bf0985cf74cc903792f66457b79e215905ba70e36325beadd3e27eacd754.scope - libcontainer container d1e4bf0985cf74cc903792f66457b79e215905ba70e36325beadd3e27eacd754. May 13 23:49:26.561486 systemd[1]: Started cri-containerd-5dfc8861b447ba84ab1955b9bab288f1ba6e85f5c504916a4ab0ffb1ee677213.scope - libcontainer container 5dfc8861b447ba84ab1955b9bab288f1ba6e85f5c504916a4ab0ffb1ee677213. May 13 23:49:26.564660 containerd[1461]: time="2025-05-13T23:49:26.564610949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kq789,Uid:106af599-63b6-4fb8-b918-12d7880d4cbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1e4bf0985cf74cc903792f66457b79e215905ba70e36325beadd3e27eacd754\"" May 13 23:49:26.571365 containerd[1461]: time="2025-05-13T23:49:26.571323968Z" level=info msg="CreateContainer within sandbox \"d1e4bf0985cf74cc903792f66457b79e215905ba70e36325beadd3e27eacd754\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 23:49:26.573256 containerd[1461]: time="2025-05-13T23:49:26.573207127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-vc7dg,Uid:0951d21a-8460-4eb7-8698-25535eaa5485,Namespace:kube-system,Attempt:0,}" May 13 23:49:26.594870 containerd[1461]: time="2025-05-13T23:49:26.594799094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cp9p5,Uid:84930ac2-c2b2-4a58-a8f2-948cf6a63376,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"5dfc8861b447ba84ab1955b9bab288f1ba6e85f5c504916a4ab0ffb1ee677213\"" May 13 23:49:26.597608 containerd[1461]: time="2025-05-13T23:49:26.597520511Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 23:49:26.602457 containerd[1461]: time="2025-05-13T23:49:26.602378211Z" level=info msg="Container 728e0f1f616cd95eb4d83d55bac03f83a988f76f35a932e19290c2b6664d726b: CDI devices from CRI Config.CDIDevices: []" May 13 23:49:26.607397 containerd[1461]: time="2025-05-13T23:49:26.607320554Z" level=info msg="connecting to shim 392401a4276824e21b894728fdc6f463ae85b8c148a2406dfd0780a0ec87e616" address="unix:///run/containerd/s/178cf2c25cfd6125f7864848ecd2c768950817415fab070ca1ed23632464a3e3" namespace=k8s.io protocol=ttrpc version=3 May 13 23:49:26.620774 containerd[1461]: time="2025-05-13T23:49:26.620711471Z" level=info msg="CreateContainer within sandbox \"d1e4bf0985cf74cc903792f66457b79e215905ba70e36325beadd3e27eacd754\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"728e0f1f616cd95eb4d83d55bac03f83a988f76f35a932e19290c2b6664d726b\"" May 13 23:49:26.621921 containerd[1461]: time="2025-05-13T23:49:26.621891696Z" level=info msg="StartContainer for \"728e0f1f616cd95eb4d83d55bac03f83a988f76f35a932e19290c2b6664d726b\"" May 13 23:49:26.624253 containerd[1461]: time="2025-05-13T23:49:26.624180983Z" level=info msg="connecting to shim 728e0f1f616cd95eb4d83d55bac03f83a988f76f35a932e19290c2b6664d726b" address="unix:///run/containerd/s/b375795fb6b8725dd63850614fe75eaaa8ac95b6ce83267152f14806704ce8dd" protocol=ttrpc version=3 May 13 23:49:26.635480 systemd[1]: Started cri-containerd-392401a4276824e21b894728fdc6f463ae85b8c148a2406dfd0780a0ec87e616.scope - libcontainer container 392401a4276824e21b894728fdc6f463ae85b8c148a2406dfd0780a0ec87e616. 
May 13 23:49:26.640346 systemd[1]: Started cri-containerd-728e0f1f616cd95eb4d83d55bac03f83a988f76f35a932e19290c2b6664d726b.scope - libcontainer container 728e0f1f616cd95eb4d83d55bac03f83a988f76f35a932e19290c2b6664d726b. May 13 23:49:26.678161 containerd[1461]: time="2025-05-13T23:49:26.677991978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-vc7dg,Uid:0951d21a-8460-4eb7-8698-25535eaa5485,Namespace:kube-system,Attempt:0,} returns sandbox id \"392401a4276824e21b894728fdc6f463ae85b8c148a2406dfd0780a0ec87e616\"" May 13 23:49:26.692033 containerd[1461]: time="2025-05-13T23:49:26.691996108Z" level=info msg="StartContainer for \"728e0f1f616cd95eb4d83d55bac03f83a988f76f35a932e19290c2b6664d726b\" returns successfully" May 13 23:49:27.434865 kubelet[2552]: I0513 23:49:27.434797 2552 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kq789" podStartSLOduration=2.434778575 podStartE2EDuration="2.434778575s" podCreationTimestamp="2025-05-13 23:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:49:27.434491009 +0000 UTC m=+7.142011314" watchObservedRunningTime="2025-05-13 23:49:27.434778575 +0000 UTC m=+7.142298880" May 13 23:49:33.555807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1870710507.mount: Deactivated successfully. 
May 13 23:49:36.677688 containerd[1461]: time="2025-05-13T23:49:36.677632840Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:36.678590 containerd[1461]: time="2025-05-13T23:49:36.678410849Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 13 23:49:36.679351 containerd[1461]: time="2025-05-13T23:49:36.679316298Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:36.680898 containerd[1461]: time="2025-05-13T23:49:36.680768074Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.083202162s" May 13 23:49:36.680898 containerd[1461]: time="2025-05-13T23:49:36.680805155Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 13 23:49:36.684130 containerd[1461]: time="2025-05-13T23:49:36.684095990Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 23:49:36.687666 containerd[1461]: time="2025-05-13T23:49:36.687558788Z" level=info msg="CreateContainer within sandbox \"5dfc8861b447ba84ab1955b9bab288f1ba6e85f5c504916a4ab0ffb1ee677213\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 23:49:36.709962 containerd[1461]: time="2025-05-13T23:49:36.709763909Z" level=info msg="Container 4f07b58fc23384a3428501bf46a3b07fa0a58d3a9283dbfa2e57dc4541e614a1: CDI devices from CRI Config.CDIDevices: []" May 13 23:49:36.711363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1776367885.mount: Deactivated successfully. May 13 23:49:36.716000 containerd[1461]: time="2025-05-13T23:49:36.715943176Z" level=info msg="CreateContainer within sandbox \"5dfc8861b447ba84ab1955b9bab288f1ba6e85f5c504916a4ab0ffb1ee677213\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4f07b58fc23384a3428501bf46a3b07fa0a58d3a9283dbfa2e57dc4541e614a1\"" May 13 23:49:36.716462 containerd[1461]: time="2025-05-13T23:49:36.716431062Z" level=info msg="StartContainer for \"4f07b58fc23384a3428501bf46a3b07fa0a58d3a9283dbfa2e57dc4541e614a1\"" May 13 23:49:36.717513 containerd[1461]: time="2025-05-13T23:49:36.717236750Z" level=info msg="connecting to shim 4f07b58fc23384a3428501bf46a3b07fa0a58d3a9283dbfa2e57dc4541e614a1" address="unix:///run/containerd/s/3ee82e2db696a8242712284c0a1d0f5594e252e74c541aa9c8525e2d9ca1f41e" protocol=ttrpc version=3 May 13 23:49:36.761468 systemd[1]: Started cri-containerd-4f07b58fc23384a3428501bf46a3b07fa0a58d3a9283dbfa2e57dc4541e614a1.scope - libcontainer container 4f07b58fc23384a3428501bf46a3b07fa0a58d3a9283dbfa2e57dc4541e614a1. May 13 23:49:36.812714 containerd[1461]: time="2025-05-13T23:49:36.812611107Z" level=info msg="StartContainer for \"4f07b58fc23384a3428501bf46a3b07fa0a58d3a9283dbfa2e57dc4541e614a1\" returns successfully" May 13 23:49:36.879653 systemd[1]: cri-containerd-4f07b58fc23384a3428501bf46a3b07fa0a58d3a9283dbfa2e57dc4541e614a1.scope: Deactivated successfully. May 13 23:49:36.880033 systemd[1]: cri-containerd-4f07b58fc23384a3428501bf46a3b07fa0a58d3a9283dbfa2e57dc4541e614a1.scope: Consumed 76ms CPU time, 8.8M memory peak, 3.1M written to disk. 
May 13 23:49:36.908538 containerd[1461]: time="2025-05-13T23:49:36.908478188Z" level=info msg="received exit event container_id:\"4f07b58fc23384a3428501bf46a3b07fa0a58d3a9283dbfa2e57dc4541e614a1\" id:\"4f07b58fc23384a3428501bf46a3b07fa0a58d3a9283dbfa2e57dc4541e614a1\" pid:2973 exited_at:{seconds:1747180176 nanos:898017075}"
May 13 23:49:36.908742 containerd[1461]: time="2025-05-13T23:49:36.908598110Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4f07b58fc23384a3428501bf46a3b07fa0a58d3a9283dbfa2e57dc4541e614a1\" id:\"4f07b58fc23384a3428501bf46a3b07fa0a58d3a9283dbfa2e57dc4541e614a1\" pid:2973 exited_at:{seconds:1747180176 nanos:898017075}"
May 13 23:49:37.455209 containerd[1461]: time="2025-05-13T23:49:37.455166500Z" level=info msg="CreateContainer within sandbox \"5dfc8861b447ba84ab1955b9bab288f1ba6e85f5c504916a4ab0ffb1ee677213\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 13 23:49:37.463864 containerd[1461]: time="2025-05-13T23:49:37.463813869Z" level=info msg="Container 418d8df77c28735ac0116eb2f9970cde870c2e4aa7ba87e306514df339d1c4b8: CDI devices from CRI Config.CDIDevices: []"
May 13 23:49:37.469739 containerd[1461]: time="2025-05-13T23:49:37.469641448Z" level=info msg="CreateContainer within sandbox \"5dfc8861b447ba84ab1955b9bab288f1ba6e85f5c504916a4ab0ffb1ee677213\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"418d8df77c28735ac0116eb2f9970cde870c2e4aa7ba87e306514df339d1c4b8\""
May 13 23:49:37.470942 containerd[1461]: time="2025-05-13T23:49:37.470508937Z" level=info msg="StartContainer for \"418d8df77c28735ac0116eb2f9970cde870c2e4aa7ba87e306514df339d1c4b8\""
May 13 23:49:37.472023 containerd[1461]: time="2025-05-13T23:49:37.471377306Z" level=info msg="connecting to shim 418d8df77c28735ac0116eb2f9970cde870c2e4aa7ba87e306514df339d1c4b8" address="unix:///run/containerd/s/3ee82e2db696a8242712284c0a1d0f5594e252e74c541aa9c8525e2d9ca1f41e" protocol=ttrpc version=3
May 13 23:49:37.491457 systemd[1]: Started cri-containerd-418d8df77c28735ac0116eb2f9970cde870c2e4aa7ba87e306514df339d1c4b8.scope - libcontainer container 418d8df77c28735ac0116eb2f9970cde870c2e4aa7ba87e306514df339d1c4b8.
May 13 23:49:37.520545 containerd[1461]: time="2025-05-13T23:49:37.520507006Z" level=info msg="StartContainer for \"418d8df77c28735ac0116eb2f9970cde870c2e4aa7ba87e306514df339d1c4b8\" returns successfully"
May 13 23:49:37.542004 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 23:49:37.542249 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 23:49:37.542463 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 13 23:49:37.543999 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:49:37.544351 update_engine[1453]: I20250513 23:49:37.544301 1453 update_attempter.cc:509] Updating boot flags...
May 13 23:49:37.545806 systemd[1]: cri-containerd-418d8df77c28735ac0116eb2f9970cde870c2e4aa7ba87e306514df339d1c4b8.scope: Deactivated successfully.
May 13 23:49:37.556420 containerd[1461]: time="2025-05-13T23:49:37.556369131Z" level=info msg="received exit event container_id:\"418d8df77c28735ac0116eb2f9970cde870c2e4aa7ba87e306514df339d1c4b8\" id:\"418d8df77c28735ac0116eb2f9970cde870c2e4aa7ba87e306514df339d1c4b8\" pid:3018 exited_at:{seconds:1747180177 nanos:556139329}"
May 13 23:49:37.556686 containerd[1461]: time="2025-05-13T23:49:37.556662374Z" level=info msg="TaskExit event in podsandbox handler container_id:\"418d8df77c28735ac0116eb2f9970cde870c2e4aa7ba87e306514df339d1c4b8\" id:\"418d8df77c28735ac0116eb2f9970cde870c2e4aa7ba87e306514df339d1c4b8\" pid:3018 exited_at:{seconds:1747180177 nanos:556139329}"
May 13 23:49:37.583833 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:49:37.584350 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3054)
May 13 23:49:37.622370 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3057)
May 13 23:49:37.654364 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3057)
May 13 23:49:37.694893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f07b58fc23384a3428501bf46a3b07fa0a58d3a9283dbfa2e57dc4541e614a1-rootfs.mount: Deactivated successfully.
May 13 23:49:38.456229 containerd[1461]: time="2025-05-13T23:49:38.456181932Z" level=info msg="CreateContainer within sandbox \"5dfc8861b447ba84ab1955b9bab288f1ba6e85f5c504916a4ab0ffb1ee677213\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 23:49:38.464805 containerd[1461]: time="2025-05-13T23:49:38.464763294Z" level=info msg="Container af81a09e13df7a47189ebc92021cdd6a2fb8e94fed356591b6e558f8a5a8b021: CDI devices from CRI Config.CDIDevices: []"
May 13 23:49:38.486154 containerd[1461]: time="2025-05-13T23:49:38.486089417Z" level=info msg="CreateContainer within sandbox \"5dfc8861b447ba84ab1955b9bab288f1ba6e85f5c504916a4ab0ffb1ee677213\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"af81a09e13df7a47189ebc92021cdd6a2fb8e94fed356591b6e558f8a5a8b021\""
May 13 23:49:38.487680 containerd[1461]: time="2025-05-13T23:49:38.487656072Z" level=info msg="StartContainer for \"af81a09e13df7a47189ebc92021cdd6a2fb8e94fed356591b6e558f8a5a8b021\""
May 13 23:49:38.489059 containerd[1461]: time="2025-05-13T23:49:38.489026605Z" level=info msg="connecting to shim af81a09e13df7a47189ebc92021cdd6a2fb8e94fed356591b6e558f8a5a8b021" address="unix:///run/containerd/s/3ee82e2db696a8242712284c0a1d0f5594e252e74c541aa9c8525e2d9ca1f41e" protocol=ttrpc version=3
May 13 23:49:38.509444 systemd[1]: Started cri-containerd-af81a09e13df7a47189ebc92021cdd6a2fb8e94fed356591b6e558f8a5a8b021.scope - libcontainer container af81a09e13df7a47189ebc92021cdd6a2fb8e94fed356591b6e558f8a5a8b021.
May 13 23:49:38.542740 containerd[1461]: time="2025-05-13T23:49:38.542699918Z" level=info msg="StartContainer for \"af81a09e13df7a47189ebc92021cdd6a2fb8e94fed356591b6e558f8a5a8b021\" returns successfully"
May 13 23:49:38.567503 systemd[1]: cri-containerd-af81a09e13df7a47189ebc92021cdd6a2fb8e94fed356591b6e558f8a5a8b021.scope: Deactivated successfully.
May 13 23:49:38.568852 containerd[1461]: time="2025-05-13T23:49:38.568802767Z" level=info msg="received exit event container_id:\"af81a09e13df7a47189ebc92021cdd6a2fb8e94fed356591b6e558f8a5a8b021\" id:\"af81a09e13df7a47189ebc92021cdd6a2fb8e94fed356591b6e558f8a5a8b021\" pid:3080 exited_at:{seconds:1747180178 nanos:568069520}"
May 13 23:49:38.569387 containerd[1461]: time="2025-05-13T23:49:38.569182131Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af81a09e13df7a47189ebc92021cdd6a2fb8e94fed356591b6e558f8a5a8b021\" id:\"af81a09e13df7a47189ebc92021cdd6a2fb8e94fed356591b6e558f8a5a8b021\" pid:3080 exited_at:{seconds:1747180178 nanos:568069520}"
May 13 23:49:38.588135 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af81a09e13df7a47189ebc92021cdd6a2fb8e94fed356591b6e558f8a5a8b021-rootfs.mount: Deactivated successfully.
May 13 23:49:38.951489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3746198359.mount: Deactivated successfully.
May 13 23:49:39.461062 containerd[1461]: time="2025-05-13T23:49:39.460939855Z" level=info msg="CreateContainer within sandbox \"5dfc8861b447ba84ab1955b9bab288f1ba6e85f5c504916a4ab0ffb1ee677213\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 23:49:39.483296 containerd[1461]: time="2025-05-13T23:49:39.481462159Z" level=info msg="Container 72384b1e5c9f6a84821d9bdc9daf000764d91636f64b67927de91840154beaaa: CDI devices from CRI Config.CDIDevices: []"
May 13 23:49:39.499869 containerd[1461]: time="2025-05-13T23:49:39.499830564Z" level=info msg="CreateContainer within sandbox \"5dfc8861b447ba84ab1955b9bab288f1ba6e85f5c504916a4ab0ffb1ee677213\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"72384b1e5c9f6a84821d9bdc9daf000764d91636f64b67927de91840154beaaa\""
May 13 23:49:39.500520 containerd[1461]: time="2025-05-13T23:49:39.500489169Z" level=info msg="StartContainer for \"72384b1e5c9f6a84821d9bdc9daf000764d91636f64b67927de91840154beaaa\""
May 13 23:49:39.504528 containerd[1461]: time="2025-05-13T23:49:39.504497845Z" level=info msg="connecting to shim 72384b1e5c9f6a84821d9bdc9daf000764d91636f64b67927de91840154beaaa" address="unix:///run/containerd/s/3ee82e2db696a8242712284c0a1d0f5594e252e74c541aa9c8525e2d9ca1f41e" protocol=ttrpc version=3
May 13 23:49:39.532313 systemd[1]: Started cri-containerd-72384b1e5c9f6a84821d9bdc9daf000764d91636f64b67927de91840154beaaa.scope - libcontainer container 72384b1e5c9f6a84821d9bdc9daf000764d91636f64b67927de91840154beaaa.
May 13 23:49:39.568657 systemd[1]: cri-containerd-72384b1e5c9f6a84821d9bdc9daf000764d91636f64b67927de91840154beaaa.scope: Deactivated successfully.
May 13 23:49:39.569449 containerd[1461]: time="2025-05-13T23:49:39.569254985Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72384b1e5c9f6a84821d9bdc9daf000764d91636f64b67927de91840154beaaa\" id:\"72384b1e5c9f6a84821d9bdc9daf000764d91636f64b67927de91840154beaaa\" pid:3132 exited_at:{seconds:1747180179 nanos:569033343}"
May 13 23:49:39.574447 containerd[1461]: time="2025-05-13T23:49:39.574322991Z" level=info msg="StartContainer for \"72384b1e5c9f6a84821d9bdc9daf000764d91636f64b67927de91840154beaaa\" returns successfully"
May 13 23:49:39.587025 containerd[1461]: time="2025-05-13T23:49:39.586824102Z" level=info msg="received exit event container_id:\"72384b1e5c9f6a84821d9bdc9daf000764d91636f64b67927de91840154beaaa\" id:\"72384b1e5c9f6a84821d9bdc9daf000764d91636f64b67927de91840154beaaa\" pid:3132 exited_at:{seconds:1747180179 nanos:569033343}"
May 13 23:49:39.613474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72384b1e5c9f6a84821d9bdc9daf000764d91636f64b67927de91840154beaaa-rootfs.mount: Deactivated successfully.
May 13 23:49:39.775297 containerd[1461]: time="2025-05-13T23:49:39.775211829Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:49:39.776157 containerd[1461]: time="2025-05-13T23:49:39.775761314Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
May 13 23:49:39.776636 containerd[1461]: time="2025-05-13T23:49:39.776613922Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:49:39.777896 containerd[1461]: time="2025-05-13T23:49:39.777868293Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.093736542s"
May 13 23:49:39.777950 containerd[1461]: time="2025-05-13T23:49:39.777905013Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
May 13 23:49:39.780320 containerd[1461]: time="2025-05-13T23:49:39.780286594Z" level=info msg="CreateContainer within sandbox \"392401a4276824e21b894728fdc6f463ae85b8c148a2406dfd0780a0ec87e616\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 13 23:49:39.786594 containerd[1461]: time="2025-05-13T23:49:39.786539410Z" level=info msg="Container d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0: CDI devices from CRI Config.CDIDevices: []"
May 13 23:49:39.796823 containerd[1461]: time="2025-05-13T23:49:39.796770302Z" level=info msg="CreateContainer within sandbox \"392401a4276824e21b894728fdc6f463ae85b8c148a2406dfd0780a0ec87e616\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0\""
May 13 23:49:39.798104 containerd[1461]: time="2025-05-13T23:49:39.797367467Z" level=info msg="StartContainer for \"d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0\""
May 13 23:49:39.798244 containerd[1461]: time="2025-05-13T23:49:39.798211355Z" level=info msg="connecting to shim d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0" address="unix:///run/containerd/s/178cf2c25cfd6125f7864848ecd2c768950817415fab070ca1ed23632464a3e3" protocol=ttrpc version=3
May 13 23:49:39.836510 systemd[1]: Started cri-containerd-d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0.scope - libcontainer container d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0.
May 13 23:49:39.889926 containerd[1461]: time="2025-05-13T23:49:39.887147791Z" level=info msg="StartContainer for \"d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0\" returns successfully"
May 13 23:49:40.477741 containerd[1461]: time="2025-05-13T23:49:40.477595771Z" level=info msg="CreateContainer within sandbox \"5dfc8861b447ba84ab1955b9bab288f1ba6e85f5c504916a4ab0ffb1ee677213\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 23:49:40.498520 containerd[1461]: time="2025-05-13T23:49:40.498469106Z" level=info msg="Container 4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea: CDI devices from CRI Config.CDIDevices: []"
May 13 23:49:40.513014 containerd[1461]: time="2025-05-13T23:49:40.512125621Z" level=info msg="CreateContainer within sandbox \"5dfc8861b447ba84ab1955b9bab288f1ba6e85f5c504916a4ab0ffb1ee677213\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea\""
May 13 23:49:40.513014 containerd[1461]: time="2025-05-13T23:49:40.512841027Z" level=info msg="StartContainer for \"4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea\""
May 13 23:49:40.513801 containerd[1461]: time="2025-05-13T23:49:40.513767674Z" level=info msg="connecting to shim 4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea" address="unix:///run/containerd/s/3ee82e2db696a8242712284c0a1d0f5594e252e74c541aa9c8525e2d9ca1f41e" protocol=ttrpc version=3
May 13 23:49:40.542172 kubelet[2552]: I0513 23:49:40.541457 2552 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-vc7dg" podStartSLOduration=1.443321962 podStartE2EDuration="14.541433867s" podCreationTimestamp="2025-05-13 23:49:26 +0000 UTC" firstStartedPulling="2025-05-13 23:49:26.680886398 +0000 UTC m=+6.388406703" lastFinishedPulling="2025-05-13 23:49:39.778998343 +0000 UTC m=+19.486518608" observedRunningTime="2025-05-13 23:49:40.539060367 +0000 UTC m=+20.246580672" watchObservedRunningTime="2025-05-13 23:49:40.541433867 +0000 UTC m=+20.248954212"
May 13 23:49:40.563469 systemd[1]: Started cri-containerd-4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea.scope - libcontainer container 4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea.
May 13 23:49:40.607833 containerd[1461]: time="2025-05-13T23:49:40.607763223Z" level=info msg="StartContainer for \"4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea\" returns successfully"
May 13 23:49:40.786486 containerd[1461]: time="2025-05-13T23:49:40.786203561Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea\" id:\"0acbf1f6e2a26da7e15bf4c4cc0950a7ca4432fbfa3844f16e218167e76c5d85\" pid:3239 exited_at:{seconds:1747180180 nanos:785913358}"
May 13 23:49:40.842344 kubelet[2552]: I0513 23:49:40.842302 2552 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 13 23:49:40.921943 systemd[1]: Created slice kubepods-burstable-pode852da65_adb9_4ba7_a365_d0f094875308.slice - libcontainer container kubepods-burstable-pode852da65_adb9_4ba7_a365_d0f094875308.slice.
May 13 23:49:40.931417 systemd[1]: Created slice kubepods-burstable-podeef14455_df04_4639_982c_106fa32bdc46.slice - libcontainer container kubepods-burstable-podeef14455_df04_4639_982c_106fa32bdc46.slice.
May 13 23:49:40.939464 kubelet[2552]: I0513 23:49:40.939302 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p7ph\" (UniqueName: \"kubernetes.io/projected/eef14455-df04-4639-982c-106fa32bdc46-kube-api-access-5p7ph\") pod \"coredns-6f6b679f8f-9nxcs\" (UID: \"eef14455-df04-4639-982c-106fa32bdc46\") " pod="kube-system/coredns-6f6b679f8f-9nxcs"
May 13 23:49:40.939464 kubelet[2552]: I0513 23:49:40.939351 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e852da65-adb9-4ba7-a365-d0f094875308-config-volume\") pod \"coredns-6f6b679f8f-wb47n\" (UID: \"e852da65-adb9-4ba7-a365-d0f094875308\") " pod="kube-system/coredns-6f6b679f8f-wb47n"
May 13 23:49:40.939464 kubelet[2552]: I0513 23:49:40.939374 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vx52\" (UniqueName: \"kubernetes.io/projected/e852da65-adb9-4ba7-a365-d0f094875308-kube-api-access-5vx52\") pod \"coredns-6f6b679f8f-wb47n\" (UID: \"e852da65-adb9-4ba7-a365-d0f094875308\") " pod="kube-system/coredns-6f6b679f8f-wb47n"
May 13 23:49:40.939464 kubelet[2552]: I0513 23:49:40.939391 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eef14455-df04-4639-982c-106fa32bdc46-config-volume\") pod \"coredns-6f6b679f8f-9nxcs\" (UID: \"eef14455-df04-4639-982c-106fa32bdc46\") " pod="kube-system/coredns-6f6b679f8f-9nxcs"
May 13 23:49:41.226801 containerd[1461]: time="2025-05-13T23:49:41.226676581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wb47n,Uid:e852da65-adb9-4ba7-a365-d0f094875308,Namespace:kube-system,Attempt:0,}"
May 13 23:49:41.235714 containerd[1461]: time="2025-05-13T23:49:41.235673011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9nxcs,Uid:eef14455-df04-4639-982c-106fa32bdc46,Namespace:kube-system,Attempt:0,}"
May 13 23:49:41.521882 kubelet[2552]: I0513 23:49:41.521773 2552 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cp9p5" podStartSLOduration=6.435274777 podStartE2EDuration="16.521755223s" podCreationTimestamp="2025-05-13 23:49:25 +0000 UTC" firstStartedPulling="2025-05-13 23:49:26.596814016 +0000 UTC m=+6.304334321" lastFinishedPulling="2025-05-13 23:49:36.683294462 +0000 UTC m=+16.390814767" observedRunningTime="2025-05-13 23:49:41.520761215 +0000 UTC m=+21.228281520" watchObservedRunningTime="2025-05-13 23:49:41.521755223 +0000 UTC m=+21.229275528"
May 13 23:49:44.017288 systemd-networkd[1405]: cilium_host: Link UP
May 13 23:49:44.017469 systemd-networkd[1405]: cilium_net: Link UP
May 13 23:49:44.017621 systemd-networkd[1405]: cilium_net: Gained carrier
May 13 23:49:44.017750 systemd-networkd[1405]: cilium_host: Gained carrier
May 13 23:49:44.017841 systemd-networkd[1405]: cilium_net: Gained IPv6LL
May 13 23:49:44.017964 systemd-networkd[1405]: cilium_host: Gained IPv6LL
May 13 23:49:44.128491 systemd-networkd[1405]: cilium_vxlan: Link UP
May 13 23:49:44.128497 systemd-networkd[1405]: cilium_vxlan: Gained carrier
May 13 23:49:44.516348 kernel: NET: Registered PF_ALG protocol family
May 13 23:49:45.141705 systemd-networkd[1405]: lxc_health: Link UP
May 13 23:49:45.144561 systemd-networkd[1405]: lxc_health: Gained carrier
May 13 23:49:45.426894 systemd-networkd[1405]: lxc73da35cb1e2a: Link UP
May 13 23:49:45.427613 kernel: eth0: renamed from tmp39073
May 13 23:49:45.442344 kernel: eth0: renamed from tmp2a0b5
May 13 23:49:45.451449 systemd-networkd[1405]: lxc7c009d7650de: Link UP
May 13 23:49:45.452175 systemd-networkd[1405]: lxc73da35cb1e2a: Gained carrier
May 13 23:49:45.452704 systemd-networkd[1405]: lxc7c009d7650de: Gained carrier
May 13 23:49:46.145446 systemd-networkd[1405]: cilium_vxlan: Gained IPv6LL
May 13 23:49:46.785412 systemd-networkd[1405]: lxc_health: Gained IPv6LL
May 13 23:49:47.040432 systemd-networkd[1405]: lxc73da35cb1e2a: Gained IPv6LL
May 13 23:49:47.424411 systemd-networkd[1405]: lxc7c009d7650de: Gained IPv6LL
May 13 23:49:49.259894 containerd[1461]: time="2025-05-13T23:49:49.259821920Z" level=info msg="connecting to shim 2a0b5881bf4f401e729e265d83db0a959330296adb462bd2ea790731d0c26ede" address="unix:///run/containerd/s/67152a572a426424ef9763ee2ab87ecf1912f5d48ee31e455db05f8fa3733840" namespace=k8s.io protocol=ttrpc version=3
May 13 23:49:49.263049 containerd[1461]: time="2025-05-13T23:49:49.260112082Z" level=info msg="connecting to shim 39073ee56878c9155282be83c22f1b7a5d3f4921342877d71d0b9939363030c9" address="unix:///run/containerd/s/a09550d98fb61a4b337fd9131ac389ce4d9bf5e6c050ffc45b2f86dab93b6518" namespace=k8s.io protocol=ttrpc version=3
May 13 23:49:49.294440 systemd[1]: Started cri-containerd-2a0b5881bf4f401e729e265d83db0a959330296adb462bd2ea790731d0c26ede.scope - libcontainer container 2a0b5881bf4f401e729e265d83db0a959330296adb462bd2ea790731d0c26ede.
May 13 23:49:49.297618 systemd[1]: Started cri-containerd-39073ee56878c9155282be83c22f1b7a5d3f4921342877d71d0b9939363030c9.scope - libcontainer container 39073ee56878c9155282be83c22f1b7a5d3f4921342877d71d0b9939363030c9.
May 13 23:49:49.310971 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 23:49:49.314218 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 23:49:49.342529 containerd[1461]: time="2025-05-13T23:49:49.342486668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wb47n,Uid:e852da65-adb9-4ba7-a365-d0f094875308,Namespace:kube-system,Attempt:0,} returns sandbox id \"39073ee56878c9155282be83c22f1b7a5d3f4921342877d71d0b9939363030c9\""
May 13 23:49:49.345313 containerd[1461]: time="2025-05-13T23:49:49.345152521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9nxcs,Uid:eef14455-df04-4639-982c-106fa32bdc46,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a0b5881bf4f401e729e265d83db0a959330296adb462bd2ea790731d0c26ede\""
May 13 23:49:49.346986 containerd[1461]: time="2025-05-13T23:49:49.346449167Z" level=info msg="CreateContainer within sandbox \"39073ee56878c9155282be83c22f1b7a5d3f4921342877d71d0b9939363030c9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 23:49:49.348766 containerd[1461]: time="2025-05-13T23:49:49.348443496Z" level=info msg="CreateContainer within sandbox \"2a0b5881bf4f401e729e265d83db0a959330296adb462bd2ea790731d0c26ede\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 23:49:49.358523 containerd[1461]: time="2025-05-13T23:49:49.358471103Z" level=info msg="Container 365552a645f92be7742b9032a8609c5927558b3af7cadd6b870d4bea04098321: CDI devices from CRI Config.CDIDevices: []"
May 13 23:49:49.364536 containerd[1461]: time="2025-05-13T23:49:49.364495132Z" level=info msg="Container ff384dd25c6ceeddb297fbb99e03986bd6e3b28e8bae33f3a0354ee75e8bb65e: CDI devices from CRI Config.CDIDevices: []"
May 13 23:49:49.369010 containerd[1461]: time="2025-05-13T23:49:49.368948393Z" level=info msg="CreateContainer within sandbox \"2a0b5881bf4f401e729e265d83db0a959330296adb462bd2ea790731d0c26ede\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"365552a645f92be7742b9032a8609c5927558b3af7cadd6b870d4bea04098321\""
May 13 23:49:49.371966 containerd[1461]: time="2025-05-13T23:49:49.369590716Z" level=info msg="StartContainer for \"365552a645f92be7742b9032a8609c5927558b3af7cadd6b870d4bea04098321\""
May 13 23:49:49.371966 containerd[1461]: time="2025-05-13T23:49:49.370479640Z" level=info msg="connecting to shim 365552a645f92be7742b9032a8609c5927558b3af7cadd6b870d4bea04098321" address="unix:///run/containerd/s/67152a572a426424ef9763ee2ab87ecf1912f5d48ee31e455db05f8fa3733840" protocol=ttrpc version=3
May 13 23:49:49.374801 containerd[1461]: time="2025-05-13T23:49:49.374755420Z" level=info msg="CreateContainer within sandbox \"39073ee56878c9155282be83c22f1b7a5d3f4921342877d71d0b9939363030c9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ff384dd25c6ceeddb297fbb99e03986bd6e3b28e8bae33f3a0354ee75e8bb65e\""
May 13 23:49:49.376497 containerd[1461]: time="2025-05-13T23:49:49.375402943Z" level=info msg="StartContainer for \"ff384dd25c6ceeddb297fbb99e03986bd6e3b28e8bae33f3a0354ee75e8bb65e\""
May 13 23:49:49.376497 containerd[1461]: time="2025-05-13T23:49:49.376204307Z" level=info msg="connecting to shim ff384dd25c6ceeddb297fbb99e03986bd6e3b28e8bae33f3a0354ee75e8bb65e" address="unix:///run/containerd/s/a09550d98fb61a4b337fd9131ac389ce4d9bf5e6c050ffc45b2f86dab93b6518" protocol=ttrpc version=3
May 13 23:49:49.404468 systemd[1]: Started cri-containerd-365552a645f92be7742b9032a8609c5927558b3af7cadd6b870d4bea04098321.scope - libcontainer container 365552a645f92be7742b9032a8609c5927558b3af7cadd6b870d4bea04098321.
May 13 23:49:49.420463 systemd[1]: Started cri-containerd-ff384dd25c6ceeddb297fbb99e03986bd6e3b28e8bae33f3a0354ee75e8bb65e.scope - libcontainer container ff384dd25c6ceeddb297fbb99e03986bd6e3b28e8bae33f3a0354ee75e8bb65e.
May 13 23:49:49.454916 containerd[1461]: time="2025-05-13T23:49:49.454876556Z" level=info msg="StartContainer for \"365552a645f92be7742b9032a8609c5927558b3af7cadd6b870d4bea04098321\" returns successfully"
May 13 23:49:49.500506 containerd[1461]: time="2025-05-13T23:49:49.500454370Z" level=info msg="StartContainer for \"ff384dd25c6ceeddb297fbb99e03986bd6e3b28e8bae33f3a0354ee75e8bb65e\" returns successfully"
May 13 23:49:49.575875 kubelet[2552]: I0513 23:49:49.575261 2552 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-9nxcs" podStartSLOduration=23.575244401 podStartE2EDuration="23.575244401s" podCreationTimestamp="2025-05-13 23:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:49:49.571761185 +0000 UTC m=+29.279281610" watchObservedRunningTime="2025-05-13 23:49:49.575244401 +0000 UTC m=+29.282764706"
May 13 23:49:49.593146 kubelet[2552]: I0513 23:49:49.592660 2552 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-wb47n" podStartSLOduration=23.592643683 podStartE2EDuration="23.592643683s" podCreationTimestamp="2025-05-13 23:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:49:49.59193112 +0000 UTC m=+29.299451465" watchObservedRunningTime="2025-05-13 23:49:49.592643683 +0000 UTC m=+29.300163988"
May 13 23:49:50.128148 systemd[1]: Started sshd@7-10.0.0.85:22-10.0.0.1:52252.service - OpenSSH per-connection server daemon (10.0.0.1:52252).
May 13 23:49:50.199773 sshd[3901]: Accepted publickey for core from 10.0.0.1 port 52252 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:49:50.201887 sshd-session[3901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:49:50.206570 systemd-logind[1450]: New session 8 of user core.
May 13 23:49:50.216477 systemd[1]: Started session-8.scope - Session 8 of User core.
May 13 23:49:50.225025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3677952065.mount: Deactivated successfully.
May 13 23:49:50.356099 sshd[3903]: Connection closed by 10.0.0.1 port 52252
May 13 23:49:50.357604 sshd-session[3901]: pam_unix(sshd:session): session closed for user core
May 13 23:49:50.365668 systemd[1]: sshd@7-10.0.0.85:22-10.0.0.1:52252.service: Deactivated successfully.
May 13 23:49:50.368957 systemd[1]: session-8.scope: Deactivated successfully.
May 13 23:49:50.370101 systemd-logind[1450]: Session 8 logged out. Waiting for processes to exit.
May 13 23:49:50.371186 systemd-logind[1450]: Removed session 8.
May 13 23:49:55.368474 systemd[1]: Started sshd@8-10.0.0.85:22-10.0.0.1:42756.service - OpenSSH per-connection server daemon (10.0.0.1:42756).
May 13 23:49:55.435858 sshd[3927]: Accepted publickey for core from 10.0.0.1 port 42756 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:49:55.437671 sshd-session[3927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:49:55.444375 systemd-logind[1450]: New session 9 of user core.
May 13 23:49:55.455520 systemd[1]: Started session-9.scope - Session 9 of User core.
May 13 23:49:55.584326 sshd[3929]: Connection closed by 10.0.0.1 port 42756
May 13 23:49:55.584560 sshd-session[3927]: pam_unix(sshd:session): session closed for user core
May 13 23:49:55.588240 systemd[1]: sshd@8-10.0.0.85:22-10.0.0.1:42756.service: Deactivated successfully.
May 13 23:49:55.590092 systemd[1]: session-9.scope: Deactivated successfully.
May 13 23:49:55.594319 systemd-logind[1450]: Session 9 logged out. Waiting for processes to exit.
May 13 23:49:55.596705 systemd-logind[1450]: Removed session 9.
May 13 23:50:00.594477 systemd[1]: Started sshd@9-10.0.0.85:22-10.0.0.1:42758.service - OpenSSH per-connection server daemon (10.0.0.1:42758).
May 13 23:50:00.643618 sshd[3945]: Accepted publickey for core from 10.0.0.1 port 42758 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:50:00.645209 sshd-session[3945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:50:00.651321 systemd-logind[1450]: New session 10 of user core.
May 13 23:50:00.657432 systemd[1]: Started session-10.scope - Session 10 of User core.
May 13 23:50:00.785732 sshd[3947]: Connection closed by 10.0.0.1 port 42758
May 13 23:50:00.786302 sshd-session[3945]: pam_unix(sshd:session): session closed for user core
May 13 23:50:00.795466 systemd[1]: sshd@9-10.0.0.85:22-10.0.0.1:42758.service: Deactivated successfully.
May 13 23:50:00.797188 systemd[1]: session-10.scope: Deactivated successfully.
May 13 23:50:00.798914 systemd-logind[1450]: Session 10 logged out. Waiting for processes to exit.
May 13 23:50:00.800863 systemd[1]: Started sshd@10-10.0.0.85:22-10.0.0.1:42770.service - OpenSSH per-connection server daemon (10.0.0.1:42770).
May 13 23:50:00.802152 systemd-logind[1450]: Removed session 10.
May 13 23:50:00.858929 sshd[3960]: Accepted publickey for core from 10.0.0.1 port 42770 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:50:00.860241 sshd-session[3960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:50:00.868198 systemd-logind[1450]: New session 11 of user core.
May 13 23:50:00.878504 systemd[1]: Started session-11.scope - Session 11 of User core.
May 13 23:50:01.045348 sshd[3963]: Connection closed by 10.0.0.1 port 42770
May 13 23:50:01.046846 sshd-session[3960]: pam_unix(sshd:session): session closed for user core
May 13 23:50:01.056402 systemd[1]: sshd@10-10.0.0.85:22-10.0.0.1:42770.service: Deactivated successfully.
May 13 23:50:01.057869 systemd[1]: session-11.scope: Deactivated successfully.
May 13 23:50:01.060677 systemd-logind[1450]: Session 11 logged out. Waiting for processes to exit.
May 13 23:50:01.065627 systemd[1]: Started sshd@11-10.0.0.85:22-10.0.0.1:42778.service - OpenSSH per-connection server daemon (10.0.0.1:42778).
May 13 23:50:01.066811 systemd-logind[1450]: Removed session 11.
May 13 23:50:01.124717 sshd[3974]: Accepted publickey for core from 10.0.0.1 port 42778 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:50:01.125959 sshd-session[3974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:50:01.131495 systemd-logind[1450]: New session 12 of user core.
May 13 23:50:01.141506 systemd[1]: Started session-12.scope - Session 12 of User core.
May 13 23:50:01.264767 sshd[3977]: Connection closed by 10.0.0.1 port 42778
May 13 23:50:01.265974 sshd-session[3974]: pam_unix(sshd:session): session closed for user core
May 13 23:50:01.270954 systemd[1]: sshd@11-10.0.0.85:22-10.0.0.1:42778.service: Deactivated successfully.
May 13 23:50:01.273185 systemd[1]: session-12.scope: Deactivated successfully.
May 13 23:50:01.274055 systemd-logind[1450]: Session 12 logged out. Waiting for processes to exit.
May 13 23:50:01.275061 systemd-logind[1450]: Removed session 12.
May 13 23:50:06.279684 systemd[1]: Started sshd@12-10.0.0.85:22-10.0.0.1:54836.service - OpenSSH per-connection server daemon (10.0.0.1:54836).
May 13 23:50:06.332502 sshd[3990]: Accepted publickey for core from 10.0.0.1 port 54836 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:50:06.333917 sshd-session[3990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:50:06.342124 systemd-logind[1450]: New session 13 of user core. May 13 23:50:06.351497 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 23:50:06.476258 sshd[3992]: Connection closed by 10.0.0.1 port 54836 May 13 23:50:06.476704 sshd-session[3990]: pam_unix(sshd:session): session closed for user core May 13 23:50:06.482547 systemd[1]: sshd@12-10.0.0.85:22-10.0.0.1:54836.service: Deactivated successfully. May 13 23:50:06.486413 systemd[1]: session-13.scope: Deactivated successfully. May 13 23:50:06.487620 systemd-logind[1450]: Session 13 logged out. Waiting for processes to exit. May 13 23:50:06.488845 systemd-logind[1450]: Removed session 13. May 13 23:50:11.495053 systemd[1]: Started sshd@13-10.0.0.85:22-10.0.0.1:54846.service - OpenSSH per-connection server daemon (10.0.0.1:54846). May 13 23:50:11.552212 sshd[4006]: Accepted publickey for core from 10.0.0.1 port 54846 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:50:11.552827 sshd-session[4006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:50:11.564758 systemd-logind[1450]: New session 14 of user core. May 13 23:50:11.577540 systemd[1]: Started session-14.scope - Session 14 of User core. May 13 23:50:11.727952 sshd[4008]: Connection closed by 10.0.0.1 port 54846 May 13 23:50:11.728734 sshd-session[4006]: pam_unix(sshd:session): session closed for user core May 13 23:50:11.746928 systemd[1]: sshd@13-10.0.0.85:22-10.0.0.1:54846.service: Deactivated successfully. May 13 23:50:11.751003 systemd[1]: session-14.scope: Deactivated successfully. May 13 23:50:11.752101 systemd-logind[1450]: Session 14 logged out. Waiting for processes to exit. 
May 13 23:50:11.756654 systemd[1]: Started sshd@14-10.0.0.85:22-10.0.0.1:54852.service - OpenSSH per-connection server daemon (10.0.0.1:54852). May 13 23:50:11.758286 systemd-logind[1450]: Removed session 14. May 13 23:50:11.820190 sshd[4020]: Accepted publickey for core from 10.0.0.1 port 54852 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:50:11.821910 sshd-session[4020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:50:11.827844 systemd-logind[1450]: New session 15 of user core. May 13 23:50:11.842533 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 23:50:12.169944 sshd[4023]: Connection closed by 10.0.0.1 port 54852 May 13 23:50:12.170789 sshd-session[4020]: pam_unix(sshd:session): session closed for user core May 13 23:50:12.178186 systemd[1]: sshd@14-10.0.0.85:22-10.0.0.1:54852.service: Deactivated successfully. May 13 23:50:12.180375 systemd[1]: session-15.scope: Deactivated successfully. May 13 23:50:12.181287 systemd-logind[1450]: Session 15 logged out. Waiting for processes to exit. May 13 23:50:12.184300 systemd[1]: Started sshd@15-10.0.0.85:22-10.0.0.1:54868.service - OpenSSH per-connection server daemon (10.0.0.1:54868). May 13 23:50:12.186203 systemd-logind[1450]: Removed session 15. May 13 23:50:12.245096 sshd[4033]: Accepted publickey for core from 10.0.0.1 port 54868 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:50:12.247252 sshd-session[4033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:50:12.254771 systemd-logind[1450]: New session 16 of user core. May 13 23:50:12.264513 systemd[1]: Started session-16.scope - Session 16 of User core. 
May 13 23:50:13.732341 sshd[4036]: Connection closed by 10.0.0.1 port 54868 May 13 23:50:13.732949 sshd-session[4033]: pam_unix(sshd:session): session closed for user core May 13 23:50:13.750801 systemd[1]: sshd@15-10.0.0.85:22-10.0.0.1:54868.service: Deactivated successfully. May 13 23:50:13.756136 systemd[1]: session-16.scope: Deactivated successfully. May 13 23:50:13.759588 systemd-logind[1450]: Session 16 logged out. Waiting for processes to exit. May 13 23:50:13.762733 systemd[1]: Started sshd@16-10.0.0.85:22-10.0.0.1:35576.service - OpenSSH per-connection server daemon (10.0.0.1:35576). May 13 23:50:13.764065 systemd-logind[1450]: Removed session 16. May 13 23:50:13.822852 sshd[4070]: Accepted publickey for core from 10.0.0.1 port 35576 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:50:13.824670 sshd-session[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:50:13.830804 systemd-logind[1450]: New session 17 of user core. May 13 23:50:13.838514 systemd[1]: Started session-17.scope - Session 17 of User core. May 13 23:50:14.091622 sshd[4073]: Connection closed by 10.0.0.1 port 35576 May 13 23:50:14.092524 sshd-session[4070]: pam_unix(sshd:session): session closed for user core May 13 23:50:14.103093 systemd[1]: sshd@16-10.0.0.85:22-10.0.0.1:35576.service: Deactivated successfully. May 13 23:50:14.107390 systemd[1]: session-17.scope: Deactivated successfully. May 13 23:50:14.110296 systemd-logind[1450]: Session 17 logged out. Waiting for processes to exit. May 13 23:50:14.114111 systemd[1]: Started sshd@17-10.0.0.85:22-10.0.0.1:35584.service - OpenSSH per-connection server daemon (10.0.0.1:35584). May 13 23:50:14.116810 systemd-logind[1450]: Removed session 17. 
May 13 23:50:14.174554 sshd[4084]: Accepted publickey for core from 10.0.0.1 port 35584 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:50:14.176472 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:50:14.183974 systemd-logind[1450]: New session 18 of user core. May 13 23:50:14.195522 systemd[1]: Started session-18.scope - Session 18 of User core. May 13 23:50:14.317440 sshd[4087]: Connection closed by 10.0.0.1 port 35584 May 13 23:50:14.317853 sshd-session[4084]: pam_unix(sshd:session): session closed for user core May 13 23:50:14.323341 systemd[1]: sshd@17-10.0.0.85:22-10.0.0.1:35584.service: Deactivated successfully. May 13 23:50:14.325476 systemd[1]: session-18.scope: Deactivated successfully. May 13 23:50:14.326324 systemd-logind[1450]: Session 18 logged out. Waiting for processes to exit. May 13 23:50:14.327182 systemd-logind[1450]: Removed session 18. May 13 23:50:19.333961 systemd[1]: Started sshd@18-10.0.0.85:22-10.0.0.1:35596.service - OpenSSH per-connection server daemon (10.0.0.1:35596). May 13 23:50:19.390435 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 35596 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:50:19.392447 sshd-session[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:50:19.397308 systemd-logind[1450]: New session 19 of user core. May 13 23:50:19.408528 systemd[1]: Started session-19.scope - Session 19 of User core. May 13 23:50:19.533852 sshd[4106]: Connection closed by 10.0.0.1 port 35596 May 13 23:50:19.534255 sshd-session[4104]: pam_unix(sshd:session): session closed for user core May 13 23:50:19.537710 systemd[1]: sshd@18-10.0.0.85:22-10.0.0.1:35596.service: Deactivated successfully. May 13 23:50:19.540012 systemd[1]: session-19.scope: Deactivated successfully. May 13 23:50:19.542280 systemd-logind[1450]: Session 19 logged out. Waiting for processes to exit. 
May 13 23:50:19.543514 systemd-logind[1450]: Removed session 19. May 13 23:50:24.546574 systemd[1]: Started sshd@19-10.0.0.85:22-10.0.0.1:36122.service - OpenSSH per-connection server daemon (10.0.0.1:36122). May 13 23:50:24.600050 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 36122 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:50:24.601632 sshd-session[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:50:24.606300 systemd-logind[1450]: New session 20 of user core. May 13 23:50:24.616502 systemd[1]: Started session-20.scope - Session 20 of User core. May 13 23:50:24.746670 sshd[4123]: Connection closed by 10.0.0.1 port 36122 May 13 23:50:24.747546 sshd-session[4121]: pam_unix(sshd:session): session closed for user core May 13 23:50:24.751828 systemd[1]: sshd@19-10.0.0.85:22-10.0.0.1:36122.service: Deactivated successfully. May 13 23:50:24.753703 systemd[1]: session-20.scope: Deactivated successfully. May 13 23:50:24.754514 systemd-logind[1450]: Session 20 logged out. Waiting for processes to exit. May 13 23:50:24.755711 systemd-logind[1450]: Removed session 20. May 13 23:50:29.761985 systemd[1]: Started sshd@20-10.0.0.85:22-10.0.0.1:36134.service - OpenSSH per-connection server daemon (10.0.0.1:36134). May 13 23:50:29.823858 sshd[4139]: Accepted publickey for core from 10.0.0.1 port 36134 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:50:29.826140 sshd-session[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:50:29.830997 systemd-logind[1450]: New session 21 of user core. May 13 23:50:29.842489 systemd[1]: Started session-21.scope - Session 21 of User core. 
May 13 23:50:29.957308 sshd[4141]: Connection closed by 10.0.0.1 port 36134 May 13 23:50:29.957667 sshd-session[4139]: pam_unix(sshd:session): session closed for user core May 13 23:50:29.971667 systemd[1]: sshd@20-10.0.0.85:22-10.0.0.1:36134.service: Deactivated successfully. May 13 23:50:29.973834 systemd[1]: session-21.scope: Deactivated successfully. May 13 23:50:29.974685 systemd-logind[1450]: Session 21 logged out. Waiting for processes to exit. May 13 23:50:29.978258 systemd[1]: Started sshd@21-10.0.0.85:22-10.0.0.1:36138.service - OpenSSH per-connection server daemon (10.0.0.1:36138). May 13 23:50:29.980754 systemd-logind[1450]: Removed session 21. May 13 23:50:30.034912 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 36138 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:50:30.036176 sshd-session[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:50:30.040762 systemd-logind[1450]: New session 22 of user core. May 13 23:50:30.050469 systemd[1]: Started session-22.scope - Session 22 of User core. May 13 23:50:32.444739 containerd[1461]: time="2025-05-13T23:50:32.444572899Z" level=info msg="StopContainer for \"d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0\" with timeout 30 (s)" May 13 23:50:32.448405 containerd[1461]: time="2025-05-13T23:50:32.448377239Z" level=info msg="Stop container \"d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0\" with signal terminated" May 13 23:50:32.470608 systemd[1]: cri-containerd-d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0.scope: Deactivated successfully. 
May 13 23:50:32.473716 containerd[1461]: time="2025-05-13T23:50:32.472546272Z" level=info msg="received exit event container_id:\"d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0\" id:\"d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0\" pid:3174 exited_at:{seconds:1747180232 nanos:472126221}" May 13 23:50:32.473716 containerd[1461]: time="2025-05-13T23:50:32.472708237Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0\" id:\"d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0\" pid:3174 exited_at:{seconds:1747180232 nanos:472126221}" May 13 23:50:32.495872 containerd[1461]: time="2025-05-13T23:50:32.495812082Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea\" id:\"cb4fa9234ea458a7584eb957398489b79c1a3a717141133bfe6293007645c9f2\" pid:4183 exited_at:{seconds:1747180232 nanos:495510034}" May 13 23:50:32.499932 containerd[1461]: time="2025-05-13T23:50:32.499777826Z" level=info msg="StopContainer for \"4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea\" with timeout 2 (s)" May 13 23:50:32.500398 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0-rootfs.mount: Deactivated successfully. 
May 13 23:50:32.500783 containerd[1461]: time="2025-05-13T23:50:32.500647369Z" level=info msg="Stop container \"4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea\" with signal terminated" May 13 23:50:32.503537 containerd[1461]: time="2025-05-13T23:50:32.503475003Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:50:32.507086 systemd-networkd[1405]: lxc_health: Link DOWN May 13 23:50:32.507093 systemd-networkd[1405]: lxc_health: Lost carrier May 13 23:50:32.521160 containerd[1461]: time="2025-05-13T23:50:32.521099345Z" level=info msg="StopContainer for \"d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0\" returns successfully" May 13 23:50:32.521745 containerd[1461]: time="2025-05-13T23:50:32.521715881Z" level=info msg="StopPodSandbox for \"392401a4276824e21b894728fdc6f463ae85b8c148a2406dfd0780a0ec87e616\"" May 13 23:50:32.521865 containerd[1461]: time="2025-05-13T23:50:32.521847244Z" level=info msg="Container to stop \"d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:50:32.523097 systemd[1]: cri-containerd-4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea.scope: Deactivated successfully. May 13 23:50:32.523441 systemd[1]: cri-containerd-4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea.scope: Consumed 7.341s CPU time, 126.7M memory peak, 1.4M read from disk, 12.9M written to disk. 
May 13 23:50:32.531706 containerd[1461]: time="2025-05-13T23:50:32.531662781Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea\" id:\"4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea\" pid:3208 exited_at:{seconds:1747180232 nanos:531331733}" May 13 23:50:32.532618 containerd[1461]: time="2025-05-13T23:50:32.531772824Z" level=info msg="received exit event container_id:\"4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea\" id:\"4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea\" pid:3208 exited_at:{seconds:1747180232 nanos:531331733}" May 13 23:50:32.533441 systemd[1]: cri-containerd-392401a4276824e21b894728fdc6f463ae85b8c148a2406dfd0780a0ec87e616.scope: Deactivated successfully. May 13 23:50:32.538926 containerd[1461]: time="2025-05-13T23:50:32.538875570Z" level=info msg="TaskExit event in podsandbox handler container_id:\"392401a4276824e21b894728fdc6f463ae85b8c148a2406dfd0780a0ec87e616\" id:\"392401a4276824e21b894728fdc6f463ae85b8c148a2406dfd0780a0ec87e616\" pid:2770 exit_status:137 exited_at:{seconds:1747180232 nanos:538563162}" May 13 23:50:32.550168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea-rootfs.mount: Deactivated successfully. May 13 23:50:32.567626 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-392401a4276824e21b894728fdc6f463ae85b8c148a2406dfd0780a0ec87e616-rootfs.mount: Deactivated successfully. 
May 13 23:50:32.568002 containerd[1461]: time="2025-05-13T23:50:32.567967213Z" level=info msg="StopContainer for \"4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea\" returns successfully" May 13 23:50:32.568531 containerd[1461]: time="2025-05-13T23:50:32.568441825Z" level=info msg="StopPodSandbox for \"5dfc8861b447ba84ab1955b9bab288f1ba6e85f5c504916a4ab0ffb1ee677213\"" May 13 23:50:32.568531 containerd[1461]: time="2025-05-13T23:50:32.568515747Z" level=info msg="Container to stop \"4f07b58fc23384a3428501bf46a3b07fa0a58d3a9283dbfa2e57dc4541e614a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:50:32.568531 containerd[1461]: time="2025-05-13T23:50:32.568528267Z" level=info msg="Container to stop \"af81a09e13df7a47189ebc92021cdd6a2fb8e94fed356591b6e558f8a5a8b021\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:50:32.568627 containerd[1461]: time="2025-05-13T23:50:32.568536788Z" level=info msg="Container to stop \"4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:50:32.568627 containerd[1461]: time="2025-05-13T23:50:32.568546308Z" level=info msg="Container to stop \"418d8df77c28735ac0116eb2f9970cde870c2e4aa7ba87e306514df339d1c4b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:50:32.568627 containerd[1461]: time="2025-05-13T23:50:32.568553828Z" level=info msg="Container to stop \"72384b1e5c9f6a84821d9bdc9daf000764d91636f64b67927de91840154beaaa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:50:32.569653 containerd[1461]: time="2025-05-13T23:50:32.569627656Z" level=info msg="shim disconnected" id=392401a4276824e21b894728fdc6f463ae85b8c148a2406dfd0780a0ec87e616 namespace=k8s.io May 13 23:50:32.569846 containerd[1461]: time="2025-05-13T23:50:32.569747939Z" level=warning msg="cleaning up after shim disconnected" 
id=392401a4276824e21b894728fdc6f463ae85b8c148a2406dfd0780a0ec87e616 namespace=k8s.io May 13 23:50:32.569846 containerd[1461]: time="2025-05-13T23:50:32.569784140Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 23:50:32.574926 systemd[1]: cri-containerd-5dfc8861b447ba84ab1955b9bab288f1ba6e85f5c504916a4ab0ffb1ee677213.scope: Deactivated successfully. May 13 23:50:32.591514 containerd[1461]: time="2025-05-13T23:50:32.588188743Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5dfc8861b447ba84ab1955b9bab288f1ba6e85f5c504916a4ab0ffb1ee677213\" id:\"5dfc8861b447ba84ab1955b9bab288f1ba6e85f5c504916a4ab0ffb1ee677213\" pid:2712 exit_status:137 exited_at:{seconds:1747180232 nanos:577279137}" May 13 23:50:32.591514 containerd[1461]: time="2025-05-13T23:50:32.591182421Z" level=info msg="received exit event sandbox_id:\"392401a4276824e21b894728fdc6f463ae85b8c148a2406dfd0780a0ec87e616\" exit_status:137 exited_at:{seconds:1747180232 nanos:538563162}" May 13 23:50:32.590758 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-392401a4276824e21b894728fdc6f463ae85b8c148a2406dfd0780a0ec87e616-shm.mount: Deactivated successfully. May 13 23:50:32.594486 containerd[1461]: time="2025-05-13T23:50:32.594440946Z" level=info msg="TearDown network for sandbox \"392401a4276824e21b894728fdc6f463ae85b8c148a2406dfd0780a0ec87e616\" successfully" May 13 23:50:32.594486 containerd[1461]: time="2025-05-13T23:50:32.594474467Z" level=info msg="StopPodSandbox for \"392401a4276824e21b894728fdc6f463ae85b8c148a2406dfd0780a0ec87e616\" returns successfully" May 13 23:50:32.600088 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5dfc8861b447ba84ab1955b9bab288f1ba6e85f5c504916a4ab0ffb1ee677213-rootfs.mount: Deactivated successfully. 
May 13 23:50:32.606013 containerd[1461]: time="2025-05-13T23:50:32.605963688Z" level=info msg="received exit event sandbox_id:\"5dfc8861b447ba84ab1955b9bab288f1ba6e85f5c504916a4ab0ffb1ee677213\" exit_status:137 exited_at:{seconds:1747180232 nanos:577279137}" May 13 23:50:32.606697 containerd[1461]: time="2025-05-13T23:50:32.606625426Z" level=info msg="TearDown network for sandbox \"5dfc8861b447ba84ab1955b9bab288f1ba6e85f5c504916a4ab0ffb1ee677213\" successfully" May 13 23:50:32.606697 containerd[1461]: time="2025-05-13T23:50:32.606649106Z" level=info msg="StopPodSandbox for \"5dfc8861b447ba84ab1955b9bab288f1ba6e85f5c504916a4ab0ffb1ee677213\" returns successfully" May 13 23:50:32.615586 containerd[1461]: time="2025-05-13T23:50:32.615518699Z" level=info msg="shim disconnected" id=5dfc8861b447ba84ab1955b9bab288f1ba6e85f5c504916a4ab0ffb1ee677213 namespace=k8s.io May 13 23:50:32.615939 containerd[1461]: time="2025-05-13T23:50:32.615725464Z" level=warning msg="cleaning up after shim disconnected" id=5dfc8861b447ba84ab1955b9bab288f1ba6e85f5c504916a4ab0ffb1ee677213 namespace=k8s.io May 13 23:50:32.615939 containerd[1461]: time="2025-05-13T23:50:32.615778186Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 23:50:32.631109 kubelet[2552]: I0513 23:50:32.631055 2552 scope.go:117] "RemoveContainer" containerID="d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0" May 13 23:50:32.634175 containerd[1461]: time="2025-05-13T23:50:32.633019357Z" level=info msg="RemoveContainer for \"d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0\"" May 13 23:50:32.643514 containerd[1461]: time="2025-05-13T23:50:32.643476791Z" level=info msg="RemoveContainer for \"d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0\" returns successfully" May 13 23:50:32.643825 kubelet[2552]: I0513 23:50:32.643795 2552 scope.go:117] "RemoveContainer" containerID="d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0" May 13 23:50:32.644138 
containerd[1461]: time="2025-05-13T23:50:32.644052087Z" level=error msg="ContainerStatus for \"d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0\": not found" May 13 23:50:32.661086 kubelet[2552]: E0513 23:50:32.661019 2552 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0\": not found" containerID="d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0" May 13 23:50:32.661241 kubelet[2552]: I0513 23:50:32.661093 2552 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0"} err="failed to get container status \"d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0\": rpc error: code = NotFound desc = an error occurred when try to find container \"d41c9a00497e3de157b25320fa4055d29f615f0903ca6896efe869b0cd0351b0\": not found" May 13 23:50:32.661283 kubelet[2552]: I0513 23:50:32.661245 2552 scope.go:117] "RemoveContainer" containerID="4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea" May 13 23:50:32.663448 containerd[1461]: time="2025-05-13T23:50:32.663415114Z" level=info msg="RemoveContainer for \"4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea\"" May 13 23:50:32.667925 containerd[1461]: time="2025-05-13T23:50:32.667888951Z" level=info msg="RemoveContainer for \"4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea\" returns successfully" May 13 23:50:32.668164 kubelet[2552]: I0513 23:50:32.668136 2552 scope.go:117] "RemoveContainer" containerID="72384b1e5c9f6a84821d9bdc9daf000764d91636f64b67927de91840154beaaa" May 13 23:50:32.669472 containerd[1461]: 
time="2025-05-13T23:50:32.669449912Z" level=info msg="RemoveContainer for \"72384b1e5c9f6a84821d9bdc9daf000764d91636f64b67927de91840154beaaa\"" May 13 23:50:32.673011 containerd[1461]: time="2025-05-13T23:50:32.672914443Z" level=info msg="RemoveContainer for \"72384b1e5c9f6a84821d9bdc9daf000764d91636f64b67927de91840154beaaa\" returns successfully" May 13 23:50:32.673134 kubelet[2552]: I0513 23:50:32.673105 2552 scope.go:117] "RemoveContainer" containerID="af81a09e13df7a47189ebc92021cdd6a2fb8e94fed356591b6e558f8a5a8b021" May 13 23:50:32.675121 containerd[1461]: time="2025-05-13T23:50:32.675099780Z" level=info msg="RemoveContainer for \"af81a09e13df7a47189ebc92021cdd6a2fb8e94fed356591b6e558f8a5a8b021\"" May 13 23:50:32.680926 containerd[1461]: time="2025-05-13T23:50:32.680823970Z" level=info msg="RemoveContainer for \"af81a09e13df7a47189ebc92021cdd6a2fb8e94fed356591b6e558f8a5a8b021\" returns successfully" May 13 23:50:32.681020 kubelet[2552]: I0513 23:50:32.680995 2552 scope.go:117] "RemoveContainer" containerID="418d8df77c28735ac0116eb2f9970cde870c2e4aa7ba87e306514df339d1c4b8" May 13 23:50:32.682453 containerd[1461]: time="2025-05-13T23:50:32.682403891Z" level=info msg="RemoveContainer for \"418d8df77c28735ac0116eb2f9970cde870c2e4aa7ba87e306514df339d1c4b8\"" May 13 23:50:32.685302 containerd[1461]: time="2025-05-13T23:50:32.685252526Z" level=info msg="RemoveContainer for \"418d8df77c28735ac0116eb2f9970cde870c2e4aa7ba87e306514df339d1c4b8\" returns successfully" May 13 23:50:32.685519 kubelet[2552]: I0513 23:50:32.685491 2552 scope.go:117] "RemoveContainer" containerID="4f07b58fc23384a3428501bf46a3b07fa0a58d3a9283dbfa2e57dc4541e614a1" May 13 23:50:32.687298 containerd[1461]: time="2025-05-13T23:50:32.686762326Z" level=info msg="RemoveContainer for \"4f07b58fc23384a3428501bf46a3b07fa0a58d3a9283dbfa2e57dc4541e614a1\"" May 13 23:50:32.690015 containerd[1461]: time="2025-05-13T23:50:32.689971370Z" level=info msg="RemoveContainer for 
\"4f07b58fc23384a3428501bf46a3b07fa0a58d3a9283dbfa2e57dc4541e614a1\" returns successfully" May 13 23:50:32.690312 kubelet[2552]: I0513 23:50:32.690286 2552 scope.go:117] "RemoveContainer" containerID="4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea" May 13 23:50:32.690520 containerd[1461]: time="2025-05-13T23:50:32.690488383Z" level=error msg="ContainerStatus for \"4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea\": not found" May 13 23:50:32.690670 kubelet[2552]: E0513 23:50:32.690625 2552 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea\": not found" containerID="4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea" May 13 23:50:32.690698 kubelet[2552]: I0513 23:50:32.690679 2552 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea"} err="failed to get container status \"4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"4cece0e7bb658093c6b2ee40cd9ede2240bf26c1a69116a62b850618fe0ef7ea\": not found" May 13 23:50:32.690720 kubelet[2552]: I0513 23:50:32.690703 2552 scope.go:117] "RemoveContainer" containerID="72384b1e5c9f6a84821d9bdc9daf000764d91636f64b67927de91840154beaaa" May 13 23:50:32.690893 containerd[1461]: time="2025-05-13T23:50:32.690849713Z" level=error msg="ContainerStatus for \"72384b1e5c9f6a84821d9bdc9daf000764d91636f64b67927de91840154beaaa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"72384b1e5c9f6a84821d9bdc9daf000764d91636f64b67927de91840154beaaa\": not found" May 13 23:50:32.691002 kubelet[2552]: E0513 23:50:32.690984 2552 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"72384b1e5c9f6a84821d9bdc9daf000764d91636f64b67927de91840154beaaa\": not found" containerID="72384b1e5c9f6a84821d9bdc9daf000764d91636f64b67927de91840154beaaa" May 13 23:50:32.691035 kubelet[2552]: I0513 23:50:32.691006 2552 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"72384b1e5c9f6a84821d9bdc9daf000764d91636f64b67927de91840154beaaa"} err="failed to get container status \"72384b1e5c9f6a84821d9bdc9daf000764d91636f64b67927de91840154beaaa\": rpc error: code = NotFound desc = an error occurred when try to find container \"72384b1e5c9f6a84821d9bdc9daf000764d91636f64b67927de91840154beaaa\": not found" May 13 23:50:32.691035 kubelet[2552]: I0513 23:50:32.691018 2552 scope.go:117] "RemoveContainer" containerID="af81a09e13df7a47189ebc92021cdd6a2fb8e94fed356591b6e558f8a5a8b021" May 13 23:50:32.691165 containerd[1461]: time="2025-05-13T23:50:32.691137840Z" level=error msg="ContainerStatus for \"af81a09e13df7a47189ebc92021cdd6a2fb8e94fed356591b6e558f8a5a8b021\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"af81a09e13df7a47189ebc92021cdd6a2fb8e94fed356591b6e558f8a5a8b021\": not found" May 13 23:50:32.691234 kubelet[2552]: E0513 23:50:32.691206 2552 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"af81a09e13df7a47189ebc92021cdd6a2fb8e94fed356591b6e558f8a5a8b021\": not found" containerID="af81a09e13df7a47189ebc92021cdd6a2fb8e94fed356591b6e558f8a5a8b021" May 13 23:50:32.691234 kubelet[2552]: I0513 23:50:32.691221 2552 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"af81a09e13df7a47189ebc92021cdd6a2fb8e94fed356591b6e558f8a5a8b021"} err="failed to get container status \"af81a09e13df7a47189ebc92021cdd6a2fb8e94fed356591b6e558f8a5a8b021\": rpc error: code = NotFound desc = an error occurred when try to find container \"af81a09e13df7a47189ebc92021cdd6a2fb8e94fed356591b6e558f8a5a8b021\": not found" May 13 23:50:32.691234 kubelet[2552]: I0513 23:50:32.691231 2552 scope.go:117] "RemoveContainer" containerID="418d8df77c28735ac0116eb2f9970cde870c2e4aa7ba87e306514df339d1c4b8" May 13 23:50:32.691405 containerd[1461]: time="2025-05-13T23:50:32.691377847Z" level=error msg="ContainerStatus for \"418d8df77c28735ac0116eb2f9970cde870c2e4aa7ba87e306514df339d1c4b8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"418d8df77c28735ac0116eb2f9970cde870c2e4aa7ba87e306514df339d1c4b8\": not found" May 13 23:50:32.691486 kubelet[2552]: E0513 23:50:32.691472 2552 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"418d8df77c28735ac0116eb2f9970cde870c2e4aa7ba87e306514df339d1c4b8\": not found" containerID="418d8df77c28735ac0116eb2f9970cde870c2e4aa7ba87e306514df339d1c4b8" May 13 23:50:32.691514 kubelet[2552]: I0513 23:50:32.691490 2552 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"418d8df77c28735ac0116eb2f9970cde870c2e4aa7ba87e306514df339d1c4b8"} err="failed to get container status \"418d8df77c28735ac0116eb2f9970cde870c2e4aa7ba87e306514df339d1c4b8\": rpc error: code = NotFound desc = an error occurred when try to find container \"418d8df77c28735ac0116eb2f9970cde870c2e4aa7ba87e306514df339d1c4b8\": not found" May 13 23:50:32.691514 kubelet[2552]: I0513 23:50:32.691501 2552 scope.go:117] "RemoveContainer" containerID="4f07b58fc23384a3428501bf46a3b07fa0a58d3a9283dbfa2e57dc4541e614a1" May 13 23:50:32.691787 containerd[1461]: 
time="2025-05-13T23:50:32.691702455Z" level=error msg="ContainerStatus for \"4f07b58fc23384a3428501bf46a3b07fa0a58d3a9283dbfa2e57dc4541e614a1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4f07b58fc23384a3428501bf46a3b07fa0a58d3a9283dbfa2e57dc4541e614a1\": not found" May 13 23:50:32.691832 kubelet[2552]: E0513 23:50:32.691820 2552 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4f07b58fc23384a3428501bf46a3b07fa0a58d3a9283dbfa2e57dc4541e614a1\": not found" containerID="4f07b58fc23384a3428501bf46a3b07fa0a58d3a9283dbfa2e57dc4541e614a1" May 13 23:50:32.691864 kubelet[2552]: I0513 23:50:32.691836 2552 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4f07b58fc23384a3428501bf46a3b07fa0a58d3a9283dbfa2e57dc4541e614a1"} err="failed to get container status \"4f07b58fc23384a3428501bf46a3b07fa0a58d3a9283dbfa2e57dc4541e614a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"4f07b58fc23384a3428501bf46a3b07fa0a58d3a9283dbfa2e57dc4541e614a1\": not found" May 13 23:50:32.704858 kubelet[2552]: I0513 23:50:32.703891 2552 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/84930ac2-c2b2-4a58-a8f2-948cf6a63376-clustermesh-secrets\") pod \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\" (UID: \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " May 13 23:50:32.704858 kubelet[2552]: I0513 23:50:32.703985 2552 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-cni-path\") pod \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\" (UID: \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " May 13 23:50:32.704858 kubelet[2552]: I0513 23:50:32.704012 2552 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/84930ac2-c2b2-4a58-a8f2-948cf6a63376-hubble-tls\") pod \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\" (UID: \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " May 13 23:50:32.704858 kubelet[2552]: I0513 23:50:32.704027 2552 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-host-proc-sys-kernel\") pod \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\" (UID: \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " May 13 23:50:32.704858 kubelet[2552]: I0513 23:50:32.704045 2552 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-etc-cni-netd\") pod \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\" (UID: \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " May 13 23:50:32.704858 kubelet[2552]: I0513 23:50:32.704807 2552 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0951d21a-8460-4eb7-8698-25535eaa5485-cilium-config-path\") pod \"0951d21a-8460-4eb7-8698-25535eaa5485\" (UID: \"0951d21a-8460-4eb7-8698-25535eaa5485\") " May 13 23:50:32.705042 kubelet[2552]: I0513 23:50:32.704831 2552 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-lib-modules\") pod \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\" (UID: \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " May 13 23:50:32.705042 kubelet[2552]: I0513 23:50:32.704846 2552 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-xtables-lock\") pod \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\" (UID: 
\"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " May 13 23:50:32.705042 kubelet[2552]: I0513 23:50:32.704861 2552 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-cilium-cgroup\") pod \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\" (UID: \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " May 13 23:50:32.705042 kubelet[2552]: I0513 23:50:32.704877 2552 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-hostproc\") pod \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\" (UID: \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " May 13 23:50:32.705042 kubelet[2552]: I0513 23:50:32.704896 2552 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6gj6\" (UniqueName: \"kubernetes.io/projected/0951d21a-8460-4eb7-8698-25535eaa5485-kube-api-access-d6gj6\") pod \"0951d21a-8460-4eb7-8698-25535eaa5485\" (UID: \"0951d21a-8460-4eb7-8698-25535eaa5485\") " May 13 23:50:32.705042 kubelet[2552]: I0513 23:50:32.704914 2552 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-host-proc-sys-net\") pod \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\" (UID: \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " May 13 23:50:32.705162 kubelet[2552]: I0513 23:50:32.704928 2552 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-bpf-maps\") pod \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\" (UID: \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " May 13 23:50:32.705162 kubelet[2552]: I0513 23:50:32.704944 2552 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhr5s\" (UniqueName: 
\"kubernetes.io/projected/84930ac2-c2b2-4a58-a8f2-948cf6a63376-kube-api-access-nhr5s\") pod \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\" (UID: \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " May 13 23:50:32.705162 kubelet[2552]: I0513 23:50:32.704961 2552 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/84930ac2-c2b2-4a58-a8f2-948cf6a63376-cilium-config-path\") pod \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\" (UID: \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " May 13 23:50:32.705162 kubelet[2552]: I0513 23:50:32.704976 2552 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-cilium-run\") pod \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\" (UID: \"84930ac2-c2b2-4a58-a8f2-948cf6a63376\") " May 13 23:50:32.707794 kubelet[2552]: I0513 23:50:32.707748 2552 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "84930ac2-c2b2-4a58-a8f2-948cf6a63376" (UID: "84930ac2-c2b2-4a58-a8f2-948cf6a63376"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:50:32.707794 kubelet[2552]: I0513 23:50:32.707745 2552 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-cni-path" (OuterVolumeSpecName: "cni-path") pod "84930ac2-c2b2-4a58-a8f2-948cf6a63376" (UID: "84930ac2-c2b2-4a58-a8f2-948cf6a63376"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:50:32.707896 kubelet[2552]: I0513 23:50:32.707813 2552 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "84930ac2-c2b2-4a58-a8f2-948cf6a63376" (UID: "84930ac2-c2b2-4a58-a8f2-948cf6a63376"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:50:32.707896 kubelet[2552]: I0513 23:50:32.707830 2552 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "84930ac2-c2b2-4a58-a8f2-948cf6a63376" (UID: "84930ac2-c2b2-4a58-a8f2-948cf6a63376"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:50:32.707896 kubelet[2552]: I0513 23:50:32.707848 2552 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "84930ac2-c2b2-4a58-a8f2-948cf6a63376" (UID: "84930ac2-c2b2-4a58-a8f2-948cf6a63376"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:50:32.708052 kubelet[2552]: I0513 23:50:32.708031 2552 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-hostproc" (OuterVolumeSpecName: "hostproc") pod "84930ac2-c2b2-4a58-a8f2-948cf6a63376" (UID: "84930ac2-c2b2-4a58-a8f2-948cf6a63376"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:50:32.710926 kubelet[2552]: I0513 23:50:32.709716 2552 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0951d21a-8460-4eb7-8698-25535eaa5485-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0951d21a-8460-4eb7-8698-25535eaa5485" (UID: "0951d21a-8460-4eb7-8698-25535eaa5485"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 23:50:32.710926 kubelet[2552]: I0513 23:50:32.709778 2552 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "84930ac2-c2b2-4a58-a8f2-948cf6a63376" (UID: "84930ac2-c2b2-4a58-a8f2-948cf6a63376"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:50:32.710926 kubelet[2552]: I0513 23:50:32.709794 2552 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "84930ac2-c2b2-4a58-a8f2-948cf6a63376" (UID: "84930ac2-c2b2-4a58-a8f2-948cf6a63376"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:50:32.710926 kubelet[2552]: I0513 23:50:32.709811 2552 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "84930ac2-c2b2-4a58-a8f2-948cf6a63376" (UID: "84930ac2-c2b2-4a58-a8f2-948cf6a63376"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:50:32.710926 kubelet[2552]: I0513 23:50:32.709830 2552 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "84930ac2-c2b2-4a58-a8f2-948cf6a63376" (UID: "84930ac2-c2b2-4a58-a8f2-948cf6a63376"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:50:32.711091 kubelet[2552]: I0513 23:50:32.710484 2552 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84930ac2-c2b2-4a58-a8f2-948cf6a63376-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "84930ac2-c2b2-4a58-a8f2-948cf6a63376" (UID: "84930ac2-c2b2-4a58-a8f2-948cf6a63376"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 23:50:32.711316 kubelet[2552]: I0513 23:50:32.711237 2552 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0951d21a-8460-4eb7-8698-25535eaa5485-kube-api-access-d6gj6" (OuterVolumeSpecName: "kube-api-access-d6gj6") pod "0951d21a-8460-4eb7-8698-25535eaa5485" (UID: "0951d21a-8460-4eb7-8698-25535eaa5485"). InnerVolumeSpecName "kube-api-access-d6gj6". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 23:50:32.711488 kubelet[2552]: I0513 23:50:32.711449 2552 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84930ac2-c2b2-4a58-a8f2-948cf6a63376-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "84930ac2-c2b2-4a58-a8f2-948cf6a63376" (UID: "84930ac2-c2b2-4a58-a8f2-948cf6a63376"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 23:50:32.712022 kubelet[2552]: I0513 23:50:32.711991 2552 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84930ac2-c2b2-4a58-a8f2-948cf6a63376-kube-api-access-nhr5s" (OuterVolumeSpecName: "kube-api-access-nhr5s") pod "84930ac2-c2b2-4a58-a8f2-948cf6a63376" (UID: "84930ac2-c2b2-4a58-a8f2-948cf6a63376"). InnerVolumeSpecName "kube-api-access-nhr5s". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 23:50:32.712464 kubelet[2552]: I0513 23:50:32.712429 2552 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84930ac2-c2b2-4a58-a8f2-948cf6a63376-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "84930ac2-c2b2-4a58-a8f2-948cf6a63376" (UID: "84930ac2-c2b2-4a58-a8f2-948cf6a63376"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 23:50:32.805492 kubelet[2552]: I0513 23:50:32.805449 2552 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 13 23:50:32.805492 kubelet[2552]: I0513 23:50:32.805483 2552 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-d6gj6\" (UniqueName: \"kubernetes.io/projected/0951d21a-8460-4eb7-8698-25535eaa5485-kube-api-access-d6gj6\") on node \"localhost\" DevicePath \"\"" May 13 23:50:32.805492 kubelet[2552]: I0513 23:50:32.805498 2552 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/84930ac2-c2b2-4a58-a8f2-948cf6a63376-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 23:50:32.805686 kubelet[2552]: I0513 23:50:32.805507 2552 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-cilium-run\") on node \"localhost\" DevicePath \"\"" May 13 23:50:32.805686 kubelet[2552]: I0513 23:50:32.805517 2552 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 13 23:50:32.805686 kubelet[2552]: I0513 23:50:32.805525 2552 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nhr5s\" (UniqueName: \"kubernetes.io/projected/84930ac2-c2b2-4a58-a8f2-948cf6a63376-kube-api-access-nhr5s\") on node \"localhost\" DevicePath \"\"" May 13 23:50:32.805686 kubelet[2552]: I0513 23:50:32.805546 2552 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/84930ac2-c2b2-4a58-a8f2-948cf6a63376-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 13 23:50:32.805686 kubelet[2552]: I0513 23:50:32.805554 2552 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 13 23:50:32.805686 kubelet[2552]: I0513 23:50:32.805562 2552 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-cni-path\") on node \"localhost\" DevicePath \"\"" May 13 23:50:32.805686 kubelet[2552]: I0513 23:50:32.805570 2552 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/84930ac2-c2b2-4a58-a8f2-948cf6a63376-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 13 23:50:32.805686 kubelet[2552]: I0513 23:50:32.805578 2552 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-host-proc-sys-kernel\") on node \"localhost\" 
DevicePath \"\"" May 13 23:50:32.805861 kubelet[2552]: I0513 23:50:32.805586 2552 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0951d21a-8460-4eb7-8698-25535eaa5485-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 23:50:32.805861 kubelet[2552]: I0513 23:50:32.805595 2552 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-lib-modules\") on node \"localhost\" DevicePath \"\"" May 13 23:50:32.805861 kubelet[2552]: I0513 23:50:32.805603 2552 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 13 23:50:32.805861 kubelet[2552]: I0513 23:50:32.805611 2552 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-hostproc\") on node \"localhost\" DevicePath \"\"" May 13 23:50:32.805861 kubelet[2552]: I0513 23:50:32.805619 2552 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/84930ac2-c2b2-4a58-a8f2-948cf6a63376-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 13 23:50:32.930397 systemd[1]: Removed slice kubepods-besteffort-pod0951d21a_8460_4eb7_8698_25535eaa5485.slice - libcontainer container kubepods-besteffort-pod0951d21a_8460_4eb7_8698_25535eaa5485.slice. May 13 23:50:32.942375 systemd[1]: Removed slice kubepods-burstable-pod84930ac2_c2b2_4a58_a8f2_948cf6a63376.slice - libcontainer container kubepods-burstable-pod84930ac2_c2b2_4a58_a8f2_948cf6a63376.slice. May 13 23:50:32.942468 systemd[1]: kubepods-burstable-pod84930ac2_c2b2_4a58_a8f2_948cf6a63376.slice: Consumed 7.520s CPU time, 127M memory peak, 1.5M read from disk, 16.1M written to disk. 
May 13 23:50:33.499910 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5dfc8861b447ba84ab1955b9bab288f1ba6e85f5c504916a4ab0ffb1ee677213-shm.mount: Deactivated successfully. May 13 23:50:33.500025 systemd[1]: var-lib-kubelet-pods-0951d21a\x2d8460\x2d4eb7\x2d8698\x2d25535eaa5485-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd6gj6.mount: Deactivated successfully. May 13 23:50:33.500083 systemd[1]: var-lib-kubelet-pods-84930ac2\x2dc2b2\x2d4a58\x2da8f2\x2d948cf6a63376-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnhr5s.mount: Deactivated successfully. May 13 23:50:33.500139 systemd[1]: var-lib-kubelet-pods-84930ac2\x2dc2b2\x2d4a58\x2da8f2\x2d948cf6a63376-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 23:50:33.500190 systemd[1]: var-lib-kubelet-pods-84930ac2\x2dc2b2\x2d4a58\x2da8f2\x2d948cf6a63376-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 23:50:33.952226 containerd[1461]: time="2025-05-13T23:50:33.952100557Z" level=info msg="TaskExit event in podsandbox handler exit_status:137 exited_at:{seconds:1747180232 nanos:538563162}" May 13 23:50:34.401047 kubelet[2552]: I0513 23:50:34.400997 2552 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0951d21a-8460-4eb7-8698-25535eaa5485" path="/var/lib/kubelet/pods/0951d21a-8460-4eb7-8698-25535eaa5485/volumes" May 13 23:50:34.401500 sshd[4156]: Connection closed by 10.0.0.1 port 36138 May 13 23:50:34.402186 kubelet[2552]: I0513 23:50:34.401852 2552 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84930ac2-c2b2-4a58-a8f2-948cf6a63376" path="/var/lib/kubelet/pods/84930ac2-c2b2-4a58-a8f2-948cf6a63376/volumes" May 13 23:50:34.401984 sshd-session[4153]: pam_unix(sshd:session): session closed for user core May 13 23:50:34.419921 systemd[1]: sshd@21-10.0.0.85:22-10.0.0.1:36138.service: Deactivated successfully. 
May 13 23:50:34.421615 systemd[1]: session-22.scope: Deactivated successfully. May 13 23:50:34.421841 systemd[1]: session-22.scope: Consumed 1.695s CPU time, 30.1M memory peak. May 13 23:50:34.422412 systemd-logind[1450]: Session 22 logged out. Waiting for processes to exit. May 13 23:50:34.424633 systemd[1]: Started sshd@22-10.0.0.85:22-10.0.0.1:38168.service - OpenSSH per-connection server daemon (10.0.0.1:38168). May 13 23:50:34.426789 systemd-logind[1450]: Removed session 22. May 13 23:50:34.475576 sshd[4307]: Accepted publickey for core from 10.0.0.1 port 38168 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:50:34.477018 sshd-session[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:50:34.482319 systemd-logind[1450]: New session 23 of user core. May 13 23:50:34.494508 systemd[1]: Started session-23.scope - Session 23 of User core. May 13 23:50:35.300299 sshd[4310]: Connection closed by 10.0.0.1 port 38168 May 13 23:50:35.299470 sshd-session[4307]: pam_unix(sshd:session): session closed for user core May 13 23:50:35.309066 kubelet[2552]: E0513 23:50:35.309020 2552 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="84930ac2-c2b2-4a58-a8f2-948cf6a63376" containerName="mount-bpf-fs" May 13 23:50:35.309066 kubelet[2552]: E0513 23:50:35.309052 2552 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="84930ac2-c2b2-4a58-a8f2-948cf6a63376" containerName="clean-cilium-state" May 13 23:50:35.309066 kubelet[2552]: E0513 23:50:35.309060 2552 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0951d21a-8460-4eb7-8698-25535eaa5485" containerName="cilium-operator" May 13 23:50:35.309066 kubelet[2552]: E0513 23:50:35.309066 2552 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="84930ac2-c2b2-4a58-a8f2-948cf6a63376" containerName="cilium-agent" May 13 23:50:35.309066 kubelet[2552]: E0513 23:50:35.309071 2552 cpu_manager.go:395] "RemoveStaleState: 
removing container" podUID="84930ac2-c2b2-4a58-a8f2-948cf6a63376" containerName="mount-cgroup" May 13 23:50:35.309066 kubelet[2552]: E0513 23:50:35.309077 2552 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="84930ac2-c2b2-4a58-a8f2-948cf6a63376" containerName="apply-sysctl-overwrites" May 13 23:50:35.309388 kubelet[2552]: I0513 23:50:35.309103 2552 memory_manager.go:354] "RemoveStaleState removing state" podUID="0951d21a-8460-4eb7-8698-25535eaa5485" containerName="cilium-operator" May 13 23:50:35.309388 kubelet[2552]: I0513 23:50:35.309109 2552 memory_manager.go:354] "RemoveStaleState removing state" podUID="84930ac2-c2b2-4a58-a8f2-948cf6a63376" containerName="cilium-agent" May 13 23:50:35.318970 systemd[1]: sshd@22-10.0.0.85:22-10.0.0.1:38168.service: Deactivated successfully. May 13 23:50:35.324125 systemd[1]: session-23.scope: Deactivated successfully. May 13 23:50:35.334332 systemd-logind[1450]: Session 23 logged out. Waiting for processes to exit. May 13 23:50:35.341630 systemd[1]: Started sshd@23-10.0.0.85:22-10.0.0.1:38174.service - OpenSSH per-connection server daemon (10.0.0.1:38174). May 13 23:50:35.342559 systemd-logind[1450]: Removed session 23. May 13 23:50:35.353003 systemd[1]: Created slice kubepods-burstable-pod39f12067_32f0_455b_9231_28646bd2f44e.slice - libcontainer container kubepods-burstable-pod39f12067_32f0_455b_9231_28646bd2f44e.slice. May 13 23:50:35.397150 sshd[4321]: Accepted publickey for core from 10.0.0.1 port 38174 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:50:35.398604 sshd-session[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:50:35.402403 systemd-logind[1450]: New session 24 of user core. May 13 23:50:35.414511 systemd[1]: Started session-24.scope - Session 24 of User core. 
May 13 23:50:35.422693 kubelet[2552]: I0513 23:50:35.422653 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/39f12067-32f0-455b-9231-28646bd2f44e-cilium-ipsec-secrets\") pod \"cilium-znfsw\" (UID: \"39f12067-32f0-455b-9231-28646bd2f44e\") " pod="kube-system/cilium-znfsw" May 13 23:50:35.422693 kubelet[2552]: I0513 23:50:35.422697 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/39f12067-32f0-455b-9231-28646bd2f44e-host-proc-sys-kernel\") pod \"cilium-znfsw\" (UID: \"39f12067-32f0-455b-9231-28646bd2f44e\") " pod="kube-system/cilium-znfsw" May 13 23:50:35.423002 kubelet[2552]: I0513 23:50:35.422717 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/39f12067-32f0-455b-9231-28646bd2f44e-hostproc\") pod \"cilium-znfsw\" (UID: \"39f12067-32f0-455b-9231-28646bd2f44e\") " pod="kube-system/cilium-znfsw" May 13 23:50:35.423002 kubelet[2552]: I0513 23:50:35.422735 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/39f12067-32f0-455b-9231-28646bd2f44e-cilium-run\") pod \"cilium-znfsw\" (UID: \"39f12067-32f0-455b-9231-28646bd2f44e\") " pod="kube-system/cilium-znfsw" May 13 23:50:35.423002 kubelet[2552]: I0513 23:50:35.422752 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/39f12067-32f0-455b-9231-28646bd2f44e-host-proc-sys-net\") pod \"cilium-znfsw\" (UID: \"39f12067-32f0-455b-9231-28646bd2f44e\") " pod="kube-system/cilium-znfsw" May 13 23:50:35.423002 kubelet[2552]: I0513 23:50:35.422775 2552 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39f12067-32f0-455b-9231-28646bd2f44e-xtables-lock\") pod \"cilium-znfsw\" (UID: \"39f12067-32f0-455b-9231-28646bd2f44e\") " pod="kube-system/cilium-znfsw" May 13 23:50:35.423002 kubelet[2552]: I0513 23:50:35.422795 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39f12067-32f0-455b-9231-28646bd2f44e-lib-modules\") pod \"cilium-znfsw\" (UID: \"39f12067-32f0-455b-9231-28646bd2f44e\") " pod="kube-system/cilium-znfsw" May 13 23:50:35.423002 kubelet[2552]: I0513 23:50:35.422811 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/39f12067-32f0-455b-9231-28646bd2f44e-clustermesh-secrets\") pod \"cilium-znfsw\" (UID: \"39f12067-32f0-455b-9231-28646bd2f44e\") " pod="kube-system/cilium-znfsw" May 13 23:50:35.423150 kubelet[2552]: I0513 23:50:35.422830 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/39f12067-32f0-455b-9231-28646bd2f44e-cilium-config-path\") pod \"cilium-znfsw\" (UID: \"39f12067-32f0-455b-9231-28646bd2f44e\") " pod="kube-system/cilium-znfsw" May 13 23:50:35.423150 kubelet[2552]: I0513 23:50:35.422845 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/39f12067-32f0-455b-9231-28646bd2f44e-hubble-tls\") pod \"cilium-znfsw\" (UID: \"39f12067-32f0-455b-9231-28646bd2f44e\") " pod="kube-system/cilium-znfsw" May 13 23:50:35.423150 kubelet[2552]: I0513 23:50:35.422917 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpps2\" (UniqueName: 
\"kubernetes.io/projected/39f12067-32f0-455b-9231-28646bd2f44e-kube-api-access-vpps2\") pod \"cilium-znfsw\" (UID: \"39f12067-32f0-455b-9231-28646bd2f44e\") " pod="kube-system/cilium-znfsw" May 13 23:50:35.423150 kubelet[2552]: I0513 23:50:35.422939 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/39f12067-32f0-455b-9231-28646bd2f44e-bpf-maps\") pod \"cilium-znfsw\" (UID: \"39f12067-32f0-455b-9231-28646bd2f44e\") " pod="kube-system/cilium-znfsw" May 13 23:50:35.423150 kubelet[2552]: I0513 23:50:35.422957 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/39f12067-32f0-455b-9231-28646bd2f44e-cni-path\") pod \"cilium-znfsw\" (UID: \"39f12067-32f0-455b-9231-28646bd2f44e\") " pod="kube-system/cilium-znfsw" May 13 23:50:35.423150 kubelet[2552]: I0513 23:50:35.422974 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39f12067-32f0-455b-9231-28646bd2f44e-etc-cni-netd\") pod \"cilium-znfsw\" (UID: \"39f12067-32f0-455b-9231-28646bd2f44e\") " pod="kube-system/cilium-znfsw" May 13 23:50:35.423299 kubelet[2552]: I0513 23:50:35.422991 2552 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/39f12067-32f0-455b-9231-28646bd2f44e-cilium-cgroup\") pod \"cilium-znfsw\" (UID: \"39f12067-32f0-455b-9231-28646bd2f44e\") " pod="kube-system/cilium-znfsw" May 13 23:50:35.454353 kubelet[2552]: E0513 23:50:35.454314 2552 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 23:50:35.464291 sshd[4324]: Connection closed by 10.0.0.1 port 38174 May 13 
23:50:35.464572 sshd-session[4321]: pam_unix(sshd:session): session closed for user core May 13 23:50:35.482747 systemd[1]: sshd@23-10.0.0.85:22-10.0.0.1:38174.service: Deactivated successfully. May 13 23:50:35.486006 systemd[1]: session-24.scope: Deactivated successfully. May 13 23:50:35.487398 systemd-logind[1450]: Session 24 logged out. Waiting for processes to exit. May 13 23:50:35.489033 systemd[1]: Started sshd@24-10.0.0.85:22-10.0.0.1:38188.service - OpenSSH per-connection server daemon (10.0.0.1:38188). May 13 23:50:35.490010 systemd-logind[1450]: Removed session 24. May 13 23:50:35.541153 sshd[4330]: Accepted publickey for core from 10.0.0.1 port 38188 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:50:35.543421 sshd-session[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:50:35.550421 systemd-logind[1450]: New session 25 of user core. May 13 23:50:35.566485 systemd[1]: Started session-25.scope - Session 25 of User core. May 13 23:50:35.657932 containerd[1461]: time="2025-05-13T23:50:35.657551140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-znfsw,Uid:39f12067-32f0-455b-9231-28646bd2f44e,Namespace:kube-system,Attempt:0,}" May 13 23:50:35.678730 containerd[1461]: time="2025-05-13T23:50:35.678690529Z" level=info msg="connecting to shim e3b5a134ddf3669add02d7217a0c2c815b8941b91ada4f68271d3ab5f82d2e93" address="unix:///run/containerd/s/f488606980ecd15d200f758174e2d48e8308d19688814f94b98fbc6d0dd676c6" namespace=k8s.io protocol=ttrpc version=3 May 13 23:50:35.702488 systemd[1]: Started cri-containerd-e3b5a134ddf3669add02d7217a0c2c815b8941b91ada4f68271d3ab5f82d2e93.scope - libcontainer container e3b5a134ddf3669add02d7217a0c2c815b8941b91ada4f68271d3ab5f82d2e93. 
May 13 23:50:35.727559 containerd[1461]: time="2025-05-13T23:50:35.727512466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-znfsw,Uid:39f12067-32f0-455b-9231-28646bd2f44e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3b5a134ddf3669add02d7217a0c2c815b8941b91ada4f68271d3ab5f82d2e93\"" May 13 23:50:35.730333 containerd[1461]: time="2025-05-13T23:50:35.730177571Z" level=info msg="CreateContainer within sandbox \"e3b5a134ddf3669add02d7217a0c2c815b8941b91ada4f68271d3ab5f82d2e93\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 23:50:35.736052 containerd[1461]: time="2025-05-13T23:50:35.736008711Z" level=info msg="Container ee8b4e50a4a89e926efc5b9a2016b7da9784c4c7f0c008db55c5faec4f87e8b8: CDI devices from CRI Config.CDIDevices: []" May 13 23:50:35.741310 containerd[1461]: time="2025-05-13T23:50:35.741257518Z" level=info msg="CreateContainer within sandbox \"e3b5a134ddf3669add02d7217a0c2c815b8941b91ada4f68271d3ab5f82d2e93\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ee8b4e50a4a89e926efc5b9a2016b7da9784c4c7f0c008db55c5faec4f87e8b8\"" May 13 23:50:35.741924 containerd[1461]: time="2025-05-13T23:50:35.741899133Z" level=info msg="StartContainer for \"ee8b4e50a4a89e926efc5b9a2016b7da9784c4c7f0c008db55c5faec4f87e8b8\"" May 13 23:50:35.742755 containerd[1461]: time="2025-05-13T23:50:35.742729353Z" level=info msg="connecting to shim ee8b4e50a4a89e926efc5b9a2016b7da9784c4c7f0c008db55c5faec4f87e8b8" address="unix:///run/containerd/s/f488606980ecd15d200f758174e2d48e8308d19688814f94b98fbc6d0dd676c6" protocol=ttrpc version=3 May 13 23:50:35.761454 systemd[1]: Started cri-containerd-ee8b4e50a4a89e926efc5b9a2016b7da9784c4c7f0c008db55c5faec4f87e8b8.scope - libcontainer container ee8b4e50a4a89e926efc5b9a2016b7da9784c4c7f0c008db55c5faec4f87e8b8. 
May 13 23:50:35.785958 containerd[1461]: time="2025-05-13T23:50:35.785919435Z" level=info msg="StartContainer for \"ee8b4e50a4a89e926efc5b9a2016b7da9784c4c7f0c008db55c5faec4f87e8b8\" returns successfully"
May 13 23:50:35.820782 systemd[1]: cri-containerd-ee8b4e50a4a89e926efc5b9a2016b7da9784c4c7f0c008db55c5faec4f87e8b8.scope: Deactivated successfully.
May 13 23:50:35.821993 containerd[1461]: time="2025-05-13T23:50:35.821935583Z" level=info msg="received exit event container_id:\"ee8b4e50a4a89e926efc5b9a2016b7da9784c4c7f0c008db55c5faec4f87e8b8\" id:\"ee8b4e50a4a89e926efc5b9a2016b7da9784c4c7f0c008db55c5faec4f87e8b8\" pid:4400 exited_at:{seconds:1747180235 nanos:821641416}"
May 13 23:50:35.822188 containerd[1461]: time="2025-05-13T23:50:35.822158109Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee8b4e50a4a89e926efc5b9a2016b7da9784c4c7f0c008db55c5faec4f87e8b8\" id:\"ee8b4e50a4a89e926efc5b9a2016b7da9784c4c7f0c008db55c5faec4f87e8b8\" pid:4400 exited_at:{seconds:1747180235 nanos:821641416}"
May 13 23:50:36.679936 containerd[1461]: time="2025-05-13T23:50:36.679890385Z" level=info msg="CreateContainer within sandbox \"e3b5a134ddf3669add02d7217a0c2c815b8941b91ada4f68271d3ab5f82d2e93\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 13 23:50:36.691499 containerd[1461]: time="2025-05-13T23:50:36.691445696Z" level=info msg="Container a0658a5fc57a99b1a830b93817f2df4d885a5e07bbe048ef70de6721d75dc3c1: CDI devices from CRI Config.CDIDevices: []"
May 13 23:50:36.698096 containerd[1461]: time="2025-05-13T23:50:36.697995009Z" level=info msg="CreateContainer within sandbox \"e3b5a134ddf3669add02d7217a0c2c815b8941b91ada4f68271d3ab5f82d2e93\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a0658a5fc57a99b1a830b93817f2df4d885a5e07bbe048ef70de6721d75dc3c1\""
May 13 23:50:36.698962 containerd[1461]: time="2025-05-13T23:50:36.698940431Z" level=info msg="StartContainer for \"a0658a5fc57a99b1a830b93817f2df4d885a5e07bbe048ef70de6721d75dc3c1\""
May 13 23:50:36.699745 containerd[1461]: time="2025-05-13T23:50:36.699719890Z" level=info msg="connecting to shim a0658a5fc57a99b1a830b93817f2df4d885a5e07bbe048ef70de6721d75dc3c1" address="unix:///run/containerd/s/f488606980ecd15d200f758174e2d48e8308d19688814f94b98fbc6d0dd676c6" protocol=ttrpc version=3
May 13 23:50:36.727497 systemd[1]: Started cri-containerd-a0658a5fc57a99b1a830b93817f2df4d885a5e07bbe048ef70de6721d75dc3c1.scope - libcontainer container a0658a5fc57a99b1a830b93817f2df4d885a5e07bbe048ef70de6721d75dc3c1.
May 13 23:50:36.755364 containerd[1461]: time="2025-05-13T23:50:36.755324754Z" level=info msg="StartContainer for \"a0658a5fc57a99b1a830b93817f2df4d885a5e07bbe048ef70de6721d75dc3c1\" returns successfully"
May 13 23:50:36.764568 systemd[1]: cri-containerd-a0658a5fc57a99b1a830b93817f2df4d885a5e07bbe048ef70de6721d75dc3c1.scope: Deactivated successfully.
May 13 23:50:36.766162 containerd[1461]: time="2025-05-13T23:50:36.766127007Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a0658a5fc57a99b1a830b93817f2df4d885a5e07bbe048ef70de6721d75dc3c1\" id:\"a0658a5fc57a99b1a830b93817f2df4d885a5e07bbe048ef70de6721d75dc3c1\" pid:4445 exited_at:{seconds:1747180236 nanos:765771999}"
May 13 23:50:36.766256 containerd[1461]: time="2025-05-13T23:50:36.766217370Z" level=info msg="received exit event container_id:\"a0658a5fc57a99b1a830b93817f2df4d885a5e07bbe048ef70de6721d75dc3c1\" id:\"a0658a5fc57a99b1a830b93817f2df4d885a5e07bbe048ef70de6721d75dc3c1\" pid:4445 exited_at:{seconds:1747180236 nanos:765771999}"
May 13 23:50:36.781672 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0658a5fc57a99b1a830b93817f2df4d885a5e07bbe048ef70de6721d75dc3c1-rootfs.mount: Deactivated successfully.
May 13 23:50:37.670093 containerd[1461]: time="2025-05-13T23:50:37.670056187Z" level=info msg="CreateContainer within sandbox \"e3b5a134ddf3669add02d7217a0c2c815b8941b91ada4f68271d3ab5f82d2e93\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 23:50:37.702311 containerd[1461]: time="2025-05-13T23:50:37.701758871Z" level=info msg="Container ef919a1a5eca689b2d340cc9813255a7fabdb209354cb31b1caeb1869d894f0d: CDI devices from CRI Config.CDIDevices: []"
May 13 23:50:37.723805 containerd[1461]: time="2025-05-13T23:50:37.721880770Z" level=info msg="CreateContainer within sandbox \"e3b5a134ddf3669add02d7217a0c2c815b8941b91ada4f68271d3ab5f82d2e93\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ef919a1a5eca689b2d340cc9813255a7fabdb209354cb31b1caeb1869d894f0d\""
May 13 23:50:37.723805 containerd[1461]: time="2025-05-13T23:50:37.722517905Z" level=info msg="StartContainer for \"ef919a1a5eca689b2d340cc9813255a7fabdb209354cb31b1caeb1869d894f0d\""
May 13 23:50:37.726510 containerd[1461]: time="2025-05-13T23:50:37.726404833Z" level=info msg="connecting to shim ef919a1a5eca689b2d340cc9813255a7fabdb209354cb31b1caeb1869d894f0d" address="unix:///run/containerd/s/f488606980ecd15d200f758174e2d48e8308d19688814f94b98fbc6d0dd676c6" protocol=ttrpc version=3
May 13 23:50:37.748457 systemd[1]: Started cri-containerd-ef919a1a5eca689b2d340cc9813255a7fabdb209354cb31b1caeb1869d894f0d.scope - libcontainer container ef919a1a5eca689b2d340cc9813255a7fabdb209354cb31b1caeb1869d894f0d.
May 13 23:50:37.779837 containerd[1461]: time="2025-05-13T23:50:37.779705770Z" level=info msg="StartContainer for \"ef919a1a5eca689b2d340cc9813255a7fabdb209354cb31b1caeb1869d894f0d\" returns successfully"
May 13 23:50:37.781863 systemd[1]: cri-containerd-ef919a1a5eca689b2d340cc9813255a7fabdb209354cb31b1caeb1869d894f0d.scope: Deactivated successfully.
May 13 23:50:37.782350 containerd[1461]: time="2025-05-13T23:50:37.782203427Z" level=info msg="received exit event container_id:\"ef919a1a5eca689b2d340cc9813255a7fabdb209354cb31b1caeb1869d894f0d\" id:\"ef919a1a5eca689b2d340cc9813255a7fabdb209354cb31b1caeb1869d894f0d\" pid:4489 exited_at:{seconds:1747180237 nanos:781670815}"
May 13 23:50:37.782350 containerd[1461]: time="2025-05-13T23:50:37.782301229Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ef919a1a5eca689b2d340cc9813255a7fabdb209354cb31b1caeb1869d894f0d\" id:\"ef919a1a5eca689b2d340cc9813255a7fabdb209354cb31b1caeb1869d894f0d\" pid:4489 exited_at:{seconds:1747180237 nanos:781670815}"
May 13 23:50:37.802091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef919a1a5eca689b2d340cc9813255a7fabdb209354cb31b1caeb1869d894f0d-rootfs.mount: Deactivated successfully.
May 13 23:50:38.675997 containerd[1461]: time="2025-05-13T23:50:38.675951770Z" level=info msg="CreateContainer within sandbox \"e3b5a134ddf3669add02d7217a0c2c815b8941b91ada4f68271d3ab5f82d2e93\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 23:50:38.685087 containerd[1461]: time="2025-05-13T23:50:38.684344396Z" level=info msg="Container 680e9c7e4b6141b16a93bd6ed871c918d4d5dc38fb3de6225b37c72b32ef75c8: CDI devices from CRI Config.CDIDevices: []"
May 13 23:50:38.694009 containerd[1461]: time="2025-05-13T23:50:38.693968650Z" level=info msg="CreateContainer within sandbox \"e3b5a134ddf3669add02d7217a0c2c815b8941b91ada4f68271d3ab5f82d2e93\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"680e9c7e4b6141b16a93bd6ed871c918d4d5dc38fb3de6225b37c72b32ef75c8\""
May 13 23:50:38.694679 containerd[1461]: time="2025-05-13T23:50:38.694656185Z" level=info msg="StartContainer for \"680e9c7e4b6141b16a93bd6ed871c918d4d5dc38fb3de6225b37c72b32ef75c8\""
May 13 23:50:38.695608 containerd[1461]: time="2025-05-13T23:50:38.695567365Z" level=info msg="connecting to shim 680e9c7e4b6141b16a93bd6ed871c918d4d5dc38fb3de6225b37c72b32ef75c8" address="unix:///run/containerd/s/f488606980ecd15d200f758174e2d48e8308d19688814f94b98fbc6d0dd676c6" protocol=ttrpc version=3
May 13 23:50:38.720483 systemd[1]: Started cri-containerd-680e9c7e4b6141b16a93bd6ed871c918d4d5dc38fb3de6225b37c72b32ef75c8.scope - libcontainer container 680e9c7e4b6141b16a93bd6ed871c918d4d5dc38fb3de6225b37c72b32ef75c8.
May 13 23:50:38.744449 systemd[1]: cri-containerd-680e9c7e4b6141b16a93bd6ed871c918d4d5dc38fb3de6225b37c72b32ef75c8.scope: Deactivated successfully.
May 13 23:50:38.752667 containerd[1461]: time="2025-05-13T23:50:38.745777800Z" level=info msg="TaskExit event in podsandbox handler container_id:\"680e9c7e4b6141b16a93bd6ed871c918d4d5dc38fb3de6225b37c72b32ef75c8\" id:\"680e9c7e4b6141b16a93bd6ed871c918d4d5dc38fb3de6225b37c72b32ef75c8\" pid:4528 exited_at:{seconds:1747180238 nanos:744693936}"
May 13 23:50:38.811298 containerd[1461]: time="2025-05-13T23:50:38.811136172Z" level=info msg="received exit event container_id:\"680e9c7e4b6141b16a93bd6ed871c918d4d5dc38fb3de6225b37c72b32ef75c8\" id:\"680e9c7e4b6141b16a93bd6ed871c918d4d5dc38fb3de6225b37c72b32ef75c8\" pid:4528 exited_at:{seconds:1747180238 nanos:744693936}"
May 13 23:50:38.818756 containerd[1461]: time="2025-05-13T23:50:38.818691660Z" level=info msg="StartContainer for \"680e9c7e4b6141b16a93bd6ed871c918d4d5dc38fb3de6225b37c72b32ef75c8\" returns successfully"
May 13 23:50:38.831734 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-680e9c7e4b6141b16a93bd6ed871c918d4d5dc38fb3de6225b37c72b32ef75c8-rootfs.mount: Deactivated successfully.
May 13 23:50:39.681544 containerd[1461]: time="2025-05-13T23:50:39.681500258Z" level=info msg="CreateContainer within sandbox \"e3b5a134ddf3669add02d7217a0c2c815b8941b91ada4f68271d3ab5f82d2e93\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 23:50:39.699484 containerd[1461]: time="2025-05-13T23:50:39.699028277Z" level=info msg="Container 02f3938b54f257b4ed0e62f0fcd5730c82cbf03a0f7de5387668a3a596e3bd38: CDI devices from CRI Config.CDIDevices: []"
May 13 23:50:39.707598 containerd[1461]: time="2025-05-13T23:50:39.707549021Z" level=info msg="CreateContainer within sandbox \"e3b5a134ddf3669add02d7217a0c2c815b8941b91ada4f68271d3ab5f82d2e93\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"02f3938b54f257b4ed0e62f0fcd5730c82cbf03a0f7de5387668a3a596e3bd38\""
May 13 23:50:39.708667 containerd[1461]: time="2025-05-13T23:50:39.708350399Z" level=info msg="StartContainer for \"02f3938b54f257b4ed0e62f0fcd5730c82cbf03a0f7de5387668a3a596e3bd38\""
May 13 23:50:39.709417 containerd[1461]: time="2025-05-13T23:50:39.709380141Z" level=info msg="connecting to shim 02f3938b54f257b4ed0e62f0fcd5730c82cbf03a0f7de5387668a3a596e3bd38" address="unix:///run/containerd/s/f488606980ecd15d200f758174e2d48e8308d19688814f94b98fbc6d0dd676c6" protocol=ttrpc version=3
May 13 23:50:39.744483 systemd[1]: Started cri-containerd-02f3938b54f257b4ed0e62f0fcd5730c82cbf03a0f7de5387668a3a596e3bd38.scope - libcontainer container 02f3938b54f257b4ed0e62f0fcd5730c82cbf03a0f7de5387668a3a596e3bd38.
May 13 23:50:39.777758 containerd[1461]: time="2025-05-13T23:50:39.777343090Z" level=info msg="StartContainer for \"02f3938b54f257b4ed0e62f0fcd5730c82cbf03a0f7de5387668a3a596e3bd38\" returns successfully"
May 13 23:50:39.856107 containerd[1461]: time="2025-05-13T23:50:39.856060951Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02f3938b54f257b4ed0e62f0fcd5730c82cbf03a0f7de5387668a3a596e3bd38\" id:\"0b656c7f672d48fa0a887b9cf52cd1e50f501a373dfffdb2513d4094c7c1af6f\" pid:4596 exited_at:{seconds:1747180239 nanos:855738904}"
May 13 23:50:40.118298 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 13 23:50:40.704267 kubelet[2552]: I0513 23:50:40.704185 2552 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-znfsw" podStartSLOduration=5.704158637 podStartE2EDuration="5.704158637s" podCreationTimestamp="2025-05-13 23:50:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:50:40.704160597 +0000 UTC m=+80.411680982" watchObservedRunningTime="2025-05-13 23:50:40.704158637 +0000 UTC m=+80.411678902"
May 13 23:50:41.947878 containerd[1461]: time="2025-05-13T23:50:41.947823831Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02f3938b54f257b4ed0e62f0fcd5730c82cbf03a0f7de5387668a3a596e3bd38\" id:\"f32b7856bf6633cc9b4ea46e2e31c4e72358713307f1dff046bcaab759b3af92\" pid:4766 exit_status:1 exited_at:{seconds:1747180241 nanos:947517345}"
May 13 23:50:43.144488 systemd-networkd[1405]: lxc_health: Link UP
May 13 23:50:43.145456 systemd-networkd[1405]: lxc_health: Gained carrier
May 13 23:50:44.084040 containerd[1461]: time="2025-05-13T23:50:44.083990360Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02f3938b54f257b4ed0e62f0fcd5730c82cbf03a0f7de5387668a3a596e3bd38\" id:\"779bccf53c24028d6a2df2578d9893698035585d69f7448a35a294b0838fbcce\" pid:5133 exited_at:{seconds:1747180244 nanos:83680915}"
May 13 23:50:45.024706 systemd-networkd[1405]: lxc_health: Gained IPv6LL
May 13 23:50:46.344016 containerd[1461]: time="2025-05-13T23:50:46.343961224Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02f3938b54f257b4ed0e62f0fcd5730c82cbf03a0f7de5387668a3a596e3bd38\" id:\"944fa14f50e737f7f9adeb94f904ff84fbf6dad0c1f6e5399484fe88a738ee30\" pid:5171 exited_at:{seconds:1747180246 nanos:343592457}"
May 13 23:50:48.443934 containerd[1461]: time="2025-05-13T23:50:48.443872031Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02f3938b54f257b4ed0e62f0fcd5730c82cbf03a0f7de5387668a3a596e3bd38\" id:\"eb993d6b7356e46d5b164fef1c4d5966727afe5c337c8da3ab3eeacb6dff49fc\" pid:5202 exited_at:{seconds:1747180248 nanos:443477864}"
May 13 23:50:50.557818 containerd[1461]: time="2025-05-13T23:50:50.557663603Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02f3938b54f257b4ed0e62f0fcd5730c82cbf03a0f7de5387668a3a596e3bd38\" id:\"0d49ac15127dba032e75e273ab9c4bf1d3eb9027a393e8c96d5a385c4462183d\" pid:5225 exited_at:{seconds:1747180250 nanos:557352798}"
May 13 23:50:50.565592 sshd[4337]: Connection closed by 10.0.0.1 port 38188
May 13 23:50:50.566341 sshd-session[4330]: pam_unix(sshd:session): session closed for user core
May 13 23:50:50.570035 systemd[1]: sshd@24-10.0.0.85:22-10.0.0.1:38188.service: Deactivated successfully.
May 13 23:50:50.572428 systemd[1]: session-25.scope: Deactivated successfully.
May 13 23:50:50.573326 systemd-logind[1450]: Session 25 logged out. Waiting for processes to exit.
May 13 23:50:50.574324 systemd-logind[1450]: Removed session 25.