Sep 10 23:25:33.839046 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 10 23:25:33.839072 kernel: Linux version 6.6.105-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed Sep 10 22:05:18 -00 2025
Sep 10 23:25:33.839082 kernel: KASLR enabled
Sep 10 23:25:33.839089 kernel: efi: EFI v2.7 by EDK II
Sep 10 23:25:33.839095 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Sep 10 23:25:33.839101 kernel: random: crng init done
Sep 10 23:25:33.839108 kernel: secureboot: Secure boot disabled
Sep 10 23:25:33.839114 kernel: ACPI: Early table checksum verification disabled
Sep 10 23:25:33.839121 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Sep 10 23:25:33.839129 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 10 23:25:33.839136 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:25:33.839142 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:25:33.839148 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:25:33.839155 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:25:33.839163 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:25:33.839171 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:25:33.839178 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:25:33.839185 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:25:33.839192 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:25:33.839198 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 10 23:25:33.839205 kernel: NUMA: Failed to initialise from firmware
Sep 10 23:25:33.839225 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 23:25:33.839232 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Sep 10 23:25:33.839238 kernel: Zone ranges:
Sep 10 23:25:33.839245 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 23:25:33.839253 kernel: DMA32 empty
Sep 10 23:25:33.839260 kernel: Normal empty
Sep 10 23:25:33.839266 kernel: Movable zone start for each node
Sep 10 23:25:33.839272 kernel: Early memory node ranges
Sep 10 23:25:33.839279 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Sep 10 23:25:33.839286 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Sep 10 23:25:33.839293 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Sep 10 23:25:33.839299 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 10 23:25:33.839306 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 10 23:25:33.839313 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 10 23:25:33.839319 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 10 23:25:33.839326 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 10 23:25:33.839334 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 10 23:25:33.839340 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 23:25:33.839347 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 10 23:25:33.839357 kernel: psci: probing for conduit method from ACPI.
Sep 10 23:25:33.839364 kernel: psci: PSCIv1.1 detected in firmware.
Sep 10 23:25:33.839371 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 10 23:25:33.839379 kernel: psci: Trusted OS migration not required
Sep 10 23:25:33.839386 kernel: psci: SMC Calling Convention v1.1
Sep 10 23:25:33.839393 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 10 23:25:33.839400 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 10 23:25:33.839407 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 10 23:25:33.839414 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 10 23:25:33.839421 kernel: Detected PIPT I-cache on CPU0
Sep 10 23:25:33.839428 kernel: CPU features: detected: GIC system register CPU interface
Sep 10 23:25:33.839435 kernel: CPU features: detected: Hardware dirty bit management
Sep 10 23:25:33.839442 kernel: CPU features: detected: Spectre-v4
Sep 10 23:25:33.839450 kernel: CPU features: detected: Spectre-BHB
Sep 10 23:25:33.839457 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 10 23:25:33.839464 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 10 23:25:33.839471 kernel: CPU features: detected: ARM erratum 1418040
Sep 10 23:25:33.839478 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 10 23:25:33.839485 kernel: alternatives: applying boot alternatives
Sep 10 23:25:33.839493 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=812c036cb680f79e5ca620d89a6ff10a489f95d8e789d774dfb3714b0f5aa257
Sep 10 23:25:33.839513 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 10 23:25:33.839520 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 10 23:25:33.839541 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 10 23:25:33.839548 kernel: Fallback order for Node 0: 0
Sep 10 23:25:33.839557 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 10 23:25:33.839564 kernel: Policy zone: DMA
Sep 10 23:25:33.839571 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 10 23:25:33.839577 kernel: software IO TLB: area num 4.
Sep 10 23:25:33.839584 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 10 23:25:33.839591 kernel: Memory: 2387412K/2572288K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38400K init, 897K bss, 184876K reserved, 0K cma-reserved)
Sep 10 23:25:33.839598 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 10 23:25:33.839605 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 10 23:25:33.839612 kernel: rcu: RCU event tracing is enabled.
Sep 10 23:25:33.839619 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 10 23:25:33.839626 kernel: Trampoline variant of Tasks RCU enabled.
Sep 10 23:25:33.839633 kernel: Tracing variant of Tasks RCU enabled.
Sep 10 23:25:33.839641 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 10 23:25:33.839648 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 10 23:25:33.839655 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 10 23:25:33.839662 kernel: GICv3: 256 SPIs implemented
Sep 10 23:25:33.839669 kernel: GICv3: 0 Extended SPIs implemented
Sep 10 23:25:33.839675 kernel: Root IRQ handler: gic_handle_irq
Sep 10 23:25:33.839682 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 10 23:25:33.839689 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 10 23:25:33.839696 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 10 23:25:33.839703 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 10 23:25:33.839710 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Sep 10 23:25:33.839718 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 10 23:25:33.839725 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 10 23:25:33.839732 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 10 23:25:33.839739 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:25:33.839746 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 10 23:25:33.839753 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 10 23:25:33.839760 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 10 23:25:33.839767 kernel: arm-pv: using stolen time PV
Sep 10 23:25:33.839774 kernel: Console: colour dummy device 80x25
Sep 10 23:25:33.839782 kernel: ACPI: Core revision 20230628
Sep 10 23:25:33.839789 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 10 23:25:33.839798 kernel: pid_max: default: 32768 minimum: 301
Sep 10 23:25:33.839805 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 10 23:25:33.839812 kernel: landlock: Up and running.
Sep 10 23:25:33.839819 kernel: SELinux: Initializing.
Sep 10 23:25:33.839826 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 23:25:33.839834 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 23:25:33.839841 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 23:25:33.839848 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 23:25:33.839855 kernel: rcu: Hierarchical SRCU implementation.
Sep 10 23:25:33.839864 kernel: rcu: Max phase no-delay instances is 400.
Sep 10 23:25:33.839871 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 10 23:25:33.839878 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 10 23:25:33.839885 kernel: Remapping and enabling EFI services.
Sep 10 23:25:33.839892 kernel: smp: Bringing up secondary CPUs ...
Sep 10 23:25:33.839900 kernel: Detected PIPT I-cache on CPU1
Sep 10 23:25:33.839907 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 10 23:25:33.839914 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 10 23:25:33.839921 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:25:33.839930 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 10 23:25:33.839937 kernel: Detected PIPT I-cache on CPU2
Sep 10 23:25:33.839949 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 10 23:25:33.839958 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 10 23:25:33.839965 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:25:33.839972 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 10 23:25:33.839980 kernel: Detected PIPT I-cache on CPU3
Sep 10 23:25:33.839987 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 10 23:25:33.839995 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 10 23:25:33.840004 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:25:33.840011 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 10 23:25:33.840018 kernel: smp: Brought up 1 node, 4 CPUs
Sep 10 23:25:33.840025 kernel: SMP: Total of 4 processors activated.
Sep 10 23:25:33.840033 kernel: CPU features: detected: 32-bit EL0 Support
Sep 10 23:25:33.840040 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 10 23:25:33.840048 kernel: CPU features: detected: Common not Private translations
Sep 10 23:25:33.840055 kernel: CPU features: detected: CRC32 instructions
Sep 10 23:25:33.840064 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 10 23:25:33.840071 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 10 23:25:33.840079 kernel: CPU features: detected: LSE atomic instructions
Sep 10 23:25:33.840086 kernel: CPU features: detected: Privileged Access Never
Sep 10 23:25:33.840099 kernel: CPU features: detected: RAS Extension Support
Sep 10 23:25:33.840106 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 10 23:25:33.840113 kernel: CPU: All CPU(s) started at EL1
Sep 10 23:25:33.840121 kernel: alternatives: applying system-wide alternatives
Sep 10 23:25:33.840128 kernel: devtmpfs: initialized
Sep 10 23:25:33.840137 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 10 23:25:33.840145 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 10 23:25:33.840153 kernel: pinctrl core: initialized pinctrl subsystem
Sep 10 23:25:33.840160 kernel: SMBIOS 3.0.0 present.
Sep 10 23:25:33.840167 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 10 23:25:33.840175 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 10 23:25:33.840182 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 10 23:25:33.840190 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 10 23:25:33.840198 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 10 23:25:33.840206 kernel: audit: initializing netlink subsys (disabled)
Sep 10 23:25:33.840214 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Sep 10 23:25:33.840221 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 10 23:25:33.840229 kernel: cpuidle: using governor menu
Sep 10 23:25:33.840236 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 10 23:25:33.840243 kernel: ASID allocator initialised with 32768 entries
Sep 10 23:25:33.840251 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 10 23:25:33.840258 kernel: Serial: AMBA PL011 UART driver
Sep 10 23:25:33.840266 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 10 23:25:33.840274 kernel: Modules: 0 pages in range for non-PLT usage
Sep 10 23:25:33.840282 kernel: Modules: 509248 pages in range for PLT usage
Sep 10 23:25:33.840289 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 10 23:25:33.840297 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 10 23:25:33.840304 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 10 23:25:33.840311 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 10 23:25:33.840319 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 10 23:25:33.840326 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 10 23:25:33.840334 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 10 23:25:33.840343 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 10 23:25:33.840350 kernel: ACPI: Added _OSI(Module Device)
Sep 10 23:25:33.840358 kernel: ACPI: Added _OSI(Processor Device)
Sep 10 23:25:33.840365 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 10 23:25:33.840372 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 10 23:25:33.840380 kernel: ACPI: Interpreter enabled
Sep 10 23:25:33.840387 kernel: ACPI: Using GIC for interrupt routing
Sep 10 23:25:33.840394 kernel: ACPI: MCFG table detected, 1 entries
Sep 10 23:25:33.840402 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 10 23:25:33.840410 kernel: printk: console [ttyAMA0] enabled
Sep 10 23:25:33.840419 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 10 23:25:33.840591 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 10 23:25:33.840670 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 10 23:25:33.840738 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 10 23:25:33.840802 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 10 23:25:33.840867 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 10 23:25:33.840877 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 10 23:25:33.840887 kernel: PCI host bridge to bus 0000:00
Sep 10 23:25:33.840959 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 10 23:25:33.841022 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 10 23:25:33.841081 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 10 23:25:33.841139 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 10 23:25:33.841219 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 10 23:25:33.841302 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 10 23:25:33.841373 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 10 23:25:33.841439 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 10 23:25:33.841516 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 10 23:25:33.841609 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 10 23:25:33.841677 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 10 23:25:33.841744 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 10 23:25:33.841811 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 10 23:25:33.841870 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 10 23:25:33.841928 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 10 23:25:33.841938 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 10 23:25:33.841946 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 10 23:25:33.841954 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 10 23:25:33.841961 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 10 23:25:33.841969 kernel: iommu: Default domain type: Translated
Sep 10 23:25:33.841979 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 10 23:25:33.841986 kernel: efivars: Registered efivars operations
Sep 10 23:25:33.841994 kernel: vgaarb: loaded
Sep 10 23:25:33.842001 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 10 23:25:33.842009 kernel: VFS: Disk quotas dquot_6.6.0
Sep 10 23:25:33.842017 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 10 23:25:33.842024 kernel: pnp: PnP ACPI init
Sep 10 23:25:33.842098 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 10 23:25:33.842111 kernel: pnp: PnP ACPI: found 1 devices
Sep 10 23:25:33.842119 kernel: NET: Registered PF_INET protocol family
Sep 10 23:25:33.842127 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 10 23:25:33.842134 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 10 23:25:33.842142 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 10 23:25:33.842149 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 10 23:25:33.842157 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 10 23:25:33.842165 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 10 23:25:33.842172 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 23:25:33.842181 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 23:25:33.842189 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 10 23:25:33.842197 kernel: PCI: CLS 0 bytes, default 64
Sep 10 23:25:33.842205 kernel: kvm [1]: HYP mode not available
Sep 10 23:25:33.842212 kernel: Initialise system trusted keyrings
Sep 10 23:25:33.842220 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 10 23:25:33.842228 kernel: Key type asymmetric registered
Sep 10 23:25:33.842235 kernel: Asymmetric key parser 'x509' registered
Sep 10 23:25:33.842243 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 10 23:25:33.842252 kernel: io scheduler mq-deadline registered
Sep 10 23:25:33.842259 kernel: io scheduler kyber registered
Sep 10 23:25:33.842267 kernel: io scheduler bfq registered
Sep 10 23:25:33.842274 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 10 23:25:33.842282 kernel: ACPI: button: Power Button [PWRB]
Sep 10 23:25:33.842290 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 10 23:25:33.842357 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 10 23:25:33.842367 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 10 23:25:33.842375 kernel: thunder_xcv, ver 1.0
Sep 10 23:25:33.842383 kernel: thunder_bgx, ver 1.0
Sep 10 23:25:33.842392 kernel: nicpf, ver 1.0
Sep 10 23:25:33.842400 kernel: nicvf, ver 1.0
Sep 10 23:25:33.842476 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 10 23:25:33.842571 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-10T23:25:33 UTC (1757546733)
Sep 10 23:25:33.842582 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 10 23:25:33.842590 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 10 23:25:33.842598 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 10 23:25:33.842608 kernel: watchdog: Hard watchdog permanently disabled
Sep 10 23:25:33.842616 kernel: NET: Registered PF_INET6 protocol family
Sep 10 23:25:33.842623 kernel: Segment Routing with IPv6
Sep 10 23:25:33.842631 kernel: In-situ OAM (IOAM) with IPv6
Sep 10 23:25:33.842639 kernel: NET: Registered PF_PACKET protocol family
Sep 10 23:25:33.842646 kernel: Key type dns_resolver registered
Sep 10 23:25:33.842653 kernel: registered taskstats version 1
Sep 10 23:25:33.842661 kernel: Loading compiled-in X.509 certificates
Sep 10 23:25:33.842669 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.105-flatcar: d7b4405ae069a339ad721bbd0dc0977a88602ca7'
Sep 10 23:25:33.842677 kernel: Key type .fscrypt registered
Sep 10 23:25:33.842686 kernel: Key type fscrypt-provisioning registered
Sep 10 23:25:33.842694 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 10 23:25:33.842701 kernel: ima: Allocated hash algorithm: sha1
Sep 10 23:25:33.842709 kernel: ima: No architecture policies found
Sep 10 23:25:33.842716 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 10 23:25:33.842724 kernel: clk: Disabling unused clocks
Sep 10 23:25:33.842731 kernel: Freeing unused kernel memory: 38400K
Sep 10 23:25:33.842739 kernel: Run /init as init process
Sep 10 23:25:33.842748 kernel: with arguments:
Sep 10 23:25:33.842755 kernel: /init
Sep 10 23:25:33.842763 kernel: with environment:
Sep 10 23:25:33.842770 kernel: HOME=/
Sep 10 23:25:33.842778 kernel: TERM=linux
Sep 10 23:25:33.842785 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 10 23:25:33.842793 systemd[1]: Successfully made /usr/ read-only.
Sep 10 23:25:33.842804 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 10 23:25:33.842814 systemd[1]: Detected virtualization kvm.
Sep 10 23:25:33.842821 systemd[1]: Detected architecture arm64.
Sep 10 23:25:33.842829 systemd[1]: Running in initrd.
Sep 10 23:25:33.842836 systemd[1]: No hostname configured, using default hostname.
Sep 10 23:25:33.842844 systemd[1]: Hostname set to .
Sep 10 23:25:33.842852 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 23:25:33.842859 systemd[1]: Queued start job for default target initrd.target.
Sep 10 23:25:33.842867 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 23:25:33.842877 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 23:25:33.842885 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 10 23:25:33.842893 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 23:25:33.842901 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 10 23:25:33.842910 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 10 23:25:33.842919 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 10 23:25:33.842927 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 10 23:25:33.842937 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 23:25:33.842945 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 23:25:33.842952 systemd[1]: Reached target paths.target - Path Units.
Sep 10 23:25:33.842960 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 23:25:33.842968 systemd[1]: Reached target swap.target - Swaps.
Sep 10 23:25:33.842976 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 23:25:33.842983 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 23:25:33.842991 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 23:25:33.842999 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 10 23:25:33.843008 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 10 23:25:33.843016 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:25:33.843025 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:25:33.843033 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:25:33.843041 systemd[1]: Reached target sockets.target - Socket Units.
Sep 10 23:25:33.843048 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 10 23:25:33.843056 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 23:25:33.843064 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 10 23:25:33.843074 systemd[1]: Starting systemd-fsck-usr.service...
Sep 10 23:25:33.843082 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 23:25:33.843090 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 23:25:33.843098 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:25:33.843105 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 10 23:25:33.843113 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 23:25:33.843123 systemd[1]: Finished systemd-fsck-usr.service.
Sep 10 23:25:33.843149 systemd-journald[239]: Collecting audit messages is disabled.
Sep 10 23:25:33.843169 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 23:25:33.843179 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:25:33.843187 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 23:25:33.843196 systemd-journald[239]: Journal started
Sep 10 23:25:33.843214 systemd-journald[239]: Runtime Journal (/run/log/journal/2b3127ad505f47368cc729f77cc8c9bf) is 5.9M, max 47.3M, 41.4M free.
Sep 10 23:25:33.835338 systemd-modules-load[241]: Inserted module 'overlay'
Sep 10 23:25:33.844855 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 23:25:33.847546 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 10 23:25:33.848160 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 23:25:33.850603 kernel: Bridge firewalling registered
Sep 10 23:25:33.850310 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 23:25:33.850484 systemd-modules-load[241]: Inserted module 'br_netfilter'
Sep 10 23:25:33.853788 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 23:25:33.855993 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:25:33.858742 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 23:25:33.864485 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:25:33.870637 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:25:33.872417 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:25:33.881724 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 23:25:33.882781 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 23:25:33.885231 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 10 23:25:33.899828 dracut-cmdline[283]: dracut-dracut-053
Sep 10 23:25:33.902369 dracut-cmdline[283]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=812c036cb680f79e5ca620d89a6ff10a489f95d8e789d774dfb3714b0f5aa257
Sep 10 23:25:33.910451 systemd-resolved[279]: Positive Trust Anchors:
Sep 10 23:25:33.910470 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 23:25:33.910509 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 23:25:33.915188 systemd-resolved[279]: Defaulting to hostname 'linux'.
Sep 10 23:25:33.916200 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 23:25:33.918777 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 23:25:33.977538 kernel: SCSI subsystem initialized
Sep 10 23:25:33.980551 kernel: Loading iSCSI transport class v2.0-870.
Sep 10 23:25:33.988575 kernel: iscsi: registered transport (tcp)
Sep 10 23:25:34.001923 kernel: iscsi: registered transport (qla4xxx)
Sep 10 23:25:34.001998 kernel: QLogic iSCSI HBA Driver
Sep 10 23:25:34.047348 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 10 23:25:34.062739 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 10 23:25:34.079153 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 10 23:25:34.079228 kernel: device-mapper: uevent: version 1.0.3
Sep 10 23:25:34.079245 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 10 23:25:34.127557 kernel: raid6: neonx8 gen() 15733 MB/s
Sep 10 23:25:34.142551 kernel: raid6: neonx4 gen() 14695 MB/s
Sep 10 23:25:34.159537 kernel: raid6: neonx2 gen() 13155 MB/s
Sep 10 23:25:34.176537 kernel: raid6: neonx1 gen() 10470 MB/s
Sep 10 23:25:34.193544 kernel: raid6: int64x8 gen() 6755 MB/s
Sep 10 23:25:34.210542 kernel: raid6: int64x4 gen() 7321 MB/s
Sep 10 23:25:34.227541 kernel: raid6: int64x2 gen() 6079 MB/s
Sep 10 23:25:34.244543 kernel: raid6: int64x1 gen() 5040 MB/s
Sep 10 23:25:34.244574 kernel: raid6: using algorithm neonx8 gen() 15733 MB/s
Sep 10 23:25:34.261541 kernel: raid6: .... xor() 11776 MB/s, rmw enabled
Sep 10 23:25:34.261590 kernel: raid6: using neon recovery algorithm
Sep 10 23:25:34.266537 kernel: xor: measuring software checksum speed
Sep 10 23:25:34.266559 kernel: 8regs : 21613 MB/sec
Sep 10 23:25:34.267553 kernel: 32regs : 21699 MB/sec
Sep 10 23:25:34.267566 kernel: arm64_neon : 27898 MB/sec
Sep 10 23:25:34.267585 kernel: xor: using function: arm64_neon (27898 MB/sec)
Sep 10 23:25:34.317573 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 10 23:25:34.327746 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 23:25:34.339797 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 23:25:34.352846 systemd-udevd[464]: Using default interface naming scheme 'v255'.
Sep 10 23:25:34.356519 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 23:25:34.364701 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 10 23:25:34.376124 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
Sep 10 23:25:34.403419 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 23:25:34.413749 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 23:25:34.455556 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 23:25:34.466855 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 10 23:25:34.477380 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 10 23:25:34.479259 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 23:25:34.480647 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 23:25:34.482638 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 23:25:34.493784 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 10 23:25:34.504087 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 23:25:34.516613 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 10 23:25:34.516782 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 10 23:25:34.524920 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 10 23:25:34.524974 kernel: GPT:9289727 != 19775487
Sep 10 23:25:34.524991 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 10 23:25:34.526911 kernel: GPT:9289727 != 19775487
Sep 10 23:25:34.526965 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 10 23:25:34.526978 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 23:25:34.530855 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 10 23:25:34.530986 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 23:25:34.535460 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 23:25:34.536451 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 23:25:34.536684 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:25:34.539144 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:25:34.547787 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:25:34.563548 kernel: BTRFS: device fsid fd58f7db-5430-4b8c-ae33-665ce7287c74 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (519)
Sep 10 23:25:34.565578 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (516)
Sep 10 23:25:34.569320 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 10 23:25:34.570637 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:25:34.588224 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 10 23:25:34.594553 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 10 23:25:34.595558 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 10 23:25:34.603962 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 23:25:34.619737 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 10 23:25:34.621895 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 23:25:34.627432 disk-uuid[552]: Primary Header is updated.
Sep 10 23:25:34.627432 disk-uuid[552]: Secondary Entries is updated.
Sep 10 23:25:34.627432 disk-uuid[552]: Secondary Header is updated.
Sep 10 23:25:34.633556 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 23:25:34.644323 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 23:25:35.642549 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 23:25:35.643265 disk-uuid[553]: The operation has completed successfully.
Sep 10 23:25:35.665313 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 10 23:25:35.665413 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 10 23:25:35.707756 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 10 23:25:35.710796 sh[574]: Success
Sep 10 23:25:35.721582 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 10 23:25:35.762441 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 10 23:25:35.764240 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 10 23:25:35.766586 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 10 23:25:35.777188 kernel: BTRFS info (device dm-0): first mount of filesystem fd58f7db-5430-4b8c-ae33-665ce7287c74
Sep 10 23:25:35.777242 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:25:35.777253 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 10 23:25:35.777263 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 10 23:25:35.777786 kernel: BTRFS info (device dm-0): using free space tree
Sep 10 23:25:35.782175 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 10 23:25:35.783414 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 10 23:25:35.794761 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 10 23:25:35.796212 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 10 23:25:35.810713 kernel: BTRFS info (device vda6): first mount of filesystem 42af1272-c999-4ec5-9130-292f2318261d
Sep 10 23:25:35.810770 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:25:35.810780 kernel: BTRFS info (device vda6): using free space tree
Sep 10 23:25:35.813548 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 23:25:35.817583 kernel: BTRFS info (device vda6): last unmount of filesystem 42af1272-c999-4ec5-9130-292f2318261d
Sep 10 23:25:35.820679 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 10 23:25:35.829764 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 10 23:25:35.887578 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 23:25:35.898773 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 23:25:35.902649 ignition[663]: Ignition 2.20.0
Sep 10 23:25:35.902659 ignition[663]: Stage: fetch-offline
Sep 10 23:25:35.902697 ignition[663]: no configs at "/usr/lib/ignition/base.d"
Sep 10 23:25:35.902705 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:25:35.902858 ignition[663]: parsed url from cmdline: ""
Sep 10 23:25:35.902861 ignition[663]: no config URL provided
Sep 10 23:25:35.902865 ignition[663]: reading system config file "/usr/lib/ignition/user.ign"
Sep 10 23:25:35.902872 ignition[663]: no config at "/usr/lib/ignition/user.ign"
Sep 10 23:25:35.902896 ignition[663]: op(1): [started] loading QEMU firmware config module
Sep 10 23:25:35.902900 ignition[663]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 10 23:25:35.909037 ignition[663]: op(1): [finished] loading QEMU firmware config module
Sep 10 23:25:35.909060 ignition[663]: QEMU firmware config was not found. Ignoring...
Sep 10 23:25:35.924294 systemd-networkd[762]: lo: Link UP
Sep 10 23:25:35.924307 systemd-networkd[762]: lo: Gained carrier
Sep 10 23:25:35.925128 systemd-networkd[762]: Enumeration completed
Sep 10 23:25:35.925396 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 23:25:35.925700 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 23:25:35.925704 systemd-networkd[762]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 23:25:35.927862 systemd-networkd[762]: eth0: Link UP
Sep 10 23:25:35.927865 systemd-networkd[762]: eth0: Gained carrier
Sep 10 23:25:35.927875 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 23:25:35.928549 systemd[1]: Reached target network.target - Network.
Sep 10 23:25:35.938575 systemd-networkd[762]: eth0: DHCPv4 address 10.0.0.56/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 10 23:25:35.959381 ignition[663]: parsing config with SHA512: c65dc05e5af77528ec4b728ca049e9aa491e100aa296331bef6fc1ec273160f2c619872a4c8ad3cd8fdda4535b612fd3be386accdbb4dcca713d6cdd42616348
Sep 10 23:25:35.965675 unknown[663]: fetched base config from "system"
Sep 10 23:25:35.965684 unknown[663]: fetched user config from "qemu"
Sep 10 23:25:35.966128 ignition[663]: fetch-offline: fetch-offline passed
Sep 10 23:25:35.966201 ignition[663]: Ignition finished successfully
Sep 10 23:25:35.968605 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 23:25:35.969681 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 10 23:25:35.976737 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 10 23:25:35.989670 ignition[769]: Ignition 2.20.0
Sep 10 23:25:35.989681 ignition[769]: Stage: kargs
Sep 10 23:25:35.989847 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Sep 10 23:25:35.989857 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:25:35.990769 ignition[769]: kargs: kargs passed
Sep 10 23:25:35.990820 ignition[769]: Ignition finished successfully
Sep 10 23:25:35.992939 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 10 23:25:36.000716 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 10 23:25:36.011293 ignition[778]: Ignition 2.20.0
Sep 10 23:25:36.011304 ignition[778]: Stage: disks
Sep 10 23:25:36.011473 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Sep 10 23:25:36.011483 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:25:36.012444 ignition[778]: disks: disks passed
Sep 10 23:25:36.013858 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 10 23:25:36.012505 ignition[778]: Ignition finished successfully
Sep 10 23:25:36.014895 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 10 23:25:36.016043 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 10 23:25:36.017499 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 23:25:36.018737 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 10 23:25:36.020205 systemd[1]: Reached target basic.target - Basic System.
Sep 10 23:25:36.032735 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 10 23:25:36.045497 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 10 23:25:36.047602 systemd-resolved[279]: Detected conflict on linux IN A 10.0.0.56
Sep 10 23:25:36.047617 systemd-resolved[279]: Hostname conflict, changing published hostname from 'linux' to 'linux11'.
Sep 10 23:25:36.050897 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 10 23:25:36.061706 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 10 23:25:36.105550 kernel: EXT4-fs (vda9): mounted filesystem a23ff18d-cc1e-4b34-900c-13c0a3e995c4 r/w with ordered data mode. Quota mode: none.
Sep 10 23:25:36.105849 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 10 23:25:36.106964 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 10 23:25:36.121675 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 23:25:36.123555 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 10 23:25:36.124559 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 10 23:25:36.124617 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 10 23:25:36.124641 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 23:25:36.132619 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (798)
Sep 10 23:25:36.131093 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 10 23:25:36.132754 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 10 23:25:36.136956 kernel: BTRFS info (device vda6): first mount of filesystem 42af1272-c999-4ec5-9130-292f2318261d
Sep 10 23:25:36.136983 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:25:36.137001 kernel: BTRFS info (device vda6): using free space tree
Sep 10 23:25:36.140542 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 23:25:36.141348 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 23:25:36.171177 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Sep 10 23:25:36.175440 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Sep 10 23:25:36.178760 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Sep 10 23:25:36.182664 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 10 23:25:36.250605 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 10 23:25:36.261626 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 10 23:25:36.263097 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 10 23:25:36.268573 kernel: BTRFS info (device vda6): last unmount of filesystem 42af1272-c999-4ec5-9130-292f2318261d
Sep 10 23:25:36.283431 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 10 23:25:36.288401 ignition[911]: INFO : Ignition 2.20.0
Sep 10 23:25:36.288401 ignition[911]: INFO : Stage: mount
Sep 10 23:25:36.290446 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 23:25:36.290446 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:25:36.290446 ignition[911]: INFO : mount: mount passed
Sep 10 23:25:36.290446 ignition[911]: INFO : Ignition finished successfully
Sep 10 23:25:36.291503 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 10 23:25:36.298655 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 10 23:25:36.902773 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 10 23:25:36.913731 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 23:25:36.920220 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (925)
Sep 10 23:25:36.920265 kernel: BTRFS info (device vda6): first mount of filesystem 42af1272-c999-4ec5-9130-292f2318261d
Sep 10 23:25:36.920276 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:25:36.920993 kernel: BTRFS info (device vda6): using free space tree
Sep 10 23:25:36.923536 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 23:25:36.924658 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 23:25:36.939649 ignition[942]: INFO : Ignition 2.20.0
Sep 10 23:25:36.939649 ignition[942]: INFO : Stage: files
Sep 10 23:25:36.940961 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 23:25:36.940961 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:25:36.940961 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Sep 10 23:25:36.943747 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 10 23:25:36.943747 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 10 23:25:36.946448 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 10 23:25:36.947627 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 10 23:25:36.947627 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 10 23:25:36.946961 unknown[942]: wrote ssh authorized keys file for user: core
Sep 10 23:25:36.950479 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 10 23:25:36.950479 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 10 23:25:37.082914 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 10 23:25:37.348247 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 10 23:25:37.348247 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 10 23:25:37.351316 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 10 23:25:37.615348 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 10 23:25:37.812560 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 10 23:25:37.812560 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 10 23:25:37.815624 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 10 23:25:37.815624 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 23:25:37.815624 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 23:25:37.815624 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 23:25:37.815624 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 23:25:37.815624 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 23:25:37.815624 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 23:25:37.815624 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 23:25:37.815624 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 23:25:37.815624 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 10 23:25:37.815624 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 10 23:25:37.815624 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 10 23:25:37.815624 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 10 23:25:37.870679 systemd-networkd[762]: eth0: Gained IPv6LL
Sep 10 23:25:38.397760 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 10 23:25:38.878053 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 10 23:25:38.878053 ignition[942]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 10 23:25:38.881534 ignition[942]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 23:25:38.881534 ignition[942]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 23:25:38.881534 ignition[942]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 10 23:25:38.881534 ignition[942]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 10 23:25:38.881534 ignition[942]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 23:25:38.881534 ignition[942]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 23:25:38.881534 ignition[942]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 10 23:25:38.881534 ignition[942]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 10 23:25:38.896934 ignition[942]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 23:25:38.900359 ignition[942]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 23:25:38.901722 ignition[942]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 10 23:25:38.901722 ignition[942]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 10 23:25:38.901722 ignition[942]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 10 23:25:38.901722 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 23:25:38.901722 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 23:25:38.901722 ignition[942]: INFO : files: files passed
Sep 10 23:25:38.901722 ignition[942]: INFO : Ignition finished successfully
Sep 10 23:25:38.902500 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 10 23:25:38.914732 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 10 23:25:38.917062 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 10 23:25:38.918142 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 10 23:25:38.918231 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 10 23:25:38.925757 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 10 23:25:38.928795 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 23:25:38.928795 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 23:25:38.931794 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 23:25:38.931440 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 23:25:38.932863 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 10 23:25:38.942763 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 10 23:25:38.965938 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 10 23:25:38.966065 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 10 23:25:38.967858 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 10 23:25:38.969342 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 10 23:25:38.970897 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 10 23:25:38.971775 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 10 23:25:38.990451 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 23:25:39.001730 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 10 23:25:39.009834 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 10 23:25:39.010873 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 23:25:39.012545 systemd[1]: Stopped target timers.target - Timer Units.
Sep 10 23:25:39.014010 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 10 23:25:39.014166 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 23:25:39.016238 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 10 23:25:39.018204 systemd[1]: Stopped target basic.target - Basic System.
Sep 10 23:25:39.019903 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 10 23:25:39.021456 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 23:25:39.023103 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 10 23:25:39.024618 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 10 23:25:39.026148 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 23:25:39.027640 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 10 23:25:39.029164 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 10 23:25:39.030472 systemd[1]: Stopped target swap.target - Swaps.
Sep 10 23:25:39.031757 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 10 23:25:39.031890 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 23:25:39.033941 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 10 23:25:39.035597 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 23:25:39.037169 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 10 23:25:39.040602 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 23:25:39.041632 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 10 23:25:39.041763 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 10 23:25:39.044091 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 10 23:25:39.044243 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 23:25:39.045742 systemd[1]: Stopped target paths.target - Path Units.
Sep 10 23:25:39.046994 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 10 23:25:39.051594 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 23:25:39.052686 systemd[1]: Stopped target slices.target - Slice Units.
Sep 10 23:25:39.054415 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 10 23:25:39.055996 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 10 23:25:39.056083 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 23:25:39.057397 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 10 23:25:39.057472 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 23:25:39.058819 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 10 23:25:39.058935 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 23:25:39.060433 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 10 23:25:39.060558 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 10 23:25:39.079739 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 10 23:25:39.080582 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 10 23:25:39.080724 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 23:25:39.085769 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 10 23:25:39.086420 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 10 23:25:39.086573 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 23:25:39.088001 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 10 23:25:39.088108 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 23:25:39.094376 ignition[998]: INFO : Ignition 2.20.0
Sep 10 23:25:39.094376 ignition[998]: INFO : Stage: umount
Sep 10 23:25:39.100039 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 23:25:39.100039 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:25:39.100039 ignition[998]: INFO : umount: umount passed
Sep 10 23:25:39.100039 ignition[998]: INFO : Ignition finished successfully
Sep 10 23:25:39.099264 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 10 23:25:39.100214 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 10 23:25:39.104165 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 10 23:25:39.105569 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 10 23:25:39.109488 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 10 23:25:39.110038 systemd[1]: Stopped target network.target - Network.
Sep 10 23:25:39.112241 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 10 23:25:39.112314 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 10 23:25:39.113814 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 10 23:25:39.113861 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 10 23:25:39.115366 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 10 23:25:39.115411 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 10 23:25:39.116978 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 10 23:25:39.117063 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 10 23:25:39.118750 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 10 23:25:39.119729 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 10 23:25:39.127804 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 10 23:25:39.128883 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 10 23:25:39.135068 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 10 23:25:39.135399 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 10 23:25:39.135440 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:25:39.138762 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 10 23:25:39.138994 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 10 23:25:39.139107 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 10 23:25:39.142102 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 10 23:25:39.142775 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 10 23:25:39.142827 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:25:39.153653 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 10 23:25:39.154338 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 10 23:25:39.154407 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 23:25:39.156090 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 10 23:25:39.156136 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 10 23:25:39.158491 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 10 23:25:39.158549 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 10 23:25:39.160255 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 23:25:39.163215 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 10 23:25:39.170298 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 10 23:25:39.170429 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 10 23:25:39.181375 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 10 23:25:39.181580 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 23:25:39.183902 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 10 23:25:39.183945 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 10 23:25:39.185372 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 10 23:25:39.185405 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 23:25:39.187019 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 10 23:25:39.187122 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 10 23:25:39.189467 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 10 23:25:39.189540 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 10 23:25:39.192146 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 10 23:25:39.192200 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 10 23:25:39.210039 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Sep 10 23:25:39.210906 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 10 23:25:39.210971 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 23:25:39.215965 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 23:25:39.216029 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 23:25:39.218757 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 10 23:25:39.218846 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 10 23:25:39.220030 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 10 23:25:39.220098 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 10 23:25:39.222291 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 10 23:25:39.223493 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 10 23:25:39.223646 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 10 23:25:39.226609 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 10 23:25:39.236088 systemd[1]: Switching root. Sep 10 23:25:39.271494 systemd-journald[239]: Journal stopped Sep 10 23:25:40.005175 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). 
Sep 10 23:25:40.005231 kernel: SELinux: policy capability network_peer_controls=1 Sep 10 23:25:40.005243 kernel: SELinux: policy capability open_perms=1 Sep 10 23:25:40.005252 kernel: SELinux: policy capability extended_socket_class=1 Sep 10 23:25:40.005261 kernel: SELinux: policy capability always_check_network=0 Sep 10 23:25:40.005271 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 10 23:25:40.005280 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 10 23:25:40.005289 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 10 23:25:40.005298 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 10 23:25:40.005308 kernel: audit: type=1403 audit(1757546739.429:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 10 23:25:40.005319 systemd[1]: Successfully loaded SELinux policy in 30.777ms. Sep 10 23:25:40.005339 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.604ms. Sep 10 23:25:40.005350 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 10 23:25:40.005361 systemd[1]: Detected virtualization kvm. Sep 10 23:25:40.005371 systemd[1]: Detected architecture arm64. Sep 10 23:25:40.005381 systemd[1]: Detected first boot. Sep 10 23:25:40.005391 systemd[1]: Initializing machine ID from VM UUID. Sep 10 23:25:40.005401 zram_generator::config[1044]: No configuration found. Sep 10 23:25:40.005413 kernel: NET: Registered PF_VSOCK protocol family Sep 10 23:25:40.005423 systemd[1]: Populated /etc with preset unit settings. Sep 10 23:25:40.005438 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 10 23:25:40.005448 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Sep 10 23:25:40.005458 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 10 23:25:40.005468 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 10 23:25:40.005487 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 10 23:25:40.005499 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 10 23:25:40.005510 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 10 23:25:40.005520 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 10 23:25:40.005543 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 10 23:25:40.005554 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 10 23:25:40.005564 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 10 23:25:40.005574 systemd[1]: Created slice user.slice - User and Session Slice. Sep 10 23:25:40.005584 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 23:25:40.005595 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 23:25:40.005605 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 10 23:25:40.005617 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 10 23:25:40.005628 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 10 23:25:40.005638 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 10 23:25:40.005649 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 10 23:25:40.005658 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Sep 10 23:25:40.005669 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 10 23:25:40.005681 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 10 23:25:40.005693 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 10 23:25:40.005703 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 10 23:25:40.005713 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 23:25:40.005723 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 10 23:25:40.005733 systemd[1]: Reached target slices.target - Slice Units. Sep 10 23:25:40.005743 systemd[1]: Reached target swap.target - Swaps. Sep 10 23:25:40.005753 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 10 23:25:40.005763 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 10 23:25:40.005773 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 10 23:25:40.005784 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 10 23:25:40.005794 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 10 23:25:40.005805 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 23:25:40.005815 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 10 23:25:40.005825 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 10 23:25:40.005834 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 10 23:25:40.005844 systemd[1]: Mounting media.mount - External Media Directory... Sep 10 23:25:40.005854 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 10 23:25:40.005864 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Sep 10 23:25:40.005876 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 10 23:25:40.005886 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 10 23:25:40.005897 systemd[1]: Reached target machines.target - Containers. Sep 10 23:25:40.005907 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 10 23:25:40.005917 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 23:25:40.005927 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 10 23:25:40.005938 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 10 23:25:40.005948 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 23:25:40.005957 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 10 23:25:40.005969 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 23:25:40.005979 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 10 23:25:40.005989 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 23:25:40.006000 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 10 23:25:40.006010 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 10 23:25:40.006020 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 10 23:25:40.006030 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 10 23:25:40.006041 systemd[1]: Stopped systemd-fsck-usr.service. 
Sep 10 23:25:40.006052 kernel: loop: module loaded Sep 10 23:25:40.006062 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 10 23:25:40.006072 kernel: fuse: init (API version 7.39) Sep 10 23:25:40.006082 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 10 23:25:40.006091 kernel: ACPI: bus type drm_connector registered Sep 10 23:25:40.006101 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 10 23:25:40.006111 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 10 23:25:40.006121 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 10 23:25:40.006131 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 10 23:25:40.006143 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 10 23:25:40.006153 systemd[1]: verity-setup.service: Deactivated successfully. Sep 10 23:25:40.006163 systemd[1]: Stopped verity-setup.service. Sep 10 23:25:40.006189 systemd-journald[1116]: Collecting audit messages is disabled. Sep 10 23:25:40.006211 systemd-journald[1116]: Journal started Sep 10 23:25:40.006231 systemd-journald[1116]: Runtime Journal (/run/log/journal/2b3127ad505f47368cc729f77cc8c9bf) is 5.9M, max 47.3M, 41.4M free. Sep 10 23:25:39.822443 systemd[1]: Queued start job for default target multi-user.target. Sep 10 23:25:39.835457 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 10 23:25:39.835857 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 10 23:25:40.010559 systemd[1]: Started systemd-journald.service - Journal Service. Sep 10 23:25:40.010195 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Sep 10 23:25:40.011291 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 10 23:25:40.012269 systemd[1]: Mounted media.mount - External Media Directory. Sep 10 23:25:40.013130 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 10 23:25:40.014119 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 10 23:25:40.015115 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 10 23:25:40.017563 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 10 23:25:40.018665 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 23:25:40.019784 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 10 23:25:40.019937 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 10 23:25:40.021136 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 23:25:40.021293 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 23:25:40.022436 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 23:25:40.022618 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 10 23:25:40.023680 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 23:25:40.023838 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 23:25:40.025151 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 10 23:25:40.025303 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 10 23:25:40.026411 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 23:25:40.026588 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 23:25:40.027685 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 10 23:25:40.028783 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Sep 10 23:25:40.030116 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 10 23:25:40.031374 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 10 23:25:40.043376 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 10 23:25:40.053713 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 10 23:25:40.055461 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 10 23:25:40.056370 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 10 23:25:40.056412 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 10 23:25:40.058112 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 10 23:25:40.059990 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 10 23:25:40.061768 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 10 23:25:40.062614 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 23:25:40.063935 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 10 23:25:40.065517 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 10 23:25:40.066443 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 23:25:40.070758 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 10 23:25:40.072285 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Sep 10 23:25:40.072687 systemd-journald[1116]: Time spent on flushing to /var/log/journal/2b3127ad505f47368cc729f77cc8c9bf is 23.619ms for 869 entries. Sep 10 23:25:40.072687 systemd-journald[1116]: System Journal (/var/log/journal/2b3127ad505f47368cc729f77cc8c9bf) is 8M, max 195.6M, 187.6M free. Sep 10 23:25:40.107671 systemd-journald[1116]: Received client request to flush runtime journal. Sep 10 23:25:40.107730 kernel: loop0: detected capacity change from 0 to 207008 Sep 10 23:25:40.073648 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 10 23:25:40.075379 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 10 23:25:40.079449 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 10 23:25:40.084558 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 23:25:40.085855 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 10 23:25:40.086814 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 10 23:25:40.088137 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 10 23:25:40.091427 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 10 23:25:40.094006 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 10 23:25:40.100074 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 10 23:25:40.102586 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 10 23:25:40.113256 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 10 23:25:40.120797 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 10 23:25:40.120794 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 10 23:25:40.126355 udevadm[1171]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 10 23:25:40.128360 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 10 23:25:40.137051 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 10 23:25:40.144708 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 10 23:25:40.158591 kernel: loop1: detected capacity change from 0 to 123192 Sep 10 23:25:40.163056 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Sep 10 23:25:40.163075 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Sep 10 23:25:40.169572 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 23:25:40.197567 kernel: loop2: detected capacity change from 0 to 113512 Sep 10 23:25:40.234562 kernel: loop3: detected capacity change from 0 to 207008 Sep 10 23:25:40.240588 kernel: loop4: detected capacity change from 0 to 123192 Sep 10 23:25:40.245564 kernel: loop5: detected capacity change from 0 to 113512 Sep 10 23:25:40.248141 (sd-merge)[1188]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 10 23:25:40.248845 (sd-merge)[1188]: Merged extensions into '/usr'. Sep 10 23:25:40.252123 systemd[1]: Reload requested from client PID 1162 ('systemd-sysext') (unit systemd-sysext.service)... Sep 10 23:25:40.252143 systemd[1]: Reloading... Sep 10 23:25:40.308972 zram_generator::config[1217]: No configuration found. Sep 10 23:25:40.327211 ldconfig[1157]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 10 23:25:40.410904 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 10 23:25:40.461096 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 10 23:25:40.461226 systemd[1]: Reloading finished in 208 ms. Sep 10 23:25:40.477262 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 10 23:25:40.478601 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 10 23:25:40.493701 systemd[1]: Starting ensure-sysext.service... Sep 10 23:25:40.495282 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 10 23:25:40.503560 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 10 23:25:40.507861 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 23:25:40.509001 systemd[1]: Reload requested from client PID 1251 ('systemctl') (unit ensure-sysext.service)... Sep 10 23:25:40.509017 systemd[1]: Reloading... Sep 10 23:25:40.510855 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 10 23:25:40.511310 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 10 23:25:40.512366 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 10 23:25:40.512706 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Sep 10 23:25:40.512824 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Sep 10 23:25:40.515289 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot. Sep 10 23:25:40.515401 systemd-tmpfiles[1252]: Skipping /boot Sep 10 23:25:40.524034 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot. Sep 10 23:25:40.524122 systemd-tmpfiles[1252]: Skipping /boot Sep 10 23:25:40.531416 systemd-udevd[1255]: Using default interface naming scheme 'v255'. 
Sep 10 23:25:40.555540 zram_generator::config[1282]: No configuration found. Sep 10 23:25:40.607559 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1302) Sep 10 23:25:40.676327 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 23:25:40.740078 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 10 23:25:40.740421 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 10 23:25:40.741791 systemd[1]: Reloading finished in 232 ms. Sep 10 23:25:40.753021 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 23:25:40.772633 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 23:25:40.791628 systemd[1]: Finished ensure-sysext.service. Sep 10 23:25:40.792707 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 10 23:25:40.819697 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 10 23:25:40.821947 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 10 23:25:40.823018 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 23:25:40.824071 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 10 23:25:40.828462 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 23:25:40.832644 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 10 23:25:40.835856 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 23:25:40.838353 lvm[1349]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 10 23:25:40.838678 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 23:25:40.840760 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 23:25:40.841755 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 10 23:25:40.842952 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 10 23:25:40.845721 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 10 23:25:40.849709 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 10 23:25:40.853348 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 10 23:25:40.859688 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 10 23:25:40.862777 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 10 23:25:40.867670 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 23:25:40.873573 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 10 23:25:40.876406 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 23:25:40.876951 augenrules[1381]: No rules Sep 10 23:25:40.877575 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 23:25:40.879117 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 23:25:40.879301 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 10 23:25:40.880685 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 23:25:40.880833 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 23:25:40.882191 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 23:25:40.882335 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 23:25:40.884158 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 23:25:40.884325 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 23:25:40.887558 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 10 23:25:40.889033 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 10 23:25:40.890979 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 10 23:25:40.900904 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 10 23:25:40.914682 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 10 23:25:40.915501 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 23:25:40.915594 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 10 23:25:40.916698 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 10 23:25:40.919020 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 10 23:25:40.920852 lvm[1398]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 10 23:25:40.919864 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 23:25:40.920410 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Sep 10 23:25:40.922018 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 23:25:40.928348 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 10 23:25:40.950539 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 10 23:25:40.959901 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 10 23:25:41.002175 systemd-networkd[1368]: lo: Link UP Sep 10 23:25:41.002186 systemd-networkd[1368]: lo: Gained carrier Sep 10 23:25:41.002650 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 10 23:25:41.003785 systemd[1]: Reached target time-set.target - System Time Set. Sep 10 23:25:41.005604 systemd-networkd[1368]: Enumeration completed Sep 10 23:25:41.005632 systemd-resolved[1371]: Positive Trust Anchors: Sep 10 23:25:41.005642 systemd-resolved[1371]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 10 23:25:41.005674 systemd-resolved[1371]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 10 23:25:41.005687 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 10 23:25:41.007131 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 23:25:41.007138 systemd-networkd[1368]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 10 23:25:41.007567 systemd-networkd[1368]: eth0: Link UP Sep 10 23:25:41.007575 systemd-networkd[1368]: eth0: Gained carrier Sep 10 23:25:41.007588 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 23:25:41.011946 systemd-resolved[1371]: Defaulting to hostname 'linux'. Sep 10 23:25:41.017711 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 10 23:25:41.019751 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 10 23:25:41.020748 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 10 23:25:41.021796 systemd[1]: Reached target network.target - Network. Sep 10 23:25:41.022549 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 10 23:25:41.023464 systemd[1]: Reached target sysinit.target - System Initialization. Sep 10 23:25:41.024555 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 10 23:25:41.025602 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 10 23:25:41.026653 systemd-networkd[1368]: eth0: DHCPv4 address 10.0.0.56/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 23:25:41.027407 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 10 23:25:41.027722 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection. Sep 10 23:25:41.028255 systemd-timesyncd[1374]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 10 23:25:41.028297 systemd-timesyncd[1374]: Initial clock synchronization to Wed 2025-09-10 23:25:40.814089 UTC. Sep 10 23:25:41.028642 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Sep 10 23:25:41.029634 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 10 23:25:41.030598 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 10 23:25:41.030625 systemd[1]: Reached target paths.target - Path Units. Sep 10 23:25:41.031309 systemd[1]: Reached target timers.target - Timer Units. Sep 10 23:25:41.032769 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 10 23:25:41.034979 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 10 23:25:41.038038 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 10 23:25:41.039289 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 10 23:25:41.040393 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 10 23:25:41.043492 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 10 23:25:41.044810 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 10 23:25:41.047592 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 10 23:25:41.048782 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 10 23:25:41.050254 systemd[1]: Reached target sockets.target - Socket Units. Sep 10 23:25:41.051157 systemd[1]: Reached target basic.target - Basic System. Sep 10 23:25:41.052011 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 10 23:25:41.052043 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 10 23:25:41.053222 systemd[1]: Starting containerd.service - containerd container runtime... 
Sep 10 23:25:41.055161 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 10 23:25:41.056892 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 10 23:25:41.058850 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 10 23:25:41.059792 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 10 23:25:41.062757 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 10 23:25:41.063957 jq[1422]: false Sep 10 23:25:41.065612 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 10 23:25:41.067856 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 10 23:25:41.071950 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 10 23:25:41.076589 extend-filesystems[1423]: Found loop3 Sep 10 23:25:41.076589 extend-filesystems[1423]: Found loop4 Sep 10 23:25:41.076589 extend-filesystems[1423]: Found loop5 Sep 10 23:25:41.076589 extend-filesystems[1423]: Found vda Sep 10 23:25:41.076589 extend-filesystems[1423]: Found vda1 Sep 10 23:25:41.076589 extend-filesystems[1423]: Found vda2 Sep 10 23:25:41.076589 extend-filesystems[1423]: Found vda3 Sep 10 23:25:41.076589 extend-filesystems[1423]: Found usr Sep 10 23:25:41.076589 extend-filesystems[1423]: Found vda4 Sep 10 23:25:41.076589 extend-filesystems[1423]: Found vda6 Sep 10 23:25:41.076589 extend-filesystems[1423]: Found vda7 Sep 10 23:25:41.076589 extend-filesystems[1423]: Found vda9 Sep 10 23:25:41.076589 extend-filesystems[1423]: Checking size of /dev/vda9 Sep 10 23:25:41.078176 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 10 23:25:41.080551 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Sep 10 23:25:41.081000 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 10 23:25:41.093057 dbus-daemon[1421]: [system] SELinux support is enabled Sep 10 23:25:41.082729 systemd[1]: Starting update-engine.service - Update Engine... Sep 10 23:25:41.088672 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 10 23:25:41.092594 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 10 23:25:41.092799 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 10 23:25:41.093080 systemd[1]: motdgen.service: Deactivated successfully. Sep 10 23:25:41.093240 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 10 23:25:41.094403 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 10 23:25:41.097710 extend-filesystems[1423]: Resized partition /dev/vda9 Sep 10 23:25:41.099041 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 10 23:25:41.099532 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 10 23:25:41.103331 jq[1439]: true Sep 10 23:25:41.105594 extend-filesystems[1446]: resize2fs 1.47.1 (20-May-2024) Sep 10 23:25:41.111736 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1301) Sep 10 23:25:41.113367 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 10 23:25:41.113410 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Sep 10 23:25:41.113684 update_engine[1437]: I20250910 23:25:41.113451 1437 main.cc:92] Flatcar Update Engine starting Sep 10 23:25:41.114972 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 10 23:25:41.114992 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 10 23:25:41.117196 update_engine[1437]: I20250910 23:25:41.117121 1437 update_check_scheduler.cc:74] Next update check in 9m9s Sep 10 23:25:41.117242 systemd[1]: Started update-engine.service - Update Engine. Sep 10 23:25:41.121118 (ntainerd)[1447]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 10 23:25:41.133177 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 10 23:25:41.134946 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 10 23:25:41.147538 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 10 23:25:41.149780 tar[1444]: linux-arm64/LICENSE Sep 10 23:25:41.163870 jq[1450]: true Sep 10 23:25:41.165556 extend-filesystems[1446]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 10 23:25:41.165556 extend-filesystems[1446]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 10 23:25:41.165556 extend-filesystems[1446]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 10 23:25:41.171660 extend-filesystems[1423]: Resized filesystem in /dev/vda9 Sep 10 23:25:41.172283 tar[1444]: linux-arm64/helm Sep 10 23:25:41.167039 systemd-logind[1435]: Watching system buttons on /dev/input/event0 (Power Button) Sep 10 23:25:41.168011 systemd-logind[1435]: New seat seat0. Sep 10 23:25:41.171080 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 10 23:25:41.171265 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Sep 10 23:25:41.181877 systemd[1]: Started systemd-logind.service - User Login Management. Sep 10 23:25:41.205563 bash[1485]: Updated "/home/core/.ssh/authorized_keys" Sep 10 23:25:41.207221 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 10 23:25:41.209349 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 10 23:25:41.213061 locksmithd[1457]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 23:25:41.257447 containerd[1447]: time="2025-09-10T23:25:41.257356720Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 10 23:25:41.285584 containerd[1447]: time="2025-09-10T23:25:41.285479920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 10 23:25:41.288546 containerd[1447]: time="2025-09-10T23:25:41.287028920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.105-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 10 23:25:41.288546 containerd[1447]: time="2025-09-10T23:25:41.287060600Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 10 23:25:41.288546 containerd[1447]: time="2025-09-10T23:25:41.287074960Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 10 23:25:41.288546 containerd[1447]: time="2025-09-10T23:25:41.287228240Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 10 23:25:41.288546 containerd[1447]: time="2025-09-10T23:25:41.287245920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Sep 10 23:25:41.288546 containerd[1447]: time="2025-09-10T23:25:41.287296160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 23:25:41.288546 containerd[1447]: time="2025-09-10T23:25:41.287307120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 10 23:25:41.288546 containerd[1447]: time="2025-09-10T23:25:41.287502840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 23:25:41.288546 containerd[1447]: time="2025-09-10T23:25:41.287518000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 10 23:25:41.288546 containerd[1447]: time="2025-09-10T23:25:41.287547800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 23:25:41.288546 containerd[1447]: time="2025-09-10T23:25:41.287564480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 10 23:25:41.290749 containerd[1447]: time="2025-09-10T23:25:41.287772800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 10 23:25:41.290749 containerd[1447]: time="2025-09-10T23:25:41.287968640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 10 23:25:41.290749 containerd[1447]: time="2025-09-10T23:25:41.288104920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 23:25:41.290749 containerd[1447]: time="2025-09-10T23:25:41.288118680Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 10 23:25:41.290749 containerd[1447]: time="2025-09-10T23:25:41.288191800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 10 23:25:41.290749 containerd[1447]: time="2025-09-10T23:25:41.288232880Z" level=info msg="metadata content store policy set" policy=shared Sep 10 23:25:41.292197 containerd[1447]: time="2025-09-10T23:25:41.292171720Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 10 23:25:41.292235 containerd[1447]: time="2025-09-10T23:25:41.292220600Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 10 23:25:41.292268 containerd[1447]: time="2025-09-10T23:25:41.292240120Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 10 23:25:41.292287 containerd[1447]: time="2025-09-10T23:25:41.292276000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 10 23:25:41.292330 containerd[1447]: time="2025-09-10T23:25:41.292312480Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 10 23:25:41.292470 containerd[1447]: time="2025-09-10T23:25:41.292446800Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 10 23:25:41.292818 containerd[1447]: time="2025-09-10T23:25:41.292799160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Sep 10 23:25:41.292930 containerd[1447]: time="2025-09-10T23:25:41.292908520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 10 23:25:41.292967 containerd[1447]: time="2025-09-10T23:25:41.292932120Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 10 23:25:41.292967 containerd[1447]: time="2025-09-10T23:25:41.292947680Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 10 23:25:41.292967 containerd[1447]: time="2025-09-10T23:25:41.292963080Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 10 23:25:41.293017 containerd[1447]: time="2025-09-10T23:25:41.292975320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 10 23:25:41.293017 containerd[1447]: time="2025-09-10T23:25:41.292987280Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 10 23:25:41.293017 containerd[1447]: time="2025-09-10T23:25:41.293004360Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 10 23:25:41.293070 containerd[1447]: time="2025-09-10T23:25:41.293019200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 10 23:25:41.293070 containerd[1447]: time="2025-09-10T23:25:41.293033360Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 10 23:25:41.293070 containerd[1447]: time="2025-09-10T23:25:41.293045560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Sep 10 23:25:41.293070 containerd[1447]: time="2025-09-10T23:25:41.293057000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 10 23:25:41.293133 containerd[1447]: time="2025-09-10T23:25:41.293076240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 10 23:25:41.293133 containerd[1447]: time="2025-09-10T23:25:41.293090120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 10 23:25:41.293133 containerd[1447]: time="2025-09-10T23:25:41.293101640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 10 23:25:41.293133 containerd[1447]: time="2025-09-10T23:25:41.293122640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 10 23:25:41.293202 containerd[1447]: time="2025-09-10T23:25:41.293135280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 10 23:25:41.293202 containerd[1447]: time="2025-09-10T23:25:41.293148440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 10 23:25:41.293202 containerd[1447]: time="2025-09-10T23:25:41.293160720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 10 23:25:41.293202 containerd[1447]: time="2025-09-10T23:25:41.293173120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 10 23:25:41.293202 containerd[1447]: time="2025-09-10T23:25:41.293185600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 10 23:25:41.293202 containerd[1447]: time="2025-09-10T23:25:41.293199720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Sep 10 23:25:41.293299 containerd[1447]: time="2025-09-10T23:25:41.293211280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 10 23:25:41.293299 containerd[1447]: time="2025-09-10T23:25:41.293222840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 10 23:25:41.293299 containerd[1447]: time="2025-09-10T23:25:41.293236160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 10 23:25:41.293299 containerd[1447]: time="2025-09-10T23:25:41.293251120Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 10 23:25:41.293299 containerd[1447]: time="2025-09-10T23:25:41.293270400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 10 23:25:41.293299 containerd[1447]: time="2025-09-10T23:25:41.293286080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 10 23:25:41.293299 containerd[1447]: time="2025-09-10T23:25:41.293296360Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 10 23:25:41.293600 containerd[1447]: time="2025-09-10T23:25:41.293471280Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 10 23:25:41.293626 containerd[1447]: time="2025-09-10T23:25:41.293608600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 10 23:25:41.293626 containerd[1447]: time="2025-09-10T23:25:41.293620880Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Sep 10 23:25:41.293669 containerd[1447]: time="2025-09-10T23:25:41.293632880Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 10 23:25:41.293669 containerd[1447]: time="2025-09-10T23:25:41.293642680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 10 23:25:41.293669 containerd[1447]: time="2025-09-10T23:25:41.293655200Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 10 23:25:41.293719 containerd[1447]: time="2025-09-10T23:25:41.293672000Z" level=info msg="NRI interface is disabled by configuration." Sep 10 23:25:41.293719 containerd[1447]: time="2025-09-10T23:25:41.293685360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 10 23:25:41.294076 containerd[1447]: time="2025-09-10T23:25:41.294028400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 10 23:25:41.294217 containerd[1447]: time="2025-09-10T23:25:41.294081720Z" level=info msg="Connect containerd service" Sep 10 23:25:41.294217 containerd[1447]: time="2025-09-10T23:25:41.294112840Z" level=info msg="using legacy CRI server" Sep 10 23:25:41.294217 containerd[1447]: time="2025-09-10T23:25:41.294118880Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 10 23:25:41.296530 containerd[1447]: 
time="2025-09-10T23:25:41.294344520Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 10 23:25:41.296530 containerd[1447]: time="2025-09-10T23:25:41.295041880Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 23:25:41.296530 containerd[1447]: time="2025-09-10T23:25:41.295302680Z" level=info msg="Start subscribing containerd event" Sep 10 23:25:41.296530 containerd[1447]: time="2025-09-10T23:25:41.295359360Z" level=info msg="Start recovering state" Sep 10 23:25:41.296530 containerd[1447]: time="2025-09-10T23:25:41.295423360Z" level=info msg="Start event monitor" Sep 10 23:25:41.296530 containerd[1447]: time="2025-09-10T23:25:41.295434960Z" level=info msg="Start snapshots syncer" Sep 10 23:25:41.296530 containerd[1447]: time="2025-09-10T23:25:41.295444520Z" level=info msg="Start cni network conf syncer for default" Sep 10 23:25:41.296530 containerd[1447]: time="2025-09-10T23:25:41.295452640Z" level=info msg="Start streaming server" Sep 10 23:25:41.296530 containerd[1447]: time="2025-09-10T23:25:41.295860760Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 10 23:25:41.296530 containerd[1447]: time="2025-09-10T23:25:41.295910400Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 10 23:25:41.296530 containerd[1447]: time="2025-09-10T23:25:41.295961280Z" level=info msg="containerd successfully booted in 0.040132s" Sep 10 23:25:41.296060 systemd[1]: Started containerd.service - containerd container runtime. Sep 10 23:25:41.457859 sshd_keygen[1445]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 10 23:25:41.476824 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Sep 10 23:25:41.485825 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 10 23:25:41.490965 systemd[1]: issuegen.service: Deactivated successfully. Sep 10 23:25:41.491186 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 10 23:25:41.493813 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 10 23:25:41.504789 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 10 23:25:41.507244 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 10 23:25:41.510907 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 10 23:25:41.513851 systemd[1]: Reached target getty.target - Login Prompts. Sep 10 23:25:41.542554 tar[1444]: linux-arm64/README.md Sep 10 23:25:41.559577 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 10 23:25:42.606668 systemd-networkd[1368]: eth0: Gained IPv6LL Sep 10 23:25:42.609329 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 10 23:25:42.610894 systemd[1]: Reached target network-online.target - Network is Online. Sep 10 23:25:42.625869 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 10 23:25:42.628067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:25:42.630012 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 10 23:25:42.642662 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 10 23:25:42.642847 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 10 23:25:42.646399 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 10 23:25:42.648812 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 10 23:25:43.154716 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 10 23:25:43.156102 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 10 23:25:43.159883 (kubelet)[1533]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 23:25:43.162461 systemd[1]: Startup finished in 540ms (kernel) + 5.754s (initrd) + 3.764s (userspace) = 10.058s. Sep 10 23:25:43.509789 kubelet[1533]: E0910 23:25:43.509653 1533 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 23:25:43.512006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 23:25:43.512152 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 23:25:43.512467 systemd[1]: kubelet.service: Consumed 740ms CPU time, 260M memory peak. Sep 10 23:25:46.155935 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 10 23:25:46.156989 systemd[1]: Started sshd@0-10.0.0.56:22-10.0.0.1:36034.service - OpenSSH per-connection server daemon (10.0.0.1:36034). Sep 10 23:25:46.207888 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 36034 ssh2: RSA SHA256:vI2v+Kj925DhJN+VWmdLDSx5Cqw/fvuZ8IHXlsQiGm4 Sep 10 23:25:46.209919 sshd-session[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:25:46.217915 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 10 23:25:46.224747 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 10 23:25:46.226462 systemd-logind[1435]: New session 1 of user core. Sep 10 23:25:46.232905 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Sep 10 23:25:46.234890 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 10 23:25:46.240713 (systemd)[1551]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 23:25:46.242845 systemd-logind[1435]: New session c1 of user core. Sep 10 23:25:46.334431 systemd[1551]: Queued start job for default target default.target. Sep 10 23:25:46.347416 systemd[1551]: Created slice app.slice - User Application Slice. Sep 10 23:25:46.347442 systemd[1551]: Reached target paths.target - Paths. Sep 10 23:25:46.347473 systemd[1551]: Reached target timers.target - Timers. Sep 10 23:25:46.348671 systemd[1551]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 10 23:25:46.357152 systemd[1551]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 10 23:25:46.357210 systemd[1551]: Reached target sockets.target - Sockets. Sep 10 23:25:46.357245 systemd[1551]: Reached target basic.target - Basic System. Sep 10 23:25:46.357272 systemd[1551]: Reached target default.target - Main User Target. Sep 10 23:25:46.357302 systemd[1551]: Startup finished in 109ms. Sep 10 23:25:46.357505 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 10 23:25:46.358850 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 10 23:25:46.418778 systemd[1]: Started sshd@1-10.0.0.56:22-10.0.0.1:36046.service - OpenSSH per-connection server daemon (10.0.0.1:36046). Sep 10 23:25:46.456096 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 36046 ssh2: RSA SHA256:vI2v+Kj925DhJN+VWmdLDSx5Cqw/fvuZ8IHXlsQiGm4 Sep 10 23:25:46.457460 sshd-session[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:25:46.461579 systemd-logind[1435]: New session 2 of user core. Sep 10 23:25:46.469659 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 10 23:25:46.519560 sshd[1564]: Connection closed by 10.0.0.1 port 36046 Sep 10 23:25:46.519977 sshd-session[1562]: pam_unix(sshd:session): session closed for user core Sep 10 23:25:46.528600 systemd[1]: sshd@1-10.0.0.56:22-10.0.0.1:36046.service: Deactivated successfully. Sep 10 23:25:46.530987 systemd[1]: session-2.scope: Deactivated successfully. Sep 10 23:25:46.532748 systemd-logind[1435]: Session 2 logged out. Waiting for processes to exit. Sep 10 23:25:46.534445 systemd[1]: Started sshd@2-10.0.0.56:22-10.0.0.1:36056.service - OpenSSH per-connection server daemon (10.0.0.1:36056). Sep 10 23:25:46.535927 systemd-logind[1435]: Removed session 2. Sep 10 23:25:46.576228 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 36056 ssh2: RSA SHA256:vI2v+Kj925DhJN+VWmdLDSx5Cqw/fvuZ8IHXlsQiGm4 Sep 10 23:25:46.576664 sshd-session[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:25:46.582035 systemd-logind[1435]: New session 3 of user core. Sep 10 23:25:46.591675 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 10 23:25:46.640485 sshd[1572]: Connection closed by 10.0.0.1 port 36056 Sep 10 23:25:46.640636 sshd-session[1569]: pam_unix(sshd:session): session closed for user core Sep 10 23:25:46.652670 systemd[1]: sshd@2-10.0.0.56:22-10.0.0.1:36056.service: Deactivated successfully. Sep 10 23:25:46.654154 systemd[1]: session-3.scope: Deactivated successfully. Sep 10 23:25:46.655478 systemd-logind[1435]: Session 3 logged out. Waiting for processes to exit. Sep 10 23:25:46.656572 systemd[1]: Started sshd@3-10.0.0.56:22-10.0.0.1:36058.service - OpenSSH per-connection server daemon (10.0.0.1:36058). Sep 10 23:25:46.657714 systemd-logind[1435]: Removed session 3. 
Sep 10 23:25:46.699917 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 36058 ssh2: RSA SHA256:vI2v+Kj925DhJN+VWmdLDSx5Cqw/fvuZ8IHXlsQiGm4
Sep 10 23:25:46.701064 sshd-session[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:25:46.705015 systemd-logind[1435]: New session 4 of user core.
Sep 10 23:25:46.714676 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 10 23:25:46.764853 sshd[1580]: Connection closed by 10.0.0.1 port 36058
Sep 10 23:25:46.765509 sshd-session[1577]: pam_unix(sshd:session): session closed for user core
Sep 10 23:25:46.780004 systemd[1]: sshd@3-10.0.0.56:22-10.0.0.1:36058.service: Deactivated successfully.
Sep 10 23:25:46.781542 systemd[1]: session-4.scope: Deactivated successfully.
Sep 10 23:25:46.782211 systemd-logind[1435]: Session 4 logged out. Waiting for processes to exit.
Sep 10 23:25:46.791785 systemd[1]: Started sshd@4-10.0.0.56:22-10.0.0.1:36072.service - OpenSSH per-connection server daemon (10.0.0.1:36072).
Sep 10 23:25:46.792871 systemd-logind[1435]: Removed session 4.
Sep 10 23:25:46.826493 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 36072 ssh2: RSA SHA256:vI2v+Kj925DhJN+VWmdLDSx5Cqw/fvuZ8IHXlsQiGm4
Sep 10 23:25:46.827840 sshd-session[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:25:46.831982 systemd-logind[1435]: New session 5 of user core.
Sep 10 23:25:46.846705 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 10 23:25:46.903068 sudo[1589]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 10 23:25:46.903336 sudo[1589]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:25:46.924433 sudo[1589]: pam_unix(sudo:session): session closed for user root
Sep 10 23:25:46.926382 sshd[1588]: Connection closed by 10.0.0.1 port 36072
Sep 10 23:25:46.926279 sshd-session[1585]: pam_unix(sshd:session): session closed for user core
Sep 10 23:25:46.941694 systemd[1]: sshd@4-10.0.0.56:22-10.0.0.1:36072.service: Deactivated successfully.
Sep 10 23:25:46.943298 systemd[1]: session-5.scope: Deactivated successfully.
Sep 10 23:25:46.943997 systemd-logind[1435]: Session 5 logged out. Waiting for processes to exit.
Sep 10 23:25:46.974458 systemd[1]: Started sshd@5-10.0.0.56:22-10.0.0.1:36074.service - OpenSSH per-connection server daemon (10.0.0.1:36074).
Sep 10 23:25:46.975778 systemd-logind[1435]: Removed session 5.
Sep 10 23:25:47.009831 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 36074 ssh2: RSA SHA256:vI2v+Kj925DhJN+VWmdLDSx5Cqw/fvuZ8IHXlsQiGm4
Sep 10 23:25:47.011201 sshd-session[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:25:47.015176 systemd-logind[1435]: New session 6 of user core.
Sep 10 23:25:47.030676 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 10 23:25:47.082944 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 10 23:25:47.083237 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:25:47.086273 sudo[1599]: pam_unix(sudo:session): session closed for user root
Sep 10 23:25:47.093139 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 10 23:25:47.093414 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:25:47.115953 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 10 23:25:47.140688 augenrules[1621]: No rules
Sep 10 23:25:47.141888 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 10 23:25:47.142109 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 10 23:25:47.143410 sudo[1598]: pam_unix(sudo:session): session closed for user root
Sep 10 23:25:47.144614 sshd[1597]: Connection closed by 10.0.0.1 port 36074
Sep 10 23:25:47.145103 sshd-session[1594]: pam_unix(sshd:session): session closed for user core
Sep 10 23:25:47.160204 systemd[1]: sshd@5-10.0.0.56:22-10.0.0.1:36074.service: Deactivated successfully.
Sep 10 23:25:47.161668 systemd[1]: session-6.scope: Deactivated successfully.
Sep 10 23:25:47.162893 systemd-logind[1435]: Session 6 logged out. Waiting for processes to exit.
Sep 10 23:25:47.174903 systemd[1]: Started sshd@6-10.0.0.56:22-10.0.0.1:36084.service - OpenSSH per-connection server daemon (10.0.0.1:36084).
Sep 10 23:25:47.175839 systemd-logind[1435]: Removed session 6.
Sep 10 23:25:47.211670 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 36084 ssh2: RSA SHA256:vI2v+Kj925DhJN+VWmdLDSx5Cqw/fvuZ8IHXlsQiGm4
Sep 10 23:25:47.212864 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:25:47.217457 systemd-logind[1435]: New session 7 of user core.
Sep 10 23:25:47.226682 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 10 23:25:47.277034 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 10 23:25:47.277304 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:25:47.590792 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 10 23:25:47.590869 (dockerd)[1655]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 10 23:25:47.799441 dockerd[1655]: time="2025-09-10T23:25:47.799005485Z" level=info msg="Starting up"
Sep 10 23:25:48.042014 dockerd[1655]: time="2025-09-10T23:25:48.041912461Z" level=info msg="Loading containers: start."
Sep 10 23:25:48.179542 kernel: Initializing XFRM netlink socket
Sep 10 23:25:48.252546 systemd-networkd[1368]: docker0: Link UP
Sep 10 23:25:48.286932 dockerd[1655]: time="2025-09-10T23:25:48.286830980Z" level=info msg="Loading containers: done."
Sep 10 23:25:48.301141 dockerd[1655]: time="2025-09-10T23:25:48.300657651Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 10 23:25:48.301141 dockerd[1655]: time="2025-09-10T23:25:48.300746293Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Sep 10 23:25:48.301141 dockerd[1655]: time="2025-09-10T23:25:48.300900578Z" level=info msg="Daemon has completed initialization"
Sep 10 23:25:48.327502 dockerd[1655]: time="2025-09-10T23:25:48.327433975Z" level=info msg="API listen on /run/docker.sock"
Sep 10 23:25:48.327648 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 10 23:25:48.859792 containerd[1447]: time="2025-09-10T23:25:48.859558488Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Sep 10 23:25:49.654824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount503256991.mount: Deactivated successfully.
Sep 10 23:25:50.815721 containerd[1447]: time="2025-09-10T23:25:50.815660341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:25:50.816830 containerd[1447]: time="2025-09-10T23:25:50.816536331Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363687"
Sep 10 23:25:50.817602 containerd[1447]: time="2025-09-10T23:25:50.817567084Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:25:50.820972 containerd[1447]: time="2025-09-10T23:25:50.820941288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:25:50.822114 containerd[1447]: time="2025-09-10T23:25:50.822082655Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.962483273s"
Sep 10 23:25:50.822213 containerd[1447]: time="2025-09-10T23:25:50.822198618Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\""
Sep 10 23:25:50.822949 containerd[1447]: time="2025-09-10T23:25:50.822920637Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Sep 10 23:25:52.056803 containerd[1447]: time="2025-09-10T23:25:52.056754504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:25:52.057279 containerd[1447]: time="2025-09-10T23:25:52.057244827Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531202"
Sep 10 23:25:52.058119 containerd[1447]: time="2025-09-10T23:25:52.058092576Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:25:52.062029 containerd[1447]: time="2025-09-10T23:25:52.061638077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:25:52.062761 containerd[1447]: time="2025-09-10T23:25:52.062738157Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.239784211s"
Sep 10 23:25:52.062802 containerd[1447]: time="2025-09-10T23:25:52.062766715Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\""
Sep 10 23:25:52.063175 containerd[1447]: time="2025-09-10T23:25:52.063151069Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Sep 10 23:25:53.156259 containerd[1447]: time="2025-09-10T23:25:53.155177566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:25:53.156259 containerd[1447]: time="2025-09-10T23:25:53.156210205Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484326"
Sep 10 23:25:53.156708 containerd[1447]: time="2025-09-10T23:25:53.156682995Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:25:53.159708 containerd[1447]: time="2025-09-10T23:25:53.159672782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:25:53.161397 containerd[1447]: time="2025-09-10T23:25:53.161368272Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.098122149s"
Sep 10 23:25:53.161507 containerd[1447]: time="2025-09-10T23:25:53.161490117Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\""
Sep 10 23:25:53.162081 containerd[1447]: time="2025-09-10T23:25:53.162060501Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Sep 10 23:25:53.762624 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 10 23:25:53.771767 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:25:53.877863 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:25:53.881312 (kubelet)[1928]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 10 23:25:53.922715 kubelet[1928]: E0910 23:25:53.922638 1928 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 10 23:25:53.925878 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 23:25:53.926023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 10 23:25:53.926498 systemd[1]: kubelet.service: Consumed 136ms CPU time, 107.7M memory peak.
Sep 10 23:25:54.277004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount461195093.mount: Deactivated successfully.
Sep 10 23:25:54.637079 containerd[1447]: time="2025-09-10T23:25:54.636946177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:25:54.637920 containerd[1447]: time="2025-09-10T23:25:54.637791794Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417819"
Sep 10 23:25:54.638746 containerd[1447]: time="2025-09-10T23:25:54.638708983Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:25:54.641030 containerd[1447]: time="2025-09-10T23:25:54.640986361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:25:54.641644 containerd[1447]: time="2025-09-10T23:25:54.641610896Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.479448024s"
Sep 10 23:25:54.641701 containerd[1447]: time="2025-09-10T23:25:54.641644076Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\""
Sep 10 23:25:54.642081 containerd[1447]: time="2025-09-10T23:25:54.642054850Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 10 23:25:55.170910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount530607865.mount: Deactivated successfully.
Sep 10 23:25:55.820024 containerd[1447]: time="2025-09-10T23:25:55.819976967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:25:55.820918 containerd[1447]: time="2025-09-10T23:25:55.820663474Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 10 23:25:55.821730 containerd[1447]: time="2025-09-10T23:25:55.821696339Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:25:55.825140 containerd[1447]: time="2025-09-10T23:25:55.825105065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:25:55.827446 containerd[1447]: time="2025-09-10T23:25:55.826474774Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.184388091s"
Sep 10 23:25:55.827446 containerd[1447]: time="2025-09-10T23:25:55.826507260Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 10 23:25:55.827666 containerd[1447]: time="2025-09-10T23:25:55.827596021Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 10 23:25:56.263675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1959581937.mount: Deactivated successfully.
Sep 10 23:25:56.269324 containerd[1447]: time="2025-09-10T23:25:56.269272629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:25:56.270113 containerd[1447]: time="2025-09-10T23:25:56.270065981Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 10 23:25:56.271137 containerd[1447]: time="2025-09-10T23:25:56.271105115Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:25:56.273700 containerd[1447]: time="2025-09-10T23:25:56.273667855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:25:56.274453 containerd[1447]: time="2025-09-10T23:25:56.274419420Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 446.741894ms"
Sep 10 23:25:56.274497 containerd[1447]: time="2025-09-10T23:25:56.274460052Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 10 23:25:56.274930 containerd[1447]: time="2025-09-10T23:25:56.274892539Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 10 23:25:56.843894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2301968697.mount: Deactivated successfully.
Sep 10 23:25:58.701341 containerd[1447]: time="2025-09-10T23:25:58.700989749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:25:58.701834 containerd[1447]: time="2025-09-10T23:25:58.701409818Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167"
Sep 10 23:25:58.702554 containerd[1447]: time="2025-09-10T23:25:58.702192337Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:25:58.705497 containerd[1447]: time="2025-09-10T23:25:58.705439963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:25:58.707117 containerd[1447]: time="2025-09-10T23:25:58.706819869Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.431896689s"
Sep 10 23:25:58.707117 containerd[1447]: time="2025-09-10T23:25:58.706851967Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Sep 10 23:26:04.176426 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 10 23:26:04.184703 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:26:04.292069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:26:04.296038 (kubelet)[2085]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 10 23:26:04.333147 kubelet[2085]: E0910 23:26:04.333086 2085 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 10 23:26:04.335652 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 23:26:04.335800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 10 23:26:04.337650 systemd[1]: kubelet.service: Consumed 132ms CPU time, 107.5M memory peak.
Sep 10 23:26:04.613496 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:26:04.613661 systemd[1]: kubelet.service: Consumed 132ms CPU time, 107.5M memory peak.
Sep 10 23:26:04.626744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:26:04.648929 systemd[1]: Reload requested from client PID 2100 ('systemctl') (unit session-7.scope)...
Sep 10 23:26:04.648956 systemd[1]: Reloading...
Sep 10 23:26:04.725553 zram_generator::config[2144]: No configuration found.
Sep 10 23:26:04.930505 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 23:26:05.004544 systemd[1]: Reloading finished in 355 ms.
Sep 10 23:26:05.050249 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:26:05.053012 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:26:05.054375 systemd[1]: kubelet.service: Deactivated successfully.
Sep 10 23:26:05.054632 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:26:05.054677 systemd[1]: kubelet.service: Consumed 91ms CPU time, 94.9M memory peak.
Sep 10 23:26:05.056199 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:26:05.157019 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:26:05.160460 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 10 23:26:05.192190 kubelet[2191]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 23:26:05.192190 kubelet[2191]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 10 23:26:05.192190 kubelet[2191]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 23:26:05.192190 kubelet[2191]: I0910 23:26:05.192166 2191 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 10 23:26:05.878777 kubelet[2191]: I0910 23:26:05.878720 2191 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 10 23:26:05.878910 kubelet[2191]: I0910 23:26:05.878846 2191 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 10 23:26:05.879548 kubelet[2191]: I0910 23:26:05.879107 2191 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 10 23:26:05.899227 kubelet[2191]: E0910 23:26:05.899181 2191 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.56:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError"
Sep 10 23:26:05.900190 kubelet[2191]: I0910 23:26:05.900164 2191 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 10 23:26:05.904962 kubelet[2191]: E0910 23:26:05.904930 2191 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 10 23:26:05.904962 kubelet[2191]: I0910 23:26:05.904962 2191 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 10 23:26:05.907755 kubelet[2191]: I0910 23:26:05.907722 2191 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 10 23:26:05.908375 kubelet[2191]: I0910 23:26:05.908331 2191 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 10 23:26:05.908586 kubelet[2191]: I0910 23:26:05.908380 2191 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 10 23:26:05.908687 kubelet[2191]: I0910 23:26:05.908668 2191 topology_manager.go:138] "Creating topology manager with none policy"
Sep 10 23:26:05.908687 kubelet[2191]: I0910 23:26:05.908679 2191 container_manager_linux.go:304] "Creating device plugin manager"
Sep 10 23:26:05.908911 kubelet[2191]: I0910 23:26:05.908896 2191 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 23:26:05.911821 kubelet[2191]: I0910 23:26:05.911778 2191 kubelet.go:446] "Attempting to sync node with API server"
Sep 10 23:26:05.911821 kubelet[2191]: I0910 23:26:05.911808 2191 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 10 23:26:05.911897 kubelet[2191]: I0910 23:26:05.911832 2191 kubelet.go:352] "Adding apiserver pod source"
Sep 10 23:26:05.911897 kubelet[2191]: I0910 23:26:05.911843 2191 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 10 23:26:05.917128 kubelet[2191]: W0910 23:26:05.917067 2191 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.56:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused
Sep 10 23:26:05.917163 kubelet[2191]: E0910 23:26:05.917144 2191 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.56:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError"
Sep 10 23:26:05.917248 kubelet[2191]: I0910 23:26:05.917225 2191 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Sep 10 23:26:05.918087 kubelet[2191]: I0910 23:26:05.918055 2191 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 10 23:26:05.918663 kubelet[2191]: W0910 23:26:05.918601 2191 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused
Sep 10 23:26:05.918727 kubelet[2191]: E0910 23:26:05.918659 2191 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError"
Sep 10 23:26:05.920857 kubelet[2191]: W0910 23:26:05.920559 2191 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 10 23:26:05.921674 kubelet[2191]: I0910 23:26:05.921649 2191 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 10 23:26:05.921743 kubelet[2191]: I0910 23:26:05.921681 2191 server.go:1287] "Started kubelet"
Sep 10 23:26:05.922090 kubelet[2191]: I0910 23:26:05.922051 2191 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 10 23:26:05.922722 kubelet[2191]: I0910 23:26:05.922664 2191 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 10 23:26:05.922916 kubelet[2191]: I0910 23:26:05.922884 2191 server.go:479] "Adding debug handlers to kubelet server"
Sep 10 23:26:05.923021 kubelet[2191]: I0910 23:26:05.922993 2191 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 10 23:26:05.924331 kubelet[2191]: I0910 23:26:05.924304 2191 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 10 23:26:05.924712 kubelet[2191]: I0910 23:26:05.924612 2191 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 10 23:26:05.925947 kubelet[2191]: E0910 23:26:05.924467 2191 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.56:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.56:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18640f78c2e5bbec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 23:26:05.92166398 +0000 UTC m=+0.758477055,LastTimestamp:2025-09-10 23:26:05.92166398 +0000 UTC m=+0.758477055,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 10 23:26:05.925947 kubelet[2191]: E0910 23:26:05.925437 2191 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 23:26:05.925947 kubelet[2191]: I0910 23:26:05.925463 2191 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 10 23:26:05.925947 kubelet[2191]: I0910 23:26:05.925537 2191 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 10 23:26:05.925947 kubelet[2191]: I0910 23:26:05.925583 2191 reconciler.go:26] "Reconciler: start to sync state"
Sep 10 23:26:05.925947 kubelet[2191]: W0910 23:26:05.925855 2191 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused
Sep 10 23:26:05.925947 kubelet[2191]: E0910 23:26:05.925893 2191 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError"
Sep 10 23:26:05.926166 kubelet[2191]: E0910 23:26:05.925873 2191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="200ms"
Sep 10 23:26:05.926618 kubelet[2191]: I0910 23:26:05.926587 2191 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 10 23:26:05.927123 kubelet[2191]: E0910 23:26:05.927089 2191 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 10 23:26:05.927537 kubelet[2191]: I0910 23:26:05.927479 2191 factory.go:221] Registration of the containerd container factory successfully
Sep 10 23:26:05.927537 kubelet[2191]: I0910 23:26:05.927496 2191 factory.go:221] Registration of the systemd container factory successfully
Sep 10 23:26:05.941684 kubelet[2191]: I0910 23:26:05.941662 2191 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 10 23:26:05.941684 kubelet[2191]: I0910 23:26:05.941683 2191 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 10 23:26:05.941814 kubelet[2191]: I0910 23:26:05.941700 2191 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 23:26:05.942317 kubelet[2191]: I0910 23:26:05.942271 2191 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 10 23:26:05.943329 kubelet[2191]: I0910 23:26:05.943298 2191 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Sep 10 23:26:05.943329 kubelet[2191]: I0910 23:26:05.943321 2191 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 10 23:26:05.943389 kubelet[2191]: I0910 23:26:05.943340 2191 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 10 23:26:05.943389 kubelet[2191]: I0910 23:26:05.943349 2191 kubelet.go:2382] "Starting kubelet main sync loop" Sep 10 23:26:05.943436 kubelet[2191]: E0910 23:26:05.943386 2191 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 23:26:05.945927 kubelet[2191]: W0910 23:26:05.945901 2191 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Sep 10 23:26:05.946050 kubelet[2191]: E0910 23:26:05.945938 2191 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" Sep 10 23:26:06.026435 kubelet[2191]: E0910 23:26:06.026389 2191 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 23:26:06.028427 kubelet[2191]: I0910 23:26:06.028096 2191 policy_none.go:49] "None policy: Start" Sep 10 23:26:06.028427 kubelet[2191]: I0910 23:26:06.028127 2191 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 10 23:26:06.028427 kubelet[2191]: I0910 23:26:06.028139 2191 state_mem.go:35] "Initializing new in-memory state store" Sep 10 23:26:06.033411 systemd[1]: Created slice kubepods.slice - libcontainer container 
kubepods.slice. Sep 10 23:26:06.043621 kubelet[2191]: E0910 23:26:06.043585 2191 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 23:26:06.045636 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 10 23:26:06.048838 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 10 23:26:06.060600 kubelet[2191]: I0910 23:26:06.060385 2191 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 23:26:06.060686 kubelet[2191]: I0910 23:26:06.060622 2191 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 23:26:06.060686 kubelet[2191]: I0910 23:26:06.060639 2191 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 23:26:06.061279 kubelet[2191]: I0910 23:26:06.061011 2191 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 23:26:06.061970 kubelet[2191]: E0910 23:26:06.061948 2191 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 10 23:26:06.062050 kubelet[2191]: E0910 23:26:06.061987 2191 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 10 23:26:06.127399 kubelet[2191]: E0910 23:26:06.127353 2191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="400ms" Sep 10 23:26:06.164145 kubelet[2191]: I0910 23:26:06.163103 2191 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 23:26:06.164145 kubelet[2191]: E0910 23:26:06.163554 2191 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost" Sep 10 23:26:06.252412 systemd[1]: Created slice kubepods-burstable-pod77b96cc29ce66cf31be8325780e6606c.slice - libcontainer container kubepods-burstable-pod77b96cc29ce66cf31be8325780e6606c.slice. Sep 10 23:26:06.277400 kubelet[2191]: E0910 23:26:06.277199 2191 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:26:06.280713 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. Sep 10 23:26:06.292872 kubelet[2191]: E0910 23:26:06.292833 2191 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:26:06.295670 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. 
Sep 10 23:26:06.297231 kubelet[2191]: E0910 23:26:06.297198 2191 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 23:26:06.328514 kubelet[2191]: I0910 23:26:06.328473 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:26:06.328514 kubelet[2191]: I0910 23:26:06.328513 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:26:06.328664 kubelet[2191]: I0910 23:26:06.328549 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77b96cc29ce66cf31be8325780e6606c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"77b96cc29ce66cf31be8325780e6606c\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 23:26:06.328664 kubelet[2191]: I0910 23:26:06.328566 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:26:06.328664 kubelet[2191]: I0910 23:26:06.328581 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:26:06.328664 kubelet[2191]: I0910 23:26:06.328596 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:26:06.328664 kubelet[2191]: I0910 23:26:06.328636 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost"
Sep 10 23:26:06.328765 kubelet[2191]: I0910 23:26:06.328669 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77b96cc29ce66cf31be8325780e6606c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"77b96cc29ce66cf31be8325780e6606c\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 23:26:06.328765 kubelet[2191]: I0910 23:26:06.328693 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77b96cc29ce66cf31be8325780e6606c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"77b96cc29ce66cf31be8325780e6606c\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 23:26:06.365636 kubelet[2191]: I0910 23:26:06.365587 2191 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 10 23:26:06.365965 kubelet[2191]: E0910 23:26:06.365932 2191 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost"
Sep 10 23:26:06.528485 kubelet[2191]: E0910 23:26:06.528371 2191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="800ms"
Sep 10 23:26:06.578380 containerd[1447]: time="2025-09-10T23:26:06.578344151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:77b96cc29ce66cf31be8325780e6606c,Namespace:kube-system,Attempt:0,}"
Sep 10 23:26:06.593978 containerd[1447]: time="2025-09-10T23:26:06.593938801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}"
Sep 10 23:26:06.598859 containerd[1447]: time="2025-09-10T23:26:06.598806151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}"
Sep 10 23:26:06.767974 kubelet[2191]: I0910 23:26:06.767937 2191 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 10 23:26:06.768265 kubelet[2191]: E0910 23:26:06.768244 2191 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost"
Sep 10 23:26:07.054344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2175726598.mount: Deactivated successfully.
Sep 10 23:26:07.060059 containerd[1447]: time="2025-09-10T23:26:07.060002125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 23:26:07.063062 containerd[1447]: time="2025-09-10T23:26:07.062997517Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Sep 10 23:26:07.063718 containerd[1447]: time="2025-09-10T23:26:07.063662645Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 23:26:07.064666 containerd[1447]: time="2025-09-10T23:26:07.064640155Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 23:26:07.065563 containerd[1447]: time="2025-09-10T23:26:07.065500417Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 10 23:26:07.065658 containerd[1447]: time="2025-09-10T23:26:07.065634889Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 23:26:07.066455 containerd[1447]: time="2025-09-10T23:26:07.066415027Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 10 23:26:07.068592 containerd[1447]: time="2025-09-10T23:26:07.068558629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 23:26:07.070985 containerd[1447]: time="2025-09-10T23:26:07.070945199Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 472.068325ms"
Sep 10 23:26:07.073293 containerd[1447]: time="2025-09-10T23:26:07.073256562Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 494.834334ms"
Sep 10 23:26:07.073969 containerd[1447]: time="2025-09-10T23:26:07.073941031Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 479.921597ms"
Sep 10 23:26:07.099480 kubelet[2191]: W0910 23:26:07.099425 2191 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused
Sep 10 23:26:07.099612 kubelet[2191]: E0910 23:26:07.099491 2191 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError"
Sep 10 23:26:07.154299 containerd[1447]: time="2025-09-10T23:26:07.154188163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 23:26:07.154299 containerd[1447]: time="2025-09-10T23:26:07.154260214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 23:26:07.154299 containerd[1447]: time="2025-09-10T23:26:07.154276359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 23:26:07.154562 containerd[1447]: time="2025-09-10T23:26:07.154357722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 23:26:07.154662 containerd[1447]: time="2025-09-10T23:26:07.154060404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 23:26:07.154662 containerd[1447]: time="2025-09-10T23:26:07.154537950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 23:26:07.154662 containerd[1447]: time="2025-09-10T23:26:07.154551058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 23:26:07.154662 containerd[1447]: time="2025-09-10T23:26:07.154619633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 23:26:07.155621 containerd[1447]: time="2025-09-10T23:26:07.155297508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 23:26:07.155621 containerd[1447]: time="2025-09-10T23:26:07.155352696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 23:26:07.155621 containerd[1447]: time="2025-09-10T23:26:07.155370998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 23:26:07.155621 containerd[1447]: time="2025-09-10T23:26:07.155438414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 23:26:07.164769 kubelet[2191]: W0910 23:26:07.164662 2191 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.56:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused
Sep 10 23:26:07.164769 kubelet[2191]: E0910 23:26:07.164732 2191 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.56:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError"
Sep 10 23:26:07.177708 systemd[1]: Started cri-containerd-af7f8d021748d4c6446ed7d812ee91ed95b0892206654075b15b9a42eb96743e.scope - libcontainer container af7f8d021748d4c6446ed7d812ee91ed95b0892206654075b15b9a42eb96743e.
Sep 10 23:26:07.179293 systemd[1]: Started cri-containerd-ba95061dc569f6fe770f2d5109c1063f8ba3ee64a6f1998175968511cd145f70.scope - libcontainer container ba95061dc569f6fe770f2d5109c1063f8ba3ee64a6f1998175968511cd145f70.
Sep 10 23:26:07.180520 systemd[1]: Started cri-containerd-fb1641bf49c4b4a7b665ccc3aa15027f883d197fbe7523569c7d4469ac7edd26.scope - libcontainer container fb1641bf49c4b4a7b665ccc3aa15027f883d197fbe7523569c7d4469ac7edd26.
Sep 10 23:26:07.214519 containerd[1447]: time="2025-09-10T23:26:07.214481389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:77b96cc29ce66cf31be8325780e6606c,Namespace:kube-system,Attempt:0,} returns sandbox id \"af7f8d021748d4c6446ed7d812ee91ed95b0892206654075b15b9a42eb96743e\""
Sep 10 23:26:07.218149 containerd[1447]: time="2025-09-10T23:26:07.217477341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba95061dc569f6fe770f2d5109c1063f8ba3ee64a6f1998175968511cd145f70\""
Sep 10 23:26:07.219157 containerd[1447]: time="2025-09-10T23:26:07.219092804Z" level=info msg="CreateContainer within sandbox \"af7f8d021748d4c6446ed7d812ee91ed95b0892206654075b15b9a42eb96743e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 10 23:26:07.220544 containerd[1447]: time="2025-09-10T23:26:07.220498068Z" level=info msg="CreateContainer within sandbox \"ba95061dc569f6fe770f2d5109c1063f8ba3ee64a6f1998175968511cd145f70\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 10 23:26:07.221768 containerd[1447]: time="2025-09-10T23:26:07.221738768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb1641bf49c4b4a7b665ccc3aa15027f883d197fbe7523569c7d4469ac7edd26\""
Sep 10 23:26:07.227423 containerd[1447]: time="2025-09-10T23:26:07.227323538Z" level=info msg="CreateContainer within sandbox \"fb1641bf49c4b4a7b665ccc3aa15027f883d197fbe7523569c7d4469ac7edd26\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 10 23:26:07.240834 containerd[1447]: time="2025-09-10T23:26:07.240786096Z" level=info msg="CreateContainer within sandbox \"ba95061dc569f6fe770f2d5109c1063f8ba3ee64a6f1998175968511cd145f70\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"604513fad402d2ff2c18ce404f1ab9642f14ca6f35399e77b98a0838b1ba9158\""
Sep 10 23:26:07.241638 containerd[1447]: time="2025-09-10T23:26:07.241604318Z" level=info msg="StartContainer for \"604513fad402d2ff2c18ce404f1ab9642f14ca6f35399e77b98a0838b1ba9158\""
Sep 10 23:26:07.242144 containerd[1447]: time="2025-09-10T23:26:07.242101166Z" level=info msg="CreateContainer within sandbox \"af7f8d021748d4c6446ed7d812ee91ed95b0892206654075b15b9a42eb96743e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"68217a6b5ecd8d4cf630fc899df9946908236c177dd9d57894e335cc9f246fbb\""
Sep 10 23:26:07.242438 containerd[1447]: time="2025-09-10T23:26:07.242416985Z" level=info msg="StartContainer for \"68217a6b5ecd8d4cf630fc899df9946908236c177dd9d57894e335cc9f246fbb\""
Sep 10 23:26:07.244794 containerd[1447]: time="2025-09-10T23:26:07.244659653Z" level=info msg="CreateContainer within sandbox \"fb1641bf49c4b4a7b665ccc3aa15027f883d197fbe7523569c7d4469ac7edd26\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"969330d48617fd1f9c69ef011e4e3e3fe61ef58e86e3acb4845d90624611c15f\""
Sep 10 23:26:07.245200 containerd[1447]: time="2025-09-10T23:26:07.245170607Z" level=info msg="StartContainer for \"969330d48617fd1f9c69ef011e4e3e3fe61ef58e86e3acb4845d90624611c15f\""
Sep 10 23:26:07.271678 systemd[1]: Started cri-containerd-604513fad402d2ff2c18ce404f1ab9642f14ca6f35399e77b98a0838b1ba9158.scope - libcontainer container 604513fad402d2ff2c18ce404f1ab9642f14ca6f35399e77b98a0838b1ba9158.
Sep 10 23:26:07.272626 systemd[1]: Started cri-containerd-68217a6b5ecd8d4cf630fc899df9946908236c177dd9d57894e335cc9f246fbb.scope - libcontainer container 68217a6b5ecd8d4cf630fc899df9946908236c177dd9d57894e335cc9f246fbb.
Sep 10 23:26:07.276626 systemd[1]: Started cri-containerd-969330d48617fd1f9c69ef011e4e3e3fe61ef58e86e3acb4845d90624611c15f.scope - libcontainer container 969330d48617fd1f9c69ef011e4e3e3fe61ef58e86e3acb4845d90624611c15f.
Sep 10 23:26:07.310073 containerd[1447]: time="2025-09-10T23:26:07.309976142Z" level=info msg="StartContainer for \"604513fad402d2ff2c18ce404f1ab9642f14ca6f35399e77b98a0838b1ba9158\" returns successfully"
Sep 10 23:26:07.316729 containerd[1447]: time="2025-09-10T23:26:07.316100319Z" level=info msg="StartContainer for \"68217a6b5ecd8d4cf630fc899df9946908236c177dd9d57894e335cc9f246fbb\" returns successfully"
Sep 10 23:26:07.321889 containerd[1447]: time="2025-09-10T23:26:07.321794544Z" level=info msg="StartContainer for \"969330d48617fd1f9c69ef011e4e3e3fe61ef58e86e3acb4845d90624611c15f\" returns successfully"
Sep 10 23:26:07.329655 kubelet[2191]: E0910 23:26:07.329611 2191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="1.6s"
Sep 10 23:26:07.385299 kubelet[2191]: W0910 23:26:07.385241 2191 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused
Sep 10 23:26:07.385418 kubelet[2191]: E0910 23:26:07.385307 2191 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError"
Sep 10 23:26:07.569882 kubelet[2191]: I0910 23:26:07.569779 2191 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 10 23:26:07.953936 kubelet[2191]: E0910 23:26:07.953201 2191 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 23:26:07.955154 kubelet[2191]: E0910 23:26:07.954981 2191 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 23:26:07.956096 kubelet[2191]: E0910 23:26:07.956076 2191 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 23:26:08.958377 kubelet[2191]: E0910 23:26:08.958086 2191 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 23:26:08.958377 kubelet[2191]: E0910 23:26:08.958232 2191 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 23:26:09.099150 kubelet[2191]: E0910 23:26:09.099106 2191 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 10 23:26:09.242540 kubelet[2191]: E0910 23:26:09.242363 2191 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18640f78c2e5bbec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 23:26:05.92166398 +0000 UTC m=+0.758477055,LastTimestamp:2025-09-10 23:26:05.92166398 +0000 UTC m=+0.758477055,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 10 23:26:09.295188 kubelet[2191]: I0910 23:26:09.295072 2191 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 10 23:26:09.295188 kubelet[2191]: E0910 23:26:09.295131 2191 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 10 23:26:09.325820 kubelet[2191]: I0910 23:26:09.325777 2191 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 10 23:26:09.333365 kubelet[2191]: E0910 23:26:09.333334 2191 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 10 23:26:09.333583 kubelet[2191]: I0910 23:26:09.333412 2191 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:26:09.335987 kubelet[2191]: E0910 23:26:09.335800 2191 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:26:09.335987 kubelet[2191]: I0910 23:26:09.335826 2191 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 10 23:26:09.337604 kubelet[2191]: E0910 23:26:09.337575 2191 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 10 23:26:09.917100 kubelet[2191]: I0910 23:26:09.916843 2191 apiserver.go:52] "Watching apiserver"
Sep 10 23:26:09.926464 kubelet[2191]: I0910 23:26:09.926439 2191 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 10 23:26:09.958500 kubelet[2191]: I0910 23:26:09.958364 2191 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 10 23:26:09.961182 kubelet[2191]: E0910 23:26:09.961004 2191 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 10 23:26:11.097577 systemd[1]: Reload requested from client PID 2472 ('systemctl') (unit session-7.scope)...
Sep 10 23:26:11.097591 systemd[1]: Reloading...
Sep 10 23:26:11.164564 zram_generator::config[2516]: No configuration found.
Sep 10 23:26:11.400462 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 23:26:11.486900 systemd[1]: Reloading finished in 389 ms.
Sep 10 23:26:11.504559 kubelet[2191]: I0910 23:26:11.504510 2191 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 10 23:26:11.504716 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:26:11.516977 systemd[1]: kubelet.service: Deactivated successfully.
Sep 10 23:26:11.517190 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:26:11.517237 systemd[1]: kubelet.service: Consumed 1.132s CPU time, 127.8M memory peak.
Sep 10 23:26:11.528801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:26:11.635940 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:26:11.639434 (kubelet)[2558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 10 23:26:11.675281 kubelet[2558]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 23:26:11.675281 kubelet[2558]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 10 23:26:11.675281 kubelet[2558]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 23:26:11.675281 kubelet[2558]: I0910 23:26:11.675235 2558 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 10 23:26:11.684579 kubelet[2558]: I0910 23:26:11.683773 2558 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 10 23:26:11.684579 kubelet[2558]: I0910 23:26:11.683798 2558 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 10 23:26:11.684579 kubelet[2558]: I0910 23:26:11.684029 2558 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 10 23:26:11.685207 kubelet[2558]: I0910 23:26:11.685191 2558 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 10 23:26:11.687336 kubelet[2558]: I0910 23:26:11.687303 2558 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 10 23:26:11.692559 kubelet[2558]: E0910 23:26:11.692068 2558 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 10 23:26:11.692559 kubelet[2558]: I0910 23:26:11.692096 2558 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 10 23:26:11.694500 kubelet[2558]: I0910 23:26:11.694378 2558 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 10 23:26:11.694714 kubelet[2558]: I0910 23:26:11.694684 2558 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 10 23:26:11.694885 kubelet[2558]: I0910 23:26:11.694712 2558 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 10 23:26:11.694959 kubelet[2558]: I0910 23:26:11.694896 2558 topology_manager.go:138] "Creating topology manager with none policy"
Sep 10 23:26:11.694959 kubelet[2558]: I0910 23:26:11.694905 2558 container_manager_linux.go:304] "Creating device plugin manager"
Sep 10 23:26:11.695014 kubelet[2558]: I0910 23:26:11.694963 2558 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 23:26:11.695134 kubelet[2558]: I0910 23:26:11.695115 2558 kubelet.go:446] "Attempting to sync node with API server"
Sep 10 23:26:11.695134 kubelet[2558]: I0910 23:26:11.695130 2558 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 10 23:26:11.695291 kubelet[2558]: I0910 23:26:11.695145 2558 kubelet.go:352] "Adding apiserver pod source"
Sep 10 23:26:11.695291 kubelet[2558]: I0910 23:26:11.695154 2558 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 10 23:26:11.696306 kubelet[2558]: I0910 23:26:11.695971 2558 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Sep 10 23:26:11.697344 kubelet[2558]: I0910 23:26:11.697073 2558 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 10 23:26:11.699413 kubelet[2558]: I0910 23:26:11.698428 2558 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 10 23:26:11.699738 kubelet[2558]: I0910 23:26:11.699722 2558 server.go:1287] "Started kubelet"
Sep 10 23:26:11.700078 kubelet[2558]: I0910 23:26:11.699920 2558 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 10 23:26:11.700283 kubelet[2558]: I0910 23:26:11.700252 2558 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 10 23:26:11.704020 kubelet[2558]: I0910 23:26:11.700836 2558 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 10 23:26:11.705527 kubelet[2558]: I0910 23:26:11.704644 2558 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 10 23:26:11.705527 kubelet[2558]: I0910 23:26:11.705456 2558 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 10 23:26:11.706186 kubelet[2558]: I0910 23:26:11.706122 2558 server.go:479] "Adding debug handlers to kubelet server"
Sep 10 23:26:11.708118 kubelet[2558]: I0910 23:26:11.707272 2558 factory.go:221] Registration of the systemd container factory successfully
Sep 10 23:26:11.708118 kubelet[2558]: I0910 23:26:11.707379 2558 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 10 23:26:11.712603 kubelet[2558]: I0910 23:26:11.710314 2558 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 10 23:26:11.712603 kubelet[2558]: E0910 23:26:11.710419 2558 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 23:26:11.712603 kubelet[2558]: I0910 23:26:11.710913 2558 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 10 23:26:11.712603 kubelet[2558]: I0910 23:26:11.711026 2558 reconciler.go:26] "Reconciler: start to sync state"
Sep 10 23:26:11.715624 kubelet[2558]: I0910 23:26:11.715022 2558 factory.go:221] Registration of the containerd container factory successfully
Sep 10 23:26:11.727266 kubelet[2558]: I0910 23:26:11.727235 2558 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 10 23:26:11.731169 kubelet[2558]: I0910 23:26:11.731143 2558 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 10 23:26:11.731169 kubelet[2558]: I0910 23:26:11.731167 2558 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 10 23:26:11.731925 kubelet[2558]: I0910 23:26:11.731221 2558 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 10 23:26:11.731925 kubelet[2558]: I0910 23:26:11.731230 2558 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 10 23:26:11.731925 kubelet[2558]: E0910 23:26:11.731271 2558 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 10 23:26:11.756812 kubelet[2558]: I0910 23:26:11.756785 2558 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 10 23:26:11.756812 kubelet[2558]: I0910 23:26:11.756806 2558 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 10 23:26:11.756939 kubelet[2558]: I0910 23:26:11.756826 2558 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 23:26:11.756994 kubelet[2558]: I0910 23:26:11.756975 2558 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 10 23:26:11.757031 kubelet[2558]: I0910 23:26:11.756992 2558 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 10 23:26:11.757031 kubelet[2558]: I0910 23:26:11.757010 2558 policy_none.go:49] "None policy: Start"
Sep 10 23:26:11.757031 kubelet[2558]: I0910 23:26:11.757017 2558 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 10 23:26:11.757031 kubelet[2558]: I0910 23:26:11.757026 2558 state_mem.go:35] "Initializing new in-memory state store"
Sep 10 23:26:11.757127 kubelet[2558]: I0910 23:26:11.757117 2558 state_mem.go:75] "Updated machine memory state"
Sep 10 23:26:11.760819 kubelet[2558]: I0910 23:26:11.760499 2558 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 10 23:26:11.760819 kubelet[2558]: I0910 23:26:11.760672 2558 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 10 23:26:11.760819 kubelet[2558]: I0910 23:26:11.760682 2558 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 10 23:26:11.760934 kubelet[2558]: I0910 23:26:11.760849 2558 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 10 23:26:11.761946 kubelet[2558]: E0910 23:26:11.761927 2558 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 10 23:26:11.832763 kubelet[2558]: I0910 23:26:11.832727 2558 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 10 23:26:11.832869 kubelet[2558]: I0910 23:26:11.832780 2558 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:26:11.832869 kubelet[2558]: I0910 23:26:11.832729 2558 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 10 23:26:11.862872 kubelet[2558]: I0910 23:26:11.862822 2558 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 10 23:26:11.869243 kubelet[2558]: I0910 23:26:11.869196 2558 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 10 23:26:11.869346 kubelet[2558]: I0910 23:26:11.869285 2558 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 10 23:26:11.912238 kubelet[2558]: I0910 23:26:11.912193 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:26:11.912238 kubelet[2558]: I0910 23:26:11.912230 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:26:11.912238 kubelet[2558]: I0910 23:26:11.912249 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:26:11.912428 kubelet[2558]: I0910 23:26:11.912267 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77b96cc29ce66cf31be8325780e6606c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"77b96cc29ce66cf31be8325780e6606c\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 23:26:11.912428 kubelet[2558]: I0910 23:26:11.912287 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:26:11.912428 kubelet[2558]: I0910 23:26:11.912325 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:26:11.912428 kubelet[2558]: I0910 23:26:11.912356 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost"
Sep 10 23:26:11.912428 kubelet[2558]: I0910 23:26:11.912376 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77b96cc29ce66cf31be8325780e6606c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"77b96cc29ce66cf31be8325780e6606c\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 23:26:11.912579 kubelet[2558]: I0910 23:26:11.912394 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77b96cc29ce66cf31be8325780e6606c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"77b96cc29ce66cf31be8325780e6606c\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 23:26:12.095372 sudo[2596]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 10 23:26:12.096905 sudo[2596]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Sep 10 23:26:12.544518 sudo[2596]: pam_unix(sudo:session): session closed for user root
Sep 10 23:26:12.695896 kubelet[2558]: I0910 23:26:12.695861 2558 apiserver.go:52] "Watching apiserver"
Sep 10 23:26:12.711823 kubelet[2558]: I0910 23:26:12.711785 2558 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 10 23:26:12.740387 kubelet[2558]: I0910 23:26:12.740354 2558 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 10 23:26:12.740724 kubelet[2558]: I0910 23:26:12.740703 2558 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 10 23:26:12.751637 kubelet[2558]: E0910 23:26:12.750662 2558 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 10 23:26:12.753559 kubelet[2558]: E0910 23:26:12.751809 2558 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 10 23:26:12.777438 kubelet[2558]: I0910 23:26:12.777351 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.777339114 podStartE2EDuration="1.777339114s" podCreationTimestamp="2025-09-10 23:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:26:12.777099631 +0000 UTC m=+1.134745466" watchObservedRunningTime="2025-09-10 23:26:12.777339114 +0000 UTC m=+1.134984949"
Sep 10 23:26:12.777545 kubelet[2558]: I0910 23:26:12.777477 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7774718090000001 podStartE2EDuration="1.777471809s" podCreationTimestamp="2025-09-10 23:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:26:12.767341868 +0000 UTC m=+1.124987703" watchObservedRunningTime="2025-09-10 23:26:12.777471809 +0000 UTC m=+1.135117644"
Sep 10 23:26:12.814288 kubelet[2558]: I0910 23:26:12.814115 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.814099193 podStartE2EDuration="1.814099193s" podCreationTimestamp="2025-09-10 23:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:26:12.792858308 +0000 UTC m=+1.150504143" watchObservedRunningTime="2025-09-10 23:26:12.814099193 +0000 UTC m=+1.171744988"
Sep 10 23:26:14.028363 sudo[1634]: pam_unix(sudo:session): session closed for user root
Sep 10 23:26:14.029586 sshd[1633]: Connection closed by 10.0.0.1 port 36084
Sep 10 23:26:14.030126 sshd-session[1629]: pam_unix(sshd:session): session closed for user core
Sep 10 23:26:14.033519 systemd[1]: sshd@6-10.0.0.56:22-10.0.0.1:36084.service: Deactivated successfully.
Sep 10 23:26:14.036318 systemd[1]: session-7.scope: Deactivated successfully.
Sep 10 23:26:14.036556 systemd[1]: session-7.scope: Consumed 7.926s CPU time, 257.5M memory peak.
Sep 10 23:26:14.037421 systemd-logind[1435]: Session 7 logged out. Waiting for processes to exit.
Sep 10 23:26:14.038634 systemd-logind[1435]: Removed session 7.
Sep 10 23:26:15.562176 kubelet[2558]: I0910 23:26:15.562140 2558 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 10 23:26:15.568541 containerd[1447]: time="2025-09-10T23:26:15.568385968Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 10 23:26:15.568828 kubelet[2558]: I0910 23:26:15.568669 2558 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 10 23:26:16.182148 systemd[1]: Created slice kubepods-besteffort-podf4ffea87_b1ef_4cbf_9b24_0200c098537b.slice - libcontainer container kubepods-besteffort-podf4ffea87_b1ef_4cbf_9b24_0200c098537b.slice.
Sep 10 23:26:16.215114 systemd[1]: Created slice kubepods-burstable-podffb4bf92_b9d9_4249_8d46_47c84c3389c4.slice - libcontainer container kubepods-burstable-podffb4bf92_b9d9_4249_8d46_47c84c3389c4.slice.
Sep 10 23:26:16.239966 kubelet[2558]: I0910 23:26:16.239597 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-clustermesh-secrets\") pod \"cilium-w8pc5\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") " pod="kube-system/cilium-w8pc5"
Sep 10 23:26:16.239966 kubelet[2558]: I0910 23:26:16.239657 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f4ffea87-b1ef-4cbf-9b24-0200c098537b-kube-proxy\") pod \"kube-proxy-ts8hc\" (UID: \"f4ffea87-b1ef-4cbf-9b24-0200c098537b\") " pod="kube-system/kube-proxy-ts8hc"
Sep 10 23:26:16.239966 kubelet[2558]: I0910 23:26:16.239675 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-hubble-tls\") pod \"cilium-w8pc5\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") " pod="kube-system/cilium-w8pc5"
Sep 10 23:26:16.239966 kubelet[2558]: I0910 23:26:16.239692 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn8kx\" (UniqueName: \"kubernetes.io/projected/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-kube-api-access-pn8kx\") pod \"cilium-w8pc5\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") " pod="kube-system/cilium-w8pc5"
Sep 10 23:26:16.239966 kubelet[2558]: I0910 23:26:16.239713 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4ffea87-b1ef-4cbf-9b24-0200c098537b-lib-modules\") pod \"kube-proxy-ts8hc\" (UID: \"f4ffea87-b1ef-4cbf-9b24-0200c098537b\") " pod="kube-system/kube-proxy-ts8hc"
Sep 10 23:26:16.239966 kubelet[2558]: I0910 23:26:16.239728 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-hostproc\") pod \"cilium-w8pc5\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") " pod="kube-system/cilium-w8pc5"
Sep 10 23:26:16.240229 kubelet[2558]: I0910 23:26:16.239744 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-xtables-lock\") pod \"cilium-w8pc5\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") " pod="kube-system/cilium-w8pc5"
Sep 10 23:26:16.240229 kubelet[2558]: I0910 23:26:16.239768 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4ffea87-b1ef-4cbf-9b24-0200c098537b-xtables-lock\") pod \"kube-proxy-ts8hc\" (UID: \"f4ffea87-b1ef-4cbf-9b24-0200c098537b\") " pod="kube-system/kube-proxy-ts8hc"
Sep 10 23:26:16.240229 kubelet[2558]: I0910 23:26:16.239802 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp487\" (UniqueName: \"kubernetes.io/projected/f4ffea87-b1ef-4cbf-9b24-0200c098537b-kube-api-access-mp487\") pod \"kube-proxy-ts8hc\" (UID: \"f4ffea87-b1ef-4cbf-9b24-0200c098537b\") " pod="kube-system/kube-proxy-ts8hc"
Sep 10 23:26:16.240229 kubelet[2558]: I0910 23:26:16.239824 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-bpf-maps\") pod \"cilium-w8pc5\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") " pod="kube-system/cilium-w8pc5"
Sep 10 23:26:16.240229 kubelet[2558]: I0910 23:26:16.239841 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-cni-path\") pod \"cilium-w8pc5\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") " pod="kube-system/cilium-w8pc5"
Sep 10 23:26:16.240229 kubelet[2558]: I0910 23:26:16.239857 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-host-proc-sys-net\") pod \"cilium-w8pc5\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") " pod="kube-system/cilium-w8pc5"
Sep 10 23:26:16.240362 kubelet[2558]: I0910 23:26:16.239873 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-lib-modules\") pod \"cilium-w8pc5\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") " pod="kube-system/cilium-w8pc5"
Sep 10 23:26:16.240362 kubelet[2558]: I0910 23:26:16.239888 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-host-proc-sys-kernel\") pod \"cilium-w8pc5\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") " pod="kube-system/cilium-w8pc5"
Sep 10 23:26:16.240362 kubelet[2558]: I0910 23:26:16.239907 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-cilium-run\") pod \"cilium-w8pc5\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") " pod="kube-system/cilium-w8pc5"
Sep 10 23:26:16.240362 kubelet[2558]: I0910 23:26:16.239922 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-etc-cni-netd\") pod \"cilium-w8pc5\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") " pod="kube-system/cilium-w8pc5"
Sep 10 23:26:16.240458 kubelet[2558]: I0910 23:26:16.240355 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-cilium-config-path\") pod \"cilium-w8pc5\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") " pod="kube-system/cilium-w8pc5"
Sep 10 23:26:16.240458 kubelet[2558]: I0910 23:26:16.240408 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-cilium-cgroup\") pod \"cilium-w8pc5\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") " pod="kube-system/cilium-w8pc5"
Sep 10 23:26:16.515056 containerd[1447]: time="2025-09-10T23:26:16.514676377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ts8hc,Uid:f4ffea87-b1ef-4cbf-9b24-0200c098537b,Namespace:kube-system,Attempt:0,}"
Sep 10 23:26:16.519377 containerd[1447]: time="2025-09-10T23:26:16.519332169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w8pc5,Uid:ffb4bf92-b9d9-4249-8d46-47c84c3389c4,Namespace:kube-system,Attempt:0,}"
Sep 10 23:26:16.547562 containerd[1447]: time="2025-09-10T23:26:16.547403037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 23:26:16.547562 containerd[1447]: time="2025-09-10T23:26:16.547484234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 23:26:16.547562 containerd[1447]: time="2025-09-10T23:26:16.547495194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 23:26:16.547778 containerd[1447]: time="2025-09-10T23:26:16.547652870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 23:26:16.548912 containerd[1447]: time="2025-09-10T23:26:16.548293212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 23:26:16.548912 containerd[1447]: time="2025-09-10T23:26:16.548337171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 23:26:16.548912 containerd[1447]: time="2025-09-10T23:26:16.548347811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 23:26:16.548912 containerd[1447]: time="2025-09-10T23:26:16.548428208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 23:26:16.577778 systemd[1]: Started cri-containerd-779c93d8bd9cd37089aa2f7d7bdf0d05916916e94ece03efbfebf464b1335948.scope - libcontainer container 779c93d8bd9cd37089aa2f7d7bdf0d05916916e94ece03efbfebf464b1335948.
Sep 10 23:26:16.579226 systemd[1]: Started cri-containerd-a151abb3c994fba810aed106f479ac9940cea5e66d4b031af7fea9ee9790f9df.scope - libcontainer container a151abb3c994fba810aed106f479ac9940cea5e66d4b031af7fea9ee9790f9df.
Sep 10 23:26:16.607321 kubelet[2558]: I0910 23:26:16.606933 2558 status_manager.go:890] "Failed to get status for pod" podUID="0cf88d9a-f1e3-497a-925f-f5fa75f070b0" pod="kube-system/cilium-operator-6c4d7847fc-wlnsv" err="pods \"cilium-operator-6c4d7847fc-wlnsv\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object"
Sep 10 23:26:16.623674 systemd[1]: Created slice kubepods-besteffort-pod0cf88d9a_f1e3_497a_925f_f5fa75f070b0.slice - libcontainer container kubepods-besteffort-pod0cf88d9a_f1e3_497a_925f_f5fa75f070b0.slice.
Sep 10 23:26:16.643902 kubelet[2558]: I0910 23:26:16.643867 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r78d\" (UniqueName: \"kubernetes.io/projected/0cf88d9a-f1e3-497a-925f-f5fa75f070b0-kube-api-access-9r78d\") pod \"cilium-operator-6c4d7847fc-wlnsv\" (UID: \"0cf88d9a-f1e3-497a-925f-f5fa75f070b0\") " pod="kube-system/cilium-operator-6c4d7847fc-wlnsv"
Sep 10 23:26:16.643902 kubelet[2558]: I0910 23:26:16.643909 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0cf88d9a-f1e3-497a-925f-f5fa75f070b0-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-wlnsv\" (UID: \"0cf88d9a-f1e3-497a-925f-f5fa75f070b0\") " pod="kube-system/cilium-operator-6c4d7847fc-wlnsv"
Sep 10 23:26:16.647582 containerd[1447]: time="2025-09-10T23:26:16.647175572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w8pc5,Uid:ffb4bf92-b9d9-4249-8d46-47c84c3389c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"779c93d8bd9cd37089aa2f7d7bdf0d05916916e94ece03efbfebf464b1335948\""
Sep 10 23:26:16.650715 containerd[1447]: time="2025-09-10T23:26:16.650642596Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 10 23:26:16.656043 containerd[1447]: time="2025-09-10T23:26:16.655985049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ts8hc,Uid:f4ffea87-b1ef-4cbf-9b24-0200c098537b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a151abb3c994fba810aed106f479ac9940cea5e66d4b031af7fea9ee9790f9df\""
Sep 10 23:26:16.660362 containerd[1447]: time="2025-09-10T23:26:16.660317770Z" level=info msg="CreateContainer within sandbox \"a151abb3c994fba810aed106f479ac9940cea5e66d4b031af7fea9ee9790f9df\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 10 23:26:16.710684 containerd[1447]: time="2025-09-10T23:26:16.710635786Z" level=info msg="CreateContainer within sandbox \"a151abb3c994fba810aed106f479ac9940cea5e66d4b031af7fea9ee9790f9df\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"36c1e21dd60af841dee2af0786bef2b10e616223aa14078a624a9fe0e7e752e3\""
Sep 10 23:26:16.711791 containerd[1447]: time="2025-09-10T23:26:16.711757195Z" level=info msg="StartContainer for \"36c1e21dd60af841dee2af0786bef2b10e616223aa14078a624a9fe0e7e752e3\""
Sep 10 23:26:16.738714 systemd[1]: Started cri-containerd-36c1e21dd60af841dee2af0786bef2b10e616223aa14078a624a9fe0e7e752e3.scope - libcontainer container 36c1e21dd60af841dee2af0786bef2b10e616223aa14078a624a9fe0e7e752e3.
Sep 10 23:26:16.768184 containerd[1447]: time="2025-09-10T23:26:16.767994928Z" level=info msg="StartContainer for \"36c1e21dd60af841dee2af0786bef2b10e616223aa14078a624a9fe0e7e752e3\" returns successfully" Sep 10 23:26:16.932269 containerd[1447]: time="2025-09-10T23:26:16.932218010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wlnsv,Uid:0cf88d9a-f1e3-497a-925f-f5fa75f070b0,Namespace:kube-system,Attempt:0,}" Sep 10 23:26:16.963178 containerd[1447]: time="2025-09-10T23:26:16.963079401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 23:26:16.963178 containerd[1447]: time="2025-09-10T23:26:16.963141279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 23:26:16.963178 containerd[1447]: time="2025-09-10T23:26:16.963161599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 23:26:16.963371 containerd[1447]: time="2025-09-10T23:26:16.963251676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 23:26:16.979702 systemd[1]: Started cri-containerd-8e07fb59cee7f449e57356c7d22fb46bdc8b96e9e09df5b77555d25ab92f8747.scope - libcontainer container 8e07fb59cee7f449e57356c7d22fb46bdc8b96e9e09df5b77555d25ab92f8747. 
Sep 10 23:26:17.005560 containerd[1447]: time="2025-09-10T23:26:17.005512760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wlnsv,Uid:0cf88d9a-f1e3-497a-925f-f5fa75f070b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e07fb59cee7f449e57356c7d22fb46bdc8b96e9e09df5b77555d25ab92f8747\"" Sep 10 23:26:17.778240 kubelet[2558]: I0910 23:26:17.778178 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ts8hc" podStartSLOduration=1.778159147 podStartE2EDuration="1.778159147s" podCreationTimestamp="2025-09-10 23:26:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:26:17.777832475 +0000 UTC m=+6.135478310" watchObservedRunningTime="2025-09-10 23:26:17.778159147 +0000 UTC m=+6.135804982" Sep 10 23:26:26.105083 update_engine[1437]: I20250910 23:26:26.104943 1437 update_attempter.cc:509] Updating boot flags... Sep 10 23:26:26.136649 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2939) Sep 10 23:26:26.199803 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2941) Sep 10 23:26:32.220789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2046283866.mount: Deactivated successfully. 
Sep 10 23:26:33.542117 containerd[1447]: time="2025-09-10T23:26:33.542034933Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 10 23:26:33.547242 containerd[1447]: time="2025-09-10T23:26:33.547205354Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 16.896521638s" Sep 10 23:26:33.547242 containerd[1447]: time="2025-09-10T23:26:33.547246113Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 10 23:26:33.550876 containerd[1447]: time="2025-09-10T23:26:33.550845152Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 10 23:26:33.554203 containerd[1447]: time="2025-09-10T23:26:33.553598561Z" level=info msg="CreateContainer within sandbox \"779c93d8bd9cd37089aa2f7d7bdf0d05916916e94ece03efbfebf464b1335948\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 10 23:26:33.567407 containerd[1447]: time="2025-09-10T23:26:33.567333203Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:26:33.568283 containerd[1447]: time="2025-09-10T23:26:33.568249953Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:26:33.576512 containerd[1447]: time="2025-09-10T23:26:33.576455819Z" level=info msg="CreateContainer within sandbox \"779c93d8bd9cd37089aa2f7d7bdf0d05916916e94ece03efbfebf464b1335948\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5f786e2b3d6796f06cfdb3b66aafcbc28d4691509bdf1a5145ca3ea03fcf3665\"" Sep 10 23:26:33.577199 containerd[1447]: time="2025-09-10T23:26:33.576979893Z" level=info msg="StartContainer for \"5f786e2b3d6796f06cfdb3b66aafcbc28d4691509bdf1a5145ca3ea03fcf3665\"" Sep 10 23:26:33.603766 systemd[1]: Started cri-containerd-5f786e2b3d6796f06cfdb3b66aafcbc28d4691509bdf1a5145ca3ea03fcf3665.scope - libcontainer container 5f786e2b3d6796f06cfdb3b66aafcbc28d4691509bdf1a5145ca3ea03fcf3665. Sep 10 23:26:33.630959 containerd[1447]: time="2025-09-10T23:26:33.630785476Z" level=info msg="StartContainer for \"5f786e2b3d6796f06cfdb3b66aafcbc28d4691509bdf1a5145ca3ea03fcf3665\" returns successfully" Sep 10 23:26:33.639455 systemd[1]: cri-containerd-5f786e2b3d6796f06cfdb3b66aafcbc28d4691509bdf1a5145ca3ea03fcf3665.scope: Deactivated successfully. Sep 10 23:26:33.866829 containerd[1447]: time="2025-09-10T23:26:33.847726388Z" level=info msg="shim disconnected" id=5f786e2b3d6796f06cfdb3b66aafcbc28d4691509bdf1a5145ca3ea03fcf3665 namespace=k8s.io Sep 10 23:26:33.866829 containerd[1447]: time="2025-09-10T23:26:33.866762690Z" level=warning msg="cleaning up after shim disconnected" id=5f786e2b3d6796f06cfdb3b66aafcbc28d4691509bdf1a5145ca3ea03fcf3665 namespace=k8s.io Sep 10 23:26:33.866829 containerd[1447]: time="2025-09-10T23:26:33.866779530Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 23:26:34.572932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f786e2b3d6796f06cfdb3b66aafcbc28d4691509bdf1a5145ca3ea03fcf3665-rootfs.mount: Deactivated successfully.
Sep 10 23:26:34.802447 containerd[1447]: time="2025-09-10T23:26:34.802181047Z" level=info msg="CreateContainer within sandbox \"779c93d8bd9cd37089aa2f7d7bdf0d05916916e94ece03efbfebf464b1335948\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 10 23:26:34.816678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount428613784.mount: Deactivated successfully. Sep 10 23:26:34.832015 containerd[1447]: time="2025-09-10T23:26:34.831839842Z" level=info msg="CreateContainer within sandbox \"779c93d8bd9cd37089aa2f7d7bdf0d05916916e94ece03efbfebf464b1335948\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2812baf25d29933460ccf26a85a95313d51e50930df6966743060112511223ee\"" Sep 10 23:26:34.836907 containerd[1447]: time="2025-09-10T23:26:34.836648910Z" level=info msg="StartContainer for \"2812baf25d29933460ccf26a85a95313d51e50930df6966743060112511223ee\"" Sep 10 23:26:34.863855 systemd[1]: Started cri-containerd-2812baf25d29933460ccf26a85a95313d51e50930df6966743060112511223ee.scope - libcontainer container 2812baf25d29933460ccf26a85a95313d51e50930df6966743060112511223ee. Sep 10 23:26:34.890080 containerd[1447]: time="2025-09-10T23:26:34.890020204Z" level=info msg="StartContainer for \"2812baf25d29933460ccf26a85a95313d51e50930df6966743060112511223ee\" returns successfully" Sep 10 23:26:34.901228 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 10 23:26:34.901440 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 10 23:26:34.902034 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 10 23:26:34.910916 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 10 23:26:34.911124 systemd[1]: cri-containerd-2812baf25d29933460ccf26a85a95313d51e50930df6966743060112511223ee.scope: Deactivated successfully. Sep 10 23:26:34.928608 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 10 23:26:34.957336 containerd[1447]: time="2025-09-10T23:26:34.957283347Z" level=info msg="shim disconnected" id=2812baf25d29933460ccf26a85a95313d51e50930df6966743060112511223ee namespace=k8s.io Sep 10 23:26:34.957336 containerd[1447]: time="2025-09-10T23:26:34.957331427Z" level=warning msg="cleaning up after shim disconnected" id=2812baf25d29933460ccf26a85a95313d51e50930df6966743060112511223ee namespace=k8s.io Sep 10 23:26:34.957336 containerd[1447]: time="2025-09-10T23:26:34.957339106Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 23:26:35.559544 containerd[1447]: time="2025-09-10T23:26:35.559486847Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:26:35.559990 containerd[1447]: time="2025-09-10T23:26:35.559942482Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 10 23:26:35.563848 containerd[1447]: time="2025-09-10T23:26:35.563807722Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:26:35.565244 containerd[1447]: time="2025-09-10T23:26:35.565209827Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.014330555s" Sep 10 23:26:35.565288 containerd[1447]: time="2025-09-10T23:26:35.565254386Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 10 23:26:35.567400 containerd[1447]: time="2025-09-10T23:26:35.567368084Z" level=info msg="CreateContainer within sandbox \"8e07fb59cee7f449e57356c7d22fb46bdc8b96e9e09df5b77555d25ab92f8747\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 10 23:26:35.582315 containerd[1447]: time="2025-09-10T23:26:35.582272928Z" level=info msg="CreateContainer within sandbox \"8e07fb59cee7f449e57356c7d22fb46bdc8b96e9e09df5b77555d25ab92f8747\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fbb138fd7afbb184c419f22f93dd0f97f68af54bf5de9b5a8521c8dfadb18a92\"" Sep 10 23:26:35.583680 containerd[1447]: time="2025-09-10T23:26:35.583228278Z" level=info msg="StartContainer for \"fbb138fd7afbb184c419f22f93dd0f97f68af54bf5de9b5a8521c8dfadb18a92\"" Sep 10 23:26:35.612721 systemd[1]: Started cri-containerd-fbb138fd7afbb184c419f22f93dd0f97f68af54bf5de9b5a8521c8dfadb18a92.scope - libcontainer container fbb138fd7afbb184c419f22f93dd0f97f68af54bf5de9b5a8521c8dfadb18a92.
Sep 10 23:26:35.632860 containerd[1447]: time="2025-09-10T23:26:35.632804478Z" level=info msg="StartContainer for \"fbb138fd7afbb184c419f22f93dd0f97f68af54bf5de9b5a8521c8dfadb18a92\" returns successfully" Sep 10 23:26:35.805835 containerd[1447]: time="2025-09-10T23:26:35.805720864Z" level=info msg="CreateContainer within sandbox \"779c93d8bd9cd37089aa2f7d7bdf0d05916916e94ece03efbfebf464b1335948\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 10 23:26:35.816440 kubelet[2558]: I0910 23:26:35.816302 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-wlnsv" podStartSLOduration=1.2566535079999999 podStartE2EDuration="19.816283513s" podCreationTimestamp="2025-09-10 23:26:16 +0000 UTC" firstStartedPulling="2025-09-10 23:26:17.006569252 +0000 UTC m=+5.364215087" lastFinishedPulling="2025-09-10 23:26:35.566199257 +0000 UTC m=+23.923845092" observedRunningTime="2025-09-10 23:26:35.815727239 +0000 UTC m=+24.173373354" watchObservedRunningTime="2025-09-10 23:26:35.816283513 +0000 UTC m=+24.173929348" Sep 10 23:26:35.831679 containerd[1447]: time="2025-09-10T23:26:35.830962359Z" level=info msg="CreateContainer within sandbox \"779c93d8bd9cd37089aa2f7d7bdf0d05916916e94ece03efbfebf464b1335948\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a13a7b73a2c093f572808bc5df8e515aa338ef9048d4764001310586f61376e5\"" Sep 10 23:26:35.831679 containerd[1447]: time="2025-09-10T23:26:35.831518633Z" level=info msg="StartContainer for \"a13a7b73a2c093f572808bc5df8e515aa338ef9048d4764001310586f61376e5\"" Sep 10 23:26:35.862713 systemd[1]: Started cri-containerd-a13a7b73a2c093f572808bc5df8e515aa338ef9048d4764001310586f61376e5.scope - libcontainer container a13a7b73a2c093f572808bc5df8e515aa338ef9048d4764001310586f61376e5. 
Sep 10 23:26:35.891889 containerd[1447]: time="2025-09-10T23:26:35.891492684Z" level=info msg="StartContainer for \"a13a7b73a2c093f572808bc5df8e515aa338ef9048d4764001310586f61376e5\" returns successfully" Sep 10 23:26:35.892992 systemd[1]: cri-containerd-a13a7b73a2c093f572808bc5df8e515aa338ef9048d4764001310586f61376e5.scope: Deactivated successfully. Sep 10 23:26:35.983554 containerd[1447]: time="2025-09-10T23:26:35.983480638Z" level=info msg="shim disconnected" id=a13a7b73a2c093f572808bc5df8e515aa338ef9048d4764001310586f61376e5 namespace=k8s.io Sep 10 23:26:35.983554 containerd[1447]: time="2025-09-10T23:26:35.983545958Z" level=warning msg="cleaning up after shim disconnected" id=a13a7b73a2c093f572808bc5df8e515aa338ef9048d4764001310586f61376e5 namespace=k8s.io Sep 10 23:26:35.983554 containerd[1447]: time="2025-09-10T23:26:35.983554198Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 23:26:36.811024 containerd[1447]: time="2025-09-10T23:26:36.810969314Z" level=info msg="CreateContainer within sandbox \"779c93d8bd9cd37089aa2f7d7bdf0d05916916e94ece03efbfebf464b1335948\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 10 23:26:36.823840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3434132758.mount: Deactivated successfully. 
Sep 10 23:26:36.827623 containerd[1447]: time="2025-09-10T23:26:36.827483308Z" level=info msg="CreateContainer within sandbox \"779c93d8bd9cd37089aa2f7d7bdf0d05916916e94ece03efbfebf464b1335948\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8e1fea60f65290679a1c6c84794e400e781f04b2f6d81ccbe89cd736cde640b3\"" Sep 10 23:26:36.828813 containerd[1447]: time="2025-09-10T23:26:36.828611377Z" level=info msg="StartContainer for \"8e1fea60f65290679a1c6c84794e400e781f04b2f6d81ccbe89cd736cde640b3\"" Sep 10 23:26:36.854748 systemd[1]: Started cri-containerd-8e1fea60f65290679a1c6c84794e400e781f04b2f6d81ccbe89cd736cde640b3.scope - libcontainer container 8e1fea60f65290679a1c6c84794e400e781f04b2f6d81ccbe89cd736cde640b3. Sep 10 23:26:36.876458 systemd[1]: cri-containerd-8e1fea60f65290679a1c6c84794e400e781f04b2f6d81ccbe89cd736cde640b3.scope: Deactivated successfully. Sep 10 23:26:36.880655 containerd[1447]: time="2025-09-10T23:26:36.880506575Z" level=info msg="StartContainer for \"8e1fea60f65290679a1c6c84794e400e781f04b2f6d81ccbe89cd736cde640b3\" returns successfully" Sep 10 23:26:36.928612 containerd[1447]: time="2025-09-10T23:26:36.928556852Z" level=info msg="shim disconnected" id=8e1fea60f65290679a1c6c84794e400e781f04b2f6d81ccbe89cd736cde640b3 namespace=k8s.io Sep 10 23:26:36.928612 containerd[1447]: time="2025-09-10T23:26:36.928608972Z" level=warning msg="cleaning up after shim disconnected" id=8e1fea60f65290679a1c6c84794e400e781f04b2f6d81ccbe89cd736cde640b3 namespace=k8s.io Sep 10 23:26:36.928612 containerd[1447]: time="2025-09-10T23:26:36.928616812Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 23:26:37.572376 systemd[1]: run-containerd-runc-k8s.io-8e1fea60f65290679a1c6c84794e400e781f04b2f6d81ccbe89cd736cde640b3-runc.ZhzNPT.mount: Deactivated successfully. 
Sep 10 23:26:37.572478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e1fea60f65290679a1c6c84794e400e781f04b2f6d81ccbe89cd736cde640b3-rootfs.mount: Deactivated successfully. Sep 10 23:26:37.819472 containerd[1447]: time="2025-09-10T23:26:37.819233514Z" level=info msg="CreateContainer within sandbox \"779c93d8bd9cd37089aa2f7d7bdf0d05916916e94ece03efbfebf464b1335948\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 10 23:26:37.840149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3770222744.mount: Deactivated successfully. Sep 10 23:26:37.843185 containerd[1447]: time="2025-09-10T23:26:37.843116284Z" level=info msg="CreateContainer within sandbox \"779c93d8bd9cd37089aa2f7d7bdf0d05916916e94ece03efbfebf464b1335948\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5f528f51e718b22be53ca78d09b77a6abdaa1ca565aefa62cf55904c692dcf03\"" Sep 10 23:26:37.843801 containerd[1447]: time="2025-09-10T23:26:37.843767438Z" level=info msg="StartContainer for \"5f528f51e718b22be53ca78d09b77a6abdaa1ca565aefa62cf55904c692dcf03\"" Sep 10 23:26:37.903905 systemd[1]: Started cri-containerd-5f528f51e718b22be53ca78d09b77a6abdaa1ca565aefa62cf55904c692dcf03.scope - libcontainer container 5f528f51e718b22be53ca78d09b77a6abdaa1ca565aefa62cf55904c692dcf03. Sep 10 23:26:37.943733 containerd[1447]: time="2025-09-10T23:26:37.943682235Z" level=info msg="StartContainer for \"5f528f51e718b22be53ca78d09b77a6abdaa1ca565aefa62cf55904c692dcf03\" returns successfully" Sep 10 23:26:38.069519 kubelet[2558]: I0910 23:26:38.068762 2558 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 10 23:26:38.105582 systemd[1]: Created slice kubepods-burstable-podb84d0a58_40c4_472f_bcae_cd9703ee3be6.slice - libcontainer container kubepods-burstable-podb84d0a58_40c4_472f_bcae_cd9703ee3be6.slice. 
Sep 10 23:26:38.117667 systemd[1]: Created slice kubepods-burstable-pod5a18cacb_2921_4eb9_bf30_089f4b0772ee.slice - libcontainer container kubepods-burstable-pod5a18cacb_2921_4eb9_bf30_089f4b0772ee.slice. Sep 10 23:26:38.194643 kubelet[2558]: I0910 23:26:38.194441 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k59sj\" (UniqueName: \"kubernetes.io/projected/b84d0a58-40c4-472f-bcae-cd9703ee3be6-kube-api-access-k59sj\") pod \"coredns-668d6bf9bc-jd4fw\" (UID: \"b84d0a58-40c4-472f-bcae-cd9703ee3be6\") " pod="kube-system/coredns-668d6bf9bc-jd4fw" Sep 10 23:26:38.194643 kubelet[2558]: I0910 23:26:38.194514 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b84d0a58-40c4-472f-bcae-cd9703ee3be6-config-volume\") pod \"coredns-668d6bf9bc-jd4fw\" (UID: \"b84d0a58-40c4-472f-bcae-cd9703ee3be6\") " pod="kube-system/coredns-668d6bf9bc-jd4fw" Sep 10 23:26:38.194643 kubelet[2558]: I0910 23:26:38.194549 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94xcf\" (UniqueName: \"kubernetes.io/projected/5a18cacb-2921-4eb9-bf30-089f4b0772ee-kube-api-access-94xcf\") pod \"coredns-668d6bf9bc-bv27d\" (UID: \"5a18cacb-2921-4eb9-bf30-089f4b0772ee\") " pod="kube-system/coredns-668d6bf9bc-bv27d" Sep 10 23:26:38.194643 kubelet[2558]: I0910 23:26:38.194567 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a18cacb-2921-4eb9-bf30-089f4b0772ee-config-volume\") pod \"coredns-668d6bf9bc-bv27d\" (UID: \"5a18cacb-2921-4eb9-bf30-089f4b0772ee\") " pod="kube-system/coredns-668d6bf9bc-bv27d" Sep 10 23:26:38.418458 systemd[1]: Started sshd@7-10.0.0.56:22-10.0.0.1:48728.service - OpenSSH per-connection server daemon (10.0.0.1:48728). 
Sep 10 23:26:38.419854 containerd[1447]: time="2025-09-10T23:26:38.419813409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jd4fw,Uid:b84d0a58-40c4-472f-bcae-cd9703ee3be6,Namespace:kube-system,Attempt:0,}" Sep 10 23:26:38.422538 containerd[1447]: time="2025-09-10T23:26:38.421624152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bv27d,Uid:5a18cacb-2921-4eb9-bf30-089f4b0772ee,Namespace:kube-system,Attempt:0,}" Sep 10 23:26:38.467000 sshd[3402]: Accepted publickey for core from 10.0.0.1 port 48728 ssh2: RSA SHA256:vI2v+Kj925DhJN+VWmdLDSx5Cqw/fvuZ8IHXlsQiGm4 Sep 10 23:26:38.468509 sshd-session[3402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:26:38.472949 systemd-logind[1435]: New session 8 of user core. Sep 10 23:26:38.484736 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 10 23:26:38.576095 systemd[1]: run-containerd-runc-k8s.io-5f528f51e718b22be53ca78d09b77a6abdaa1ca565aefa62cf55904c692dcf03-runc.UgnNM7.mount: Deactivated successfully. Sep 10 23:26:38.621068 sshd[3405]: Connection closed by 10.0.0.1 port 48728 Sep 10 23:26:38.621423 sshd-session[3402]: pam_unix(sshd:session): session closed for user core Sep 10 23:26:38.628812 systemd[1]: sshd@7-10.0.0.56:22-10.0.0.1:48728.service: Deactivated successfully. Sep 10 23:26:38.630729 systemd[1]: session-8.scope: Deactivated successfully. Sep 10 23:26:38.633040 systemd-logind[1435]: Session 8 logged out. Waiting for processes to exit. Sep 10 23:26:38.634350 systemd-logind[1435]: Removed session 8. 
Sep 10 23:26:38.841990 kubelet[2558]: I0910 23:26:38.841902 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-w8pc5" podStartSLOduration=5.940019799 podStartE2EDuration="22.841860306s" podCreationTimestamp="2025-09-10 23:26:16 +0000 UTC" firstStartedPulling="2025-09-10 23:26:16.648751808 +0000 UTC m=+5.006397603" lastFinishedPulling="2025-09-10 23:26:33.550592315 +0000 UTC m=+21.908238110" observedRunningTime="2025-09-10 23:26:38.84138947 +0000 UTC m=+27.199035305" watchObservedRunningTime="2025-09-10 23:26:38.841860306 +0000 UTC m=+27.199506141" Sep 10 23:26:40.030886 systemd-networkd[1368]: cilium_host: Link UP Sep 10 23:26:40.031077 systemd-networkd[1368]: cilium_net: Link UP Sep 10 23:26:40.031225 systemd-networkd[1368]: cilium_net: Gained carrier Sep 10 23:26:40.031351 systemd-networkd[1368]: cilium_host: Gained carrier Sep 10 23:26:40.116417 systemd-networkd[1368]: cilium_vxlan: Link UP Sep 10 23:26:40.116424 systemd-networkd[1368]: cilium_vxlan: Gained carrier Sep 10 23:26:40.372673 kernel: NET: Registered PF_ALG protocol family Sep 10 23:26:40.590692 systemd-networkd[1368]: cilium_host: Gained IPv6LL Sep 10 23:26:40.967136 systemd-networkd[1368]: lxc_health: Link UP Sep 10 23:26:40.978391 systemd-networkd[1368]: lxc_health: Gained carrier Sep 10 23:26:41.038664 systemd-networkd[1368]: cilium_net: Gained IPv6LL Sep 10 23:26:41.317558 kernel: eth0: renamed from tmp8b6c4 Sep 10 23:26:41.315878 systemd-networkd[1368]: lxc3aee901bd112: Link UP Sep 10 23:26:41.324109 systemd-networkd[1368]: lxccd7ccf4e67e1: Link UP Sep 10 23:26:41.342593 kernel: eth0: renamed from tmp94a5a Sep 10 23:26:41.347818 systemd-networkd[1368]: tmp94a5a: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 10 23:26:41.347911 systemd-networkd[1368]: tmp94a5a: Cannot enable IPv6, ignoring: No such file or directory Sep 10 23:26:41.347941 systemd-networkd[1368]: tmp94a5a: Cannot configure IPv6 privacy extensions for interface, ignoring: No such file or directory Sep 10 23:26:41.347952 systemd-networkd[1368]: tmp94a5a: Cannot disable kernel IPv6 accept_ra for interface, ignoring: No such file or directory Sep 10 23:26:41.347960 systemd-networkd[1368]: tmp94a5a: Cannot set IPv6 proxy NDP, ignoring: No such file or directory Sep 10 23:26:41.347974 systemd-networkd[1368]: tmp94a5a: Cannot enable promote_secondaries for interface, ignoring: No such file or directory Sep 10 23:26:41.350756 systemd-networkd[1368]: lxc3aee901bd112: Gained carrier Sep 10 23:26:41.352017 systemd-networkd[1368]: lxccd7ccf4e67e1: Gained carrier Sep 10 23:26:41.806690 systemd-networkd[1368]: cilium_vxlan: Gained IPv6LL Sep 10 23:26:42.446667 systemd-networkd[1368]: lxc3aee901bd112: Gained IPv6LL Sep 10 23:26:42.894684 systemd-networkd[1368]: lxc_health: Gained IPv6LL Sep 10 23:26:43.343131 systemd-networkd[1368]: lxccd7ccf4e67e1: Gained IPv6LL Sep 10 23:26:43.636067 systemd[1]: Started sshd@8-10.0.0.56:22-10.0.0.1:55000.service - OpenSSH per-connection server daemon (10.0.0.1:55000). Sep 10 23:26:43.686281 sshd[3822]: Accepted publickey for core from 10.0.0.1 port 55000 ssh2: RSA SHA256:vI2v+Kj925DhJN+VWmdLDSx5Cqw/fvuZ8IHXlsQiGm4 Sep 10 23:26:43.687231 sshd-session[3822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:26:43.693480 systemd-logind[1435]: New session 9 of user core. Sep 10 23:26:43.696877 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 10 23:26:43.825826 sshd[3824]: Connection closed by 10.0.0.1 port 55000 Sep 10 23:26:43.826222 sshd-session[3822]: pam_unix(sshd:session): session closed for user core Sep 10 23:26:43.830596 systemd[1]: sshd@8-10.0.0.56:22-10.0.0.1:55000.service: Deactivated successfully. 
Sep 10 23:26:43.832645 systemd[1]: session-9.scope: Deactivated successfully. Sep 10 23:26:43.834870 systemd-logind[1435]: Session 9 logged out. Waiting for processes to exit. Sep 10 23:26:43.836068 systemd-logind[1435]: Removed session 9. Sep 10 23:26:45.139576 containerd[1447]: time="2025-09-10T23:26:45.139387772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 23:26:45.139576 containerd[1447]: time="2025-09-10T23:26:45.139451492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 23:26:45.139576 containerd[1447]: time="2025-09-10T23:26:45.139465932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 23:26:45.140439 containerd[1447]: time="2025-09-10T23:26:45.140252326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 23:26:45.140439 containerd[1447]: time="2025-09-10T23:26:45.140299846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 23:26:45.140439 containerd[1447]: time="2025-09-10T23:26:45.140321286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 23:26:45.140439 containerd[1447]: time="2025-09-10T23:26:45.140394565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 23:26:45.141065 containerd[1447]: time="2025-09-10T23:26:45.141009481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 23:26:45.156896 systemd[1]: run-containerd-runc-k8s.io-8b6c4eafeadf6a71f5194a84b3e1a297b5f38439f2bc73c39487372bda19881d-runc.mbWtWW.mount: Deactivated successfully. Sep 10 23:26:45.171748 systemd[1]: Started cri-containerd-8b6c4eafeadf6a71f5194a84b3e1a297b5f38439f2bc73c39487372bda19881d.scope - libcontainer container 8b6c4eafeadf6a71f5194a84b3e1a297b5f38439f2bc73c39487372bda19881d. Sep 10 23:26:45.172980 systemd[1]: Started cri-containerd-94a5acb34cef923594ebff58aa1600ea47b121a90e1934e63cd977bdbd8f2aae.scope - libcontainer container 94a5acb34cef923594ebff58aa1600ea47b121a90e1934e63cd977bdbd8f2aae. Sep 10 23:26:45.185140 systemd-resolved[1371]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:26:45.187406 systemd-resolved[1371]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:26:45.213230 containerd[1447]: time="2025-09-10T23:26:45.213182486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bv27d,Uid:5a18cacb-2921-4eb9-bf30-089f4b0772ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b6c4eafeadf6a71f5194a84b3e1a297b5f38439f2bc73c39487372bda19881d\"" Sep 10 23:26:45.213360 containerd[1447]: time="2025-09-10T23:26:45.213269806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jd4fw,Uid:b84d0a58-40c4-472f-bcae-cd9703ee3be6,Namespace:kube-system,Attempt:0,} returns sandbox id \"94a5acb34cef923594ebff58aa1600ea47b121a90e1934e63cd977bdbd8f2aae\"" Sep 10 23:26:45.217201 containerd[1447]: time="2025-09-10T23:26:45.217140258Z" level=info msg="CreateContainer within sandbox \"94a5acb34cef923594ebff58aa1600ea47b121a90e1934e63cd977bdbd8f2aae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 23:26:45.217320 containerd[1447]: time="2025-09-10T23:26:45.217140698Z" level=info msg="CreateContainer within sandbox \"8b6c4eafeadf6a71f5194a84b3e1a297b5f38439f2bc73c39487372bda19881d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 23:26:45.239295 containerd[1447]: time="2025-09-10T23:26:45.239241300Z" level=info msg="CreateContainer within sandbox \"94a5acb34cef923594ebff58aa1600ea47b121a90e1934e63cd977bdbd8f2aae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6123d7b096f4b6f51ded1d4bf62b47fca5f87d0b7fd722fe652030ce371f754d\"" Sep 10 23:26:45.241015 containerd[1447]: time="2025-09-10T23:26:45.240102854Z" level=info msg="StartContainer for \"6123d7b096f4b6f51ded1d4bf62b47fca5f87d0b7fd722fe652030ce371f754d\"" Sep 10 23:26:45.241159 containerd[1447]: time="2025-09-10T23:26:45.241110847Z" level=info msg="CreateContainer within sandbox \"8b6c4eafeadf6a71f5194a84b3e1a297b5f38439f2bc73c39487372bda19881d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"42aa8ed65cb23b2350e4110921374ce7baad99eda1369877041d90ca9e14c4c6\"" Sep 10 23:26:45.241599 containerd[1447]: time="2025-09-10T23:26:45.241571844Z" level=info msg="StartContainer for \"42aa8ed65cb23b2350e4110921374ce7baad99eda1369877041d90ca9e14c4c6\"" Sep 10 23:26:45.271750 systemd[1]: Started cri-containerd-6123d7b096f4b6f51ded1d4bf62b47fca5f87d0b7fd722fe652030ce371f754d.scope - libcontainer container 6123d7b096f4b6f51ded1d4bf62b47fca5f87d0b7fd722fe652030ce371f754d. Sep 10 23:26:45.274795 systemd[1]: Started cri-containerd-42aa8ed65cb23b2350e4110921374ce7baad99eda1369877041d90ca9e14c4c6.scope - libcontainer container 42aa8ed65cb23b2350e4110921374ce7baad99eda1369877041d90ca9e14c4c6.
Sep 10 23:26:45.301728 containerd[1447]: time="2025-09-10T23:26:45.301675695Z" level=info msg="StartContainer for \"6123d7b096f4b6f51ded1d4bf62b47fca5f87d0b7fd722fe652030ce371f754d\" returns successfully"
Sep 10 23:26:45.311046 containerd[1447]: time="2025-09-10T23:26:45.311001989Z" level=info msg="StartContainer for \"42aa8ed65cb23b2350e4110921374ce7baad99eda1369877041d90ca9e14c4c6\" returns successfully"
Sep 10 23:26:45.849546 kubelet[2558]: I0910 23:26:45.847684 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jd4fw" podStartSLOduration=29.847666001 podStartE2EDuration="29.847666001s" podCreationTimestamp="2025-09-10 23:26:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:26:45.847397603 +0000 UTC m=+34.205043438" watchObservedRunningTime="2025-09-10 23:26:45.847666001 +0000 UTC m=+34.205311836"
Sep 10 23:26:45.876260 kubelet[2558]: I0910 23:26:45.876191 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bv27d" podStartSLOduration=29.876170078 podStartE2EDuration="29.876170078s" podCreationTimestamp="2025-09-10 23:26:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:26:45.875965 +0000 UTC m=+34.233610834" watchObservedRunningTime="2025-09-10 23:26:45.876170078 +0000 UTC m=+34.233815913"
Sep 10 23:26:48.854291 systemd[1]: Started sshd@9-10.0.0.56:22-10.0.0.1:55016.service - OpenSSH per-connection server daemon (10.0.0.1:55016).
Sep 10 23:26:48.900348 sshd[4017]: Accepted publickey for core from 10.0.0.1 port 55016 ssh2: RSA SHA256:vI2v+Kj925DhJN+VWmdLDSx5Cqw/fvuZ8IHXlsQiGm4
Sep 10 23:26:48.902294 sshd-session[4017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:26:48.907495 systemd-logind[1435]: New session 10 of user core.
Sep 10 23:26:48.916767 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 10 23:26:49.038982 sshd[4019]: Connection closed by 10.0.0.1 port 55016
Sep 10 23:26:49.039560 sshd-session[4017]: pam_unix(sshd:session): session closed for user core
Sep 10 23:26:49.042990 systemd[1]: sshd@9-10.0.0.56:22-10.0.0.1:55016.service: Deactivated successfully.
Sep 10 23:26:49.044831 systemd[1]: session-10.scope: Deactivated successfully.
Sep 10 23:26:49.045543 systemd-logind[1435]: Session 10 logged out. Waiting for processes to exit.
Sep 10 23:26:49.046304 systemd-logind[1435]: Removed session 10.
Sep 10 23:26:54.064904 systemd[1]: Started sshd@10-10.0.0.56:22-10.0.0.1:37102.service - OpenSSH per-connection server daemon (10.0.0.1:37102).
Sep 10 23:26:54.111679 sshd[4035]: Accepted publickey for core from 10.0.0.1 port 37102 ssh2: RSA SHA256:vI2v+Kj925DhJN+VWmdLDSx5Cqw/fvuZ8IHXlsQiGm4
Sep 10 23:26:54.112741 sshd-session[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:26:54.120157 systemd-logind[1435]: New session 11 of user core.
Sep 10 23:26:54.122764 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 10 23:26:54.243862 sshd[4037]: Connection closed by 10.0.0.1 port 37102
Sep 10 23:26:54.245364 sshd-session[4035]: pam_unix(sshd:session): session closed for user core
Sep 10 23:26:54.256062 systemd[1]: sshd@10-10.0.0.56:22-10.0.0.1:37102.service: Deactivated successfully.
Sep 10 23:26:54.258750 systemd[1]: session-11.scope: Deactivated successfully.
Sep 10 23:26:54.259688 systemd-logind[1435]: Session 11 logged out. Waiting for processes to exit.
Sep 10 23:26:54.270267 systemd[1]: Started sshd@11-10.0.0.56:22-10.0.0.1:37112.service - OpenSSH per-connection server daemon (10.0.0.1:37112).
Sep 10 23:26:54.271925 systemd-logind[1435]: Removed session 11.
Sep 10 23:26:54.307675 sshd[4050]: Accepted publickey for core from 10.0.0.1 port 37112 ssh2: RSA SHA256:vI2v+Kj925DhJN+VWmdLDSx5Cqw/fvuZ8IHXlsQiGm4
Sep 10 23:26:54.309062 sshd-session[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:26:54.315544 systemd-logind[1435]: New session 12 of user core.
Sep 10 23:26:54.329912 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 10 23:26:54.520084 sshd[4053]: Connection closed by 10.0.0.1 port 37112
Sep 10 23:26:54.520707 sshd-session[4050]: pam_unix(sshd:session): session closed for user core
Sep 10 23:26:54.539492 systemd[1]: Started sshd@12-10.0.0.56:22-10.0.0.1:37120.service - OpenSSH per-connection server daemon (10.0.0.1:37120).
Sep 10 23:26:54.542133 systemd[1]: sshd@11-10.0.0.56:22-10.0.0.1:37112.service: Deactivated successfully.
Sep 10 23:26:54.545858 systemd[1]: session-12.scope: Deactivated successfully.
Sep 10 23:26:54.548489 systemd-logind[1435]: Session 12 logged out. Waiting for processes to exit.
Sep 10 23:26:54.554709 systemd-logind[1435]: Removed session 12.
Sep 10 23:26:54.588144 sshd[4062]: Accepted publickey for core from 10.0.0.1 port 37120 ssh2: RSA SHA256:vI2v+Kj925DhJN+VWmdLDSx5Cqw/fvuZ8IHXlsQiGm4
Sep 10 23:26:54.589866 sshd-session[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:26:54.595252 systemd-logind[1435]: New session 13 of user core.
Sep 10 23:26:54.609821 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 10 23:26:54.727635 sshd[4067]: Connection closed by 10.0.0.1 port 37120
Sep 10 23:26:54.728091 sshd-session[4062]: pam_unix(sshd:session): session closed for user core
Sep 10 23:26:54.731606 systemd[1]: sshd@12-10.0.0.56:22-10.0.0.1:37120.service: Deactivated successfully.
Sep 10 23:26:54.734889 systemd[1]: session-13.scope: Deactivated successfully.
Sep 10 23:26:54.735923 systemd-logind[1435]: Session 13 logged out. Waiting for processes to exit.
Sep 10 23:26:54.736904 systemd-logind[1435]: Removed session 13.
Sep 10 23:26:59.741194 systemd[1]: Started sshd@13-10.0.0.56:22-10.0.0.1:37136.service - OpenSSH per-connection server daemon (10.0.0.1:37136).
Sep 10 23:26:59.780018 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 37136 ssh2: RSA SHA256:vI2v+Kj925DhJN+VWmdLDSx5Cqw/fvuZ8IHXlsQiGm4
Sep 10 23:26:59.781778 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:26:59.785341 systemd-logind[1435]: New session 14 of user core.
Sep 10 23:26:59.795692 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 10 23:26:59.907422 sshd[4082]: Connection closed by 10.0.0.1 port 37136
Sep 10 23:26:59.907987 sshd-session[4080]: pam_unix(sshd:session): session closed for user core
Sep 10 23:26:59.911124 systemd-logind[1435]: Session 14 logged out. Waiting for processes to exit.
Sep 10 23:26:59.911412 systemd[1]: sshd@13-10.0.0.56:22-10.0.0.1:37136.service: Deactivated successfully.
Sep 10 23:26:59.913112 systemd[1]: session-14.scope: Deactivated successfully.
Sep 10 23:26:59.914111 systemd-logind[1435]: Removed session 14.
Sep 10 23:27:04.932831 systemd[1]: Started sshd@14-10.0.0.56:22-10.0.0.1:34080.service - OpenSSH per-connection server daemon (10.0.0.1:34080).
Sep 10 23:27:04.975302 sshd[4095]: Accepted publickey for core from 10.0.0.1 port 34080 ssh2: RSA SHA256:vI2v+Kj925DhJN+VWmdLDSx5Cqw/fvuZ8IHXlsQiGm4
Sep 10 23:27:04.981343 sshd-session[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:27:04.989402 systemd-logind[1435]: New session 15 of user core.
Sep 10 23:27:04.996753 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 10 23:27:05.118671 sshd[4097]: Connection closed by 10.0.0.1 port 34080
Sep 10 23:27:05.119362 sshd-session[4095]: pam_unix(sshd:session): session closed for user core
Sep 10 23:27:05.133219 systemd[1]: sshd@14-10.0.0.56:22-10.0.0.1:34080.service: Deactivated successfully.
Sep 10 23:27:05.135997 systemd[1]: session-15.scope: Deactivated successfully.
Sep 10 23:27:05.137300 systemd-logind[1435]: Session 15 logged out. Waiting for processes to exit.
Sep 10 23:27:05.143846 systemd[1]: Started sshd@15-10.0.0.56:22-10.0.0.1:34082.service - OpenSSH per-connection server daemon (10.0.0.1:34082).
Sep 10 23:27:05.144728 systemd-logind[1435]: Removed session 15.
Sep 10 23:27:05.189705 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 34082 ssh2: RSA SHA256:vI2v+Kj925DhJN+VWmdLDSx5Cqw/fvuZ8IHXlsQiGm4
Sep 10 23:27:05.190290 sshd-session[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:27:05.194596 systemd-logind[1435]: New session 16 of user core.
Sep 10 23:27:05.208751 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 10 23:27:05.396065 sshd[4112]: Connection closed by 10.0.0.1 port 34082
Sep 10 23:27:05.396646 sshd-session[4109]: pam_unix(sshd:session): session closed for user core
Sep 10 23:27:05.406864 systemd[1]: sshd@15-10.0.0.56:22-10.0.0.1:34082.service: Deactivated successfully.
Sep 10 23:27:05.408752 systemd[1]: session-16.scope: Deactivated successfully.
Sep 10 23:27:05.409404 systemd-logind[1435]: Session 16 logged out. Waiting for processes to exit.
Sep 10 23:27:05.420857 systemd[1]: Started sshd@16-10.0.0.56:22-10.0.0.1:34090.service - OpenSSH per-connection server daemon (10.0.0.1:34090).
Sep 10 23:27:05.421822 systemd-logind[1435]: Removed session 16.
Sep 10 23:27:05.462421 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 34090 ssh2: RSA SHA256:vI2v+Kj925DhJN+VWmdLDSx5Cqw/fvuZ8IHXlsQiGm4
Sep 10 23:27:05.463626 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:27:05.468046 systemd-logind[1435]: New session 17 of user core.
Sep 10 23:27:05.474729 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 10 23:27:06.103271 sshd[4125]: Connection closed by 10.0.0.1 port 34090
Sep 10 23:27:06.103771 sshd-session[4122]: pam_unix(sshd:session): session closed for user core
Sep 10 23:27:06.113354 systemd[1]: sshd@16-10.0.0.56:22-10.0.0.1:34090.service: Deactivated successfully.
Sep 10 23:27:06.115629 systemd[1]: session-17.scope: Deactivated successfully.
Sep 10 23:27:06.119431 systemd-logind[1435]: Session 17 logged out. Waiting for processes to exit.
Sep 10 23:27:06.128151 systemd[1]: Started sshd@17-10.0.0.56:22-10.0.0.1:34106.service - OpenSSH per-connection server daemon (10.0.0.1:34106).
Sep 10 23:27:06.131012 systemd-logind[1435]: Removed session 17.
Sep 10 23:27:06.172496 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 34106 ssh2: RSA SHA256:vI2v+Kj925DhJN+VWmdLDSx5Cqw/fvuZ8IHXlsQiGm4
Sep 10 23:27:06.173950 sshd-session[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:27:06.178446 systemd-logind[1435]: New session 18 of user core.
Sep 10 23:27:06.187688 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 10 23:27:06.407389 sshd[4146]: Connection closed by 10.0.0.1 port 34106
Sep 10 23:27:06.407727 sshd-session[4142]: pam_unix(sshd:session): session closed for user core
Sep 10 23:27:06.420304 systemd[1]: sshd@17-10.0.0.56:22-10.0.0.1:34106.service: Deactivated successfully.
Sep 10 23:27:06.423082 systemd[1]: session-18.scope: Deactivated successfully.
Sep 10 23:27:06.425087 systemd-logind[1435]: Session 18 logged out. Waiting for processes to exit.
Sep 10 23:27:06.433836 systemd[1]: Started sshd@18-10.0.0.56:22-10.0.0.1:34110.service - OpenSSH per-connection server daemon (10.0.0.1:34110).
Sep 10 23:27:06.435105 systemd-logind[1435]: Removed session 18.
Sep 10 23:27:06.476286 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 34110 ssh2: RSA SHA256:vI2v+Kj925DhJN+VWmdLDSx5Cqw/fvuZ8IHXlsQiGm4
Sep 10 23:27:06.477823 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:27:06.486941 systemd-logind[1435]: New session 19 of user core.
Sep 10 23:27:06.493714 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 10 23:27:06.603217 sshd[4159]: Connection closed by 10.0.0.1 port 34110
Sep 10 23:27:06.603598 sshd-session[4156]: pam_unix(sshd:session): session closed for user core
Sep 10 23:27:06.607029 systemd[1]: sshd@18-10.0.0.56:22-10.0.0.1:34110.service: Deactivated successfully.
Sep 10 23:27:06.608779 systemd[1]: session-19.scope: Deactivated successfully.
Sep 10 23:27:06.609440 systemd-logind[1435]: Session 19 logged out. Waiting for processes to exit.
Sep 10 23:27:06.610150 systemd-logind[1435]: Removed session 19.
Sep 10 23:27:11.615008 systemd[1]: Started sshd@19-10.0.0.56:22-10.0.0.1:37460.service - OpenSSH per-connection server daemon (10.0.0.1:37460).
Sep 10 23:27:11.657938 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 37460 ssh2: RSA SHA256:vI2v+Kj925DhJN+VWmdLDSx5Cqw/fvuZ8IHXlsQiGm4
Sep 10 23:27:11.659342 sshd-session[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:27:11.664237 systemd-logind[1435]: New session 20 of user core.
Sep 10 23:27:11.676703 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 10 23:27:11.800406 sshd[4178]: Connection closed by 10.0.0.1 port 37460
Sep 10 23:27:11.800753 sshd-session[4176]: pam_unix(sshd:session): session closed for user core
Sep 10 23:27:11.803857 systemd[1]: sshd@19-10.0.0.56:22-10.0.0.1:37460.service: Deactivated successfully.
Sep 10 23:27:11.805496 systemd[1]: session-20.scope: Deactivated successfully.
Sep 10 23:27:11.806216 systemd-logind[1435]: Session 20 logged out. Waiting for processes to exit.
Sep 10 23:27:11.807032 systemd-logind[1435]: Removed session 20.
Sep 10 23:27:16.828937 systemd[1]: Started sshd@20-10.0.0.56:22-10.0.0.1:37470.service - OpenSSH per-connection server daemon (10.0.0.1:37470).
Sep 10 23:27:16.866075 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 37470 ssh2: RSA SHA256:vI2v+Kj925DhJN+VWmdLDSx5Cqw/fvuZ8IHXlsQiGm4
Sep 10 23:27:16.867910 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:27:16.874224 systemd-logind[1435]: New session 21 of user core.
Sep 10 23:27:16.882700 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 10 23:27:16.991536 sshd[4197]: Connection closed by 10.0.0.1 port 37470
Sep 10 23:27:16.991972 sshd-session[4195]: pam_unix(sshd:session): session closed for user core
Sep 10 23:27:16.995296 systemd[1]: sshd@20-10.0.0.56:22-10.0.0.1:37470.service: Deactivated successfully.
Sep 10 23:27:16.997059 systemd[1]: session-21.scope: Deactivated successfully.
Sep 10 23:27:16.999242 systemd-logind[1435]: Session 21 logged out. Waiting for processes to exit.
Sep 10 23:27:17.000314 systemd-logind[1435]: Removed session 21.
Sep 10 23:27:22.003994 systemd[1]: Started sshd@21-10.0.0.56:22-10.0.0.1:33380.service - OpenSSH per-connection server daemon (10.0.0.1:33380).
Sep 10 23:27:22.042621 sshd[4212]: Accepted publickey for core from 10.0.0.1 port 33380 ssh2: RSA SHA256:vI2v+Kj925DhJN+VWmdLDSx5Cqw/fvuZ8IHXlsQiGm4
Sep 10 23:27:22.043848 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:27:22.047733 systemd-logind[1435]: New session 22 of user core.
Sep 10 23:27:22.055710 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 10 23:27:22.167411 sshd[4214]: Connection closed by 10.0.0.1 port 33380
Sep 10 23:27:22.167808 sshd-session[4212]: pam_unix(sshd:session): session closed for user core
Sep 10 23:27:22.179956 systemd[1]: sshd@21-10.0.0.56:22-10.0.0.1:33380.service: Deactivated successfully.
Sep 10 23:27:22.183102 systemd[1]: session-22.scope: Deactivated successfully.
Sep 10 23:27:22.183956 systemd-logind[1435]: Session 22 logged out. Waiting for processes to exit.
Sep 10 23:27:22.190937 systemd[1]: Started sshd@22-10.0.0.56:22-10.0.0.1:33384.service - OpenSSH per-connection server daemon (10.0.0.1:33384).
Sep 10 23:27:22.192333 systemd-logind[1435]: Removed session 22.
Sep 10 23:27:22.227364 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 33384 ssh2: RSA SHA256:vI2v+Kj925DhJN+VWmdLDSx5Cqw/fvuZ8IHXlsQiGm4
Sep 10 23:27:22.228695 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:27:22.232573 systemd-logind[1435]: New session 23 of user core.
Sep 10 23:27:22.239704 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 10 23:27:24.840180 containerd[1447]: time="2025-09-10T23:27:24.840030462Z" level=info msg="StopContainer for \"fbb138fd7afbb184c419f22f93dd0f97f68af54bf5de9b5a8521c8dfadb18a92\" with timeout 30 (s)"
Sep 10 23:27:24.841122 containerd[1447]: time="2025-09-10T23:27:24.840906517Z" level=info msg="Stop container \"fbb138fd7afbb184c419f22f93dd0f97f68af54bf5de9b5a8521c8dfadb18a92\" with signal terminated"
Sep 10 23:27:24.851832 systemd[1]: cri-containerd-fbb138fd7afbb184c419f22f93dd0f97f68af54bf5de9b5a8521c8dfadb18a92.scope: Deactivated successfully.
Sep 10 23:27:24.863989 containerd[1447]: time="2025-09-10T23:27:24.863940926Z" level=info msg="StopContainer for \"5f528f51e718b22be53ca78d09b77a6abdaa1ca565aefa62cf55904c692dcf03\" with timeout 2 (s)"
Sep 10 23:27:24.864540 containerd[1447]: time="2025-09-10T23:27:24.864411935Z" level=info msg="Stop container \"5f528f51e718b22be53ca78d09b77a6abdaa1ca565aefa62cf55904c692dcf03\" with signal terminated"
Sep 10 23:27:24.868951 containerd[1447]: time="2025-09-10T23:27:24.868897454Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 10 23:27:24.870711 systemd-networkd[1368]: lxc_health: Link DOWN
Sep 10 23:27:24.870717 systemd-networkd[1368]: lxc_health: Lost carrier
Sep 10 23:27:24.890667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbb138fd7afbb184c419f22f93dd0f97f68af54bf5de9b5a8521c8dfadb18a92-rootfs.mount: Deactivated successfully.
Sep 10 23:27:24.893280 systemd[1]: cri-containerd-5f528f51e718b22be53ca78d09b77a6abdaa1ca565aefa62cf55904c692dcf03.scope: Deactivated successfully.
Sep 10 23:27:24.893768 systemd[1]: cri-containerd-5f528f51e718b22be53ca78d09b77a6abdaa1ca565aefa62cf55904c692dcf03.scope: Consumed 6.450s CPU time, 124.3M memory peak, 148K read from disk, 12.9M written to disk.
Sep 10 23:27:24.906382 containerd[1447]: time="2025-09-10T23:27:24.906318279Z" level=info msg="shim disconnected" id=fbb138fd7afbb184c419f22f93dd0f97f68af54bf5de9b5a8521c8dfadb18a92 namespace=k8s.io
Sep 10 23:27:24.906382 containerd[1447]: time="2025-09-10T23:27:24.906373320Z" level=warning msg="cleaning up after shim disconnected" id=fbb138fd7afbb184c419f22f93dd0f97f68af54bf5de9b5a8521c8dfadb18a92 namespace=k8s.io
Sep 10 23:27:24.906382 containerd[1447]: time="2025-09-10T23:27:24.906385240Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 23:27:24.913851 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f528f51e718b22be53ca78d09b77a6abdaa1ca565aefa62cf55904c692dcf03-rootfs.mount: Deactivated successfully.
Sep 10 23:27:24.917705 containerd[1447]: time="2025-09-10T23:27:24.917467637Z" level=info msg="shim disconnected" id=5f528f51e718b22be53ca78d09b77a6abdaa1ca565aefa62cf55904c692dcf03 namespace=k8s.io
Sep 10 23:27:24.917705 containerd[1447]: time="2025-09-10T23:27:24.917558479Z" level=warning msg="cleaning up after shim disconnected" id=5f528f51e718b22be53ca78d09b77a6abdaa1ca565aefa62cf55904c692dcf03 namespace=k8s.io
Sep 10 23:27:24.917705 containerd[1447]: time="2025-09-10T23:27:24.917569159Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 23:27:24.936032 containerd[1447]: time="2025-09-10T23:27:24.935975286Z" level=info msg="StopContainer for \"fbb138fd7afbb184c419f22f93dd0f97f68af54bf5de9b5a8521c8dfadb18a92\" returns successfully"
Sep 10 23:27:24.936588 containerd[1447]: time="2025-09-10T23:27:24.936562377Z" level=info msg="StopPodSandbox for \"8e07fb59cee7f449e57356c7d22fb46bdc8b96e9e09df5b77555d25ab92f8747\""
Sep 10 23:27:24.936654 containerd[1447]: time="2025-09-10T23:27:24.936605577Z" level=info msg="Container to stop \"fbb138fd7afbb184c419f22f93dd0f97f68af54bf5de9b5a8521c8dfadb18a92\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 23:27:24.938583 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8e07fb59cee7f449e57356c7d22fb46bdc8b96e9e09df5b77555d25ab92f8747-shm.mount: Deactivated successfully.
Sep 10 23:27:24.943788 containerd[1447]: time="2025-09-10T23:27:24.943734544Z" level=info msg="StopContainer for \"5f528f51e718b22be53ca78d09b77a6abdaa1ca565aefa62cf55904c692dcf03\" returns successfully"
Sep 10 23:27:24.944314 containerd[1447]: time="2025-09-10T23:27:24.944287314Z" level=info msg="StopPodSandbox for \"779c93d8bd9cd37089aa2f7d7bdf0d05916916e94ece03efbfebf464b1335948\""
Sep 10 23:27:24.944367 containerd[1447]: time="2025-09-10T23:27:24.944327394Z" level=info msg="Container to stop \"2812baf25d29933460ccf26a85a95313d51e50930df6966743060112511223ee\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 23:27:24.944367 containerd[1447]: time="2025-09-10T23:27:24.944339955Z" level=info msg="Container to stop \"8e1fea60f65290679a1c6c84794e400e781f04b2f6d81ccbe89cd736cde640b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 23:27:24.944367 containerd[1447]: time="2025-09-10T23:27:24.944349955Z" level=info msg="Container to stop \"5f528f51e718b22be53ca78d09b77a6abdaa1ca565aefa62cf55904c692dcf03\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 23:27:24.944367 containerd[1447]: time="2025-09-10T23:27:24.944360675Z" level=info msg="Container to stop \"5f786e2b3d6796f06cfdb3b66aafcbc28d4691509bdf1a5145ca3ea03fcf3665\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 23:27:24.944456 containerd[1447]: time="2025-09-10T23:27:24.944369995Z" level=info msg="Container to stop \"a13a7b73a2c093f572808bc5df8e515aa338ef9048d4764001310586f61376e5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 23:27:24.945199 systemd[1]: cri-containerd-8e07fb59cee7f449e57356c7d22fb46bdc8b96e9e09df5b77555d25ab92f8747.scope: Deactivated successfully.
Sep 10 23:27:24.949033 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-779c93d8bd9cd37089aa2f7d7bdf0d05916916e94ece03efbfebf464b1335948-shm.mount: Deactivated successfully.
Sep 10 23:27:24.961595 systemd[1]: cri-containerd-779c93d8bd9cd37089aa2f7d7bdf0d05916916e94ece03efbfebf464b1335948.scope: Deactivated successfully.
Sep 10 23:27:24.984029 containerd[1447]: time="2025-09-10T23:27:24.983966579Z" level=info msg="shim disconnected" id=8e07fb59cee7f449e57356c7d22fb46bdc8b96e9e09df5b77555d25ab92f8747 namespace=k8s.io
Sep 10 23:27:24.984029 containerd[1447]: time="2025-09-10T23:27:24.984021460Z" level=warning msg="cleaning up after shim disconnected" id=8e07fb59cee7f449e57356c7d22fb46bdc8b96e9e09df5b77555d25ab92f8747 namespace=k8s.io
Sep 10 23:27:24.984029 containerd[1447]: time="2025-09-10T23:27:24.984030260Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 23:27:24.990674 containerd[1447]: time="2025-09-10T23:27:24.990393373Z" level=info msg="shim disconnected" id=779c93d8bd9cd37089aa2f7d7bdf0d05916916e94ece03efbfebf464b1335948 namespace=k8s.io
Sep 10 23:27:24.990674 containerd[1447]: time="2025-09-10T23:27:24.990460054Z" level=warning msg="cleaning up after shim disconnected" id=779c93d8bd9cd37089aa2f7d7bdf0d05916916e94ece03efbfebf464b1335948 namespace=k8s.io
Sep 10 23:27:24.990674 containerd[1447]: time="2025-09-10T23:27:24.990468654Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 23:27:24.997830 containerd[1447]: time="2025-09-10T23:27:24.997748343Z" level=warning msg="cleanup warnings time=\"2025-09-10T23:27:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 10 23:27:24.998870 containerd[1447]: time="2025-09-10T23:27:24.998801762Z" level=info msg="TearDown network for sandbox \"8e07fb59cee7f449e57356c7d22fb46bdc8b96e9e09df5b77555d25ab92f8747\" successfully"
Sep 10 23:27:24.998870 containerd[1447]: time="2025-09-10T23:27:24.998835643Z" level=info msg="StopPodSandbox for \"8e07fb59cee7f449e57356c7d22fb46bdc8b96e9e09df5b77555d25ab92f8747\" returns successfully"
Sep 10 23:27:25.014557 containerd[1447]: time="2025-09-10T23:27:25.014245069Z" level=info msg="TearDown network for sandbox \"779c93d8bd9cd37089aa2f7d7bdf0d05916916e94ece03efbfebf464b1335948\" successfully"
Sep 10 23:27:25.014557 containerd[1447]: time="2025-09-10T23:27:25.014281990Z" level=info msg="StopPodSandbox for \"779c93d8bd9cd37089aa2f7d7bdf0d05916916e94ece03efbfebf464b1335948\" returns successfully"
Sep 10 23:27:25.097588 kubelet[2558]: I0910 23:27:25.097445 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-hubble-tls\") pod \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") "
Sep 10 23:27:25.097588 kubelet[2558]: I0910 23:27:25.097489 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0cf88d9a-f1e3-497a-925f-f5fa75f070b0-cilium-config-path\") pod \"0cf88d9a-f1e3-497a-925f-f5fa75f070b0\" (UID: \"0cf88d9a-f1e3-497a-925f-f5fa75f070b0\") "
Sep 10 23:27:25.097588 kubelet[2558]: I0910 23:27:25.097511 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r78d\" (UniqueName: \"kubernetes.io/projected/0cf88d9a-f1e3-497a-925f-f5fa75f070b0-kube-api-access-9r78d\") pod \"0cf88d9a-f1e3-497a-925f-f5fa75f070b0\" (UID: \"0cf88d9a-f1e3-497a-925f-f5fa75f070b0\") "
Sep 10 23:27:25.097588 kubelet[2558]: I0910 23:27:25.097549 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-xtables-lock\") pod \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") "
Sep 10 23:27:25.097588 kubelet[2558]: I0910 23:27:25.097567 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-hostproc\") pod \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") "
Sep 10 23:27:25.097588 kubelet[2558]: I0910 23:27:25.097582 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-cni-path\") pod \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") "
Sep 10 23:27:25.098141 kubelet[2558]: I0910 23:27:25.097605 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-clustermesh-secrets\") pod \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") "
Sep 10 23:27:25.098141 kubelet[2558]: I0910 23:27:25.097621 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-cilium-run\") pod \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") "
Sep 10 23:27:25.098141 kubelet[2558]: I0910 23:27:25.097637 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-host-proc-sys-kernel\") pod \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") "
Sep 10 23:27:25.098141 kubelet[2558]: I0910 23:27:25.097653 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-cilium-config-path\") pod \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") "
Sep 10 23:27:25.098141 kubelet[2558]: I0910 23:27:25.097671 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn8kx\" (UniqueName: \"kubernetes.io/projected/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-kube-api-access-pn8kx\") pod \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") "
Sep 10 23:27:25.098141 kubelet[2558]: I0910 23:27:25.097687 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-bpf-maps\") pod \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") "
Sep 10 23:27:25.098266 kubelet[2558]: I0910 23:27:25.097704 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-lib-modules\") pod \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") "
Sep 10 23:27:25.098266 kubelet[2558]: I0910 23:27:25.097717 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-etc-cni-netd\") pod \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") "
Sep 10 23:27:25.098266 kubelet[2558]: I0910 23:27:25.097731 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-cilium-cgroup\") pod \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") "
Sep 10 23:27:25.098266 kubelet[2558]: I0910 23:27:25.097754 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-host-proc-sys-net\") pod \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\" (UID: \"ffb4bf92-b9d9-4249-8d46-47c84c3389c4\") "
Sep 10 23:27:25.102267 kubelet[2558]: I0910 23:27:25.102163 2558 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-hostproc" (OuterVolumeSpecName: "hostproc") pod "ffb4bf92-b9d9-4249-8d46-47c84c3389c4" (UID: "ffb4bf92-b9d9-4249-8d46-47c84c3389c4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 10 23:27:25.102267 kubelet[2558]: I0910 23:27:25.102230 2558 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-cni-path" (OuterVolumeSpecName: "cni-path") pod "ffb4bf92-b9d9-4249-8d46-47c84c3389c4" (UID: "ffb4bf92-b9d9-4249-8d46-47c84c3389c4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 10 23:27:25.104562 kubelet[2558]: I0910 23:27:25.103620 2558 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ffb4bf92-b9d9-4249-8d46-47c84c3389c4" (UID: "ffb4bf92-b9d9-4249-8d46-47c84c3389c4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 10 23:27:25.104562 kubelet[2558]: I0910 23:27:25.103654 2558 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ffb4bf92-b9d9-4249-8d46-47c84c3389c4" (UID: "ffb4bf92-b9d9-4249-8d46-47c84c3389c4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 10 23:27:25.104562 kubelet[2558]: I0910 23:27:25.103665 2558 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ffb4bf92-b9d9-4249-8d46-47c84c3389c4" (UID: "ffb4bf92-b9d9-4249-8d46-47c84c3389c4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 10 23:27:25.104562 kubelet[2558]: I0910 23:27:25.103729 2558 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ffb4bf92-b9d9-4249-8d46-47c84c3389c4" (UID: "ffb4bf92-b9d9-4249-8d46-47c84c3389c4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 10 23:27:25.104562 kubelet[2558]: I0910 23:27:25.103762 2558 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ffb4bf92-b9d9-4249-8d46-47c84c3389c4" (UID: "ffb4bf92-b9d9-4249-8d46-47c84c3389c4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 10 23:27:25.105005 kubelet[2558]: I0910 23:27:25.104968 2558 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ffb4bf92-b9d9-4249-8d46-47c84c3389c4" (UID: "ffb4bf92-b9d9-4249-8d46-47c84c3389c4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 10 23:27:25.105069 kubelet[2558]: I0910 23:27:25.105015 2558 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ffb4bf92-b9d9-4249-8d46-47c84c3389c4" (UID: "ffb4bf92-b9d9-4249-8d46-47c84c3389c4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 10 23:27:25.105069 kubelet[2558]: I0910 23:27:25.105044 2558 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ffb4bf92-b9d9-4249-8d46-47c84c3389c4" (UID: "ffb4bf92-b9d9-4249-8d46-47c84c3389c4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 10 23:27:25.105241 kubelet[2558]: I0910 23:27:25.105213 2558 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ffb4bf92-b9d9-4249-8d46-47c84c3389c4" (UID: "ffb4bf92-b9d9-4249-8d46-47c84c3389c4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 10 23:27:25.108510 kubelet[2558]: I0910 23:27:25.108454 2558 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ffb4bf92-b9d9-4249-8d46-47c84c3389c4" (UID: "ffb4bf92-b9d9-4249-8d46-47c84c3389c4"). InnerVolumeSpecName "clustermesh-secrets".
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 10 23:27:25.109026 kubelet[2558]: I0910 23:27:25.108985 2558 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf88d9a-f1e3-497a-925f-f5fa75f070b0-kube-api-access-9r78d" (OuterVolumeSpecName: "kube-api-access-9r78d") pod "0cf88d9a-f1e3-497a-925f-f5fa75f070b0" (UID: "0cf88d9a-f1e3-497a-925f-f5fa75f070b0"). InnerVolumeSpecName "kube-api-access-9r78d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 23:27:25.109079 kubelet[2558]: I0910 23:27:25.109063 2558 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-kube-api-access-pn8kx" (OuterVolumeSpecName: "kube-api-access-pn8kx") pod "ffb4bf92-b9d9-4249-8d46-47c84c3389c4" (UID: "ffb4bf92-b9d9-4249-8d46-47c84c3389c4"). InnerVolumeSpecName "kube-api-access-pn8kx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 23:27:25.109247 kubelet[2558]: I0910 23:27:25.109209 2558 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cf88d9a-f1e3-497a-925f-f5fa75f070b0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0cf88d9a-f1e3-497a-925f-f5fa75f070b0" (UID: "0cf88d9a-f1e3-497a-925f-f5fa75f070b0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 10 23:27:25.110360 kubelet[2558]: I0910 23:27:25.110324 2558 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ffb4bf92-b9d9-4249-8d46-47c84c3389c4" (UID: "ffb4bf92-b9d9-4249-8d46-47c84c3389c4"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 10 23:27:25.198806 kubelet[2558]: I0910 23:27:25.198749 2558 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pn8kx\" (UniqueName: \"kubernetes.io/projected/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-kube-api-access-pn8kx\") on node \"localhost\" DevicePath \"\"" Sep 10 23:27:25.199520 kubelet[2558]: I0910 23:27:25.199485 2558 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 10 23:27:25.199520 kubelet[2558]: I0910 23:27:25.199514 2558 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 10 23:27:25.199520 kubelet[2558]: I0910 23:27:25.199536 2558 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 10 23:27:25.199644 kubelet[2558]: I0910 23:27:25.199545 2558 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 10 23:27:25.199644 kubelet[2558]: I0910 23:27:25.199555 2558 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 10 23:27:25.199644 kubelet[2558]: I0910 23:27:25.199563 2558 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0cf88d9a-f1e3-497a-925f-f5fa75f070b0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 10 23:27:25.199644 kubelet[2558]: 
I0910 23:27:25.199571 2558 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9r78d\" (UniqueName: \"kubernetes.io/projected/0cf88d9a-f1e3-497a-925f-f5fa75f070b0-kube-api-access-9r78d\") on node \"localhost\" DevicePath \"\"" Sep 10 23:27:25.199644 kubelet[2558]: I0910 23:27:25.199579 2558 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 10 23:27:25.199644 kubelet[2558]: I0910 23:27:25.199587 2558 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 10 23:27:25.199644 kubelet[2558]: I0910 23:27:25.199594 2558 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 10 23:27:25.199644 kubelet[2558]: I0910 23:27:25.199602 2558 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 10 23:27:25.199811 kubelet[2558]: I0910 23:27:25.199610 2558 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 10 23:27:25.199811 kubelet[2558]: I0910 23:27:25.199618 2558 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 10 23:27:25.199811 kubelet[2558]: I0910 23:27:25.199625 2558 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 10 23:27:25.199811 kubelet[2558]: I0910 23:27:25.199633 2558 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ffb4bf92-b9d9-4249-8d46-47c84c3389c4-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 10 23:27:25.741258 systemd[1]: Removed slice kubepods-besteffort-pod0cf88d9a_f1e3_497a_925f_f5fa75f070b0.slice - libcontainer container kubepods-besteffort-pod0cf88d9a_f1e3_497a_925f_f5fa75f070b0.slice. Sep 10 23:27:25.742420 systemd[1]: Removed slice kubepods-burstable-podffb4bf92_b9d9_4249_8d46_47c84c3389c4.slice - libcontainer container kubepods-burstable-podffb4bf92_b9d9_4249_8d46_47c84c3389c4.slice. Sep 10 23:27:25.742519 systemd[1]: kubepods-burstable-podffb4bf92_b9d9_4249_8d46_47c84c3389c4.slice: Consumed 6.531s CPU time, 124.6M memory peak, 164K read from disk, 12.9M written to disk. Sep 10 23:27:25.841587 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e07fb59cee7f449e57356c7d22fb46bdc8b96e9e09df5b77555d25ab92f8747-rootfs.mount: Deactivated successfully. Sep 10 23:27:25.841693 systemd[1]: var-lib-kubelet-pods-0cf88d9a\x2df1e3\x2d497a\x2d925f\x2df5fa75f070b0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9r78d.mount: Deactivated successfully. Sep 10 23:27:25.841749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-779c93d8bd9cd37089aa2f7d7bdf0d05916916e94ece03efbfebf464b1335948-rootfs.mount: Deactivated successfully. Sep 10 23:27:25.841832 systemd[1]: var-lib-kubelet-pods-ffb4bf92\x2db9d9\x2d4249\x2d8d46\x2d47c84c3389c4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpn8kx.mount: Deactivated successfully. Sep 10 23:27:25.841890 systemd[1]: var-lib-kubelet-pods-ffb4bf92\x2db9d9\x2d4249\x2d8d46\x2d47c84c3389c4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 10 23:27:25.841949 systemd[1]: var-lib-kubelet-pods-ffb4bf92\x2db9d9\x2d4249\x2d8d46\x2d47c84c3389c4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 10 23:27:25.929782 kubelet[2558]: I0910 23:27:25.929723 2558 scope.go:117] "RemoveContainer" containerID="5f528f51e718b22be53ca78d09b77a6abdaa1ca565aefa62cf55904c692dcf03" Sep 10 23:27:25.933017 containerd[1447]: time="2025-09-10T23:27:25.932973102Z" level=info msg="RemoveContainer for \"5f528f51e718b22be53ca78d09b77a6abdaa1ca565aefa62cf55904c692dcf03\"" Sep 10 23:27:25.941534 containerd[1447]: time="2025-09-10T23:27:25.941477528Z" level=info msg="RemoveContainer for \"5f528f51e718b22be53ca78d09b77a6abdaa1ca565aefa62cf55904c692dcf03\" returns successfully" Sep 10 23:27:25.942071 kubelet[2558]: I0910 23:27:25.942045 2558 scope.go:117] "RemoveContainer" containerID="8e1fea60f65290679a1c6c84794e400e781f04b2f6d81ccbe89cd736cde640b3" Sep 10 23:27:25.943666 containerd[1447]: time="2025-09-10T23:27:25.943634765Z" level=info msg="RemoveContainer for \"8e1fea60f65290679a1c6c84794e400e781f04b2f6d81ccbe89cd736cde640b3\"" Sep 10 23:27:25.961100 containerd[1447]: time="2025-09-10T23:27:25.961023344Z" level=info msg="RemoveContainer for \"8e1fea60f65290679a1c6c84794e400e781f04b2f6d81ccbe89cd736cde640b3\" returns successfully" Sep 10 23:27:25.961890 kubelet[2558]: I0910 23:27:25.961357 2558 scope.go:117] "RemoveContainer" containerID="a13a7b73a2c093f572808bc5df8e515aa338ef9048d4764001310586f61376e5" Sep 10 23:27:25.965717 containerd[1447]: time="2025-09-10T23:27:25.965644184Z" level=info msg="RemoveContainer for \"a13a7b73a2c093f572808bc5df8e515aa338ef9048d4764001310586f61376e5\"" Sep 10 23:27:25.975562 containerd[1447]: time="2025-09-10T23:27:25.975433312Z" level=info msg="RemoveContainer for \"a13a7b73a2c093f572808bc5df8e515aa338ef9048d4764001310586f61376e5\" returns successfully" Sep 10 23:27:25.975890 kubelet[2558]: I0910 23:27:25.975859 2558 scope.go:117] "RemoveContainer" 
containerID="2812baf25d29933460ccf26a85a95313d51e50930df6966743060112511223ee" Sep 10 23:27:25.977048 containerd[1447]: time="2025-09-10T23:27:25.977024459Z" level=info msg="RemoveContainer for \"2812baf25d29933460ccf26a85a95313d51e50930df6966743060112511223ee\"" Sep 10 23:27:25.979797 containerd[1447]: time="2025-09-10T23:27:25.979747506Z" level=info msg="RemoveContainer for \"2812baf25d29933460ccf26a85a95313d51e50930df6966743060112511223ee\" returns successfully" Sep 10 23:27:25.980040 kubelet[2558]: I0910 23:27:25.980018 2558 scope.go:117] "RemoveContainer" containerID="5f786e2b3d6796f06cfdb3b66aafcbc28d4691509bdf1a5145ca3ea03fcf3665" Sep 10 23:27:25.981127 containerd[1447]: time="2025-09-10T23:27:25.981095969Z" level=info msg="RemoveContainer for \"5f786e2b3d6796f06cfdb3b66aafcbc28d4691509bdf1a5145ca3ea03fcf3665\"" Sep 10 23:27:25.991768 containerd[1447]: time="2025-09-10T23:27:25.991625830Z" level=info msg="RemoveContainer for \"5f786e2b3d6796f06cfdb3b66aafcbc28d4691509bdf1a5145ca3ea03fcf3665\" returns successfully" Sep 10 23:27:25.992735 kubelet[2558]: I0910 23:27:25.992460 2558 scope.go:117] "RemoveContainer" containerID="5f528f51e718b22be53ca78d09b77a6abdaa1ca565aefa62cf55904c692dcf03" Sep 10 23:27:25.992918 kubelet[2558]: E0910 23:27:25.992843 2558 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f528f51e718b22be53ca78d09b77a6abdaa1ca565aefa62cf55904c692dcf03\": not found" containerID="5f528f51e718b22be53ca78d09b77a6abdaa1ca565aefa62cf55904c692dcf03" Sep 10 23:27:25.992960 containerd[1447]: time="2025-09-10T23:27:25.992715409Z" level=error msg="ContainerStatus for \"5f528f51e718b22be53ca78d09b77a6abdaa1ca565aefa62cf55904c692dcf03\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f528f51e718b22be53ca78d09b77a6abdaa1ca565aefa62cf55904c692dcf03\": not found" Sep 10 23:27:25.997400 kubelet[2558]: I0910 23:27:25.997249 
2558 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f528f51e718b22be53ca78d09b77a6abdaa1ca565aefa62cf55904c692dcf03"} err="failed to get container status \"5f528f51e718b22be53ca78d09b77a6abdaa1ca565aefa62cf55904c692dcf03\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f528f51e718b22be53ca78d09b77a6abdaa1ca565aefa62cf55904c692dcf03\": not found" Sep 10 23:27:25.997400 kubelet[2558]: I0910 23:27:25.997377 2558 scope.go:117] "RemoveContainer" containerID="8e1fea60f65290679a1c6c84794e400e781f04b2f6d81ccbe89cd736cde640b3" Sep 10 23:27:25.997904 containerd[1447]: time="2025-09-10T23:27:25.997639934Z" level=error msg="ContainerStatus for \"8e1fea60f65290679a1c6c84794e400e781f04b2f6d81ccbe89cd736cde640b3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8e1fea60f65290679a1c6c84794e400e781f04b2f6d81ccbe89cd736cde640b3\": not found" Sep 10 23:27:25.997962 kubelet[2558]: E0910 23:27:25.997776 2558 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e1fea60f65290679a1c6c84794e400e781f04b2f6d81ccbe89cd736cde640b3\": not found" containerID="8e1fea60f65290679a1c6c84794e400e781f04b2f6d81ccbe89cd736cde640b3" Sep 10 23:27:25.997962 kubelet[2558]: I0910 23:27:25.997807 2558 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8e1fea60f65290679a1c6c84794e400e781f04b2f6d81ccbe89cd736cde640b3"} err="failed to get container status \"8e1fea60f65290679a1c6c84794e400e781f04b2f6d81ccbe89cd736cde640b3\": rpc error: code = NotFound desc = an error occurred when try to find container \"8e1fea60f65290679a1c6c84794e400e781f04b2f6d81ccbe89cd736cde640b3\": not found" Sep 10 23:27:25.997962 kubelet[2558]: I0910 23:27:25.997828 2558 scope.go:117] "RemoveContainer" 
containerID="a13a7b73a2c093f572808bc5df8e515aa338ef9048d4764001310586f61376e5" Sep 10 23:27:25.998055 containerd[1447]: time="2025-09-10T23:27:25.997995420Z" level=error msg="ContainerStatus for \"a13a7b73a2c093f572808bc5df8e515aa338ef9048d4764001310586f61376e5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a13a7b73a2c093f572808bc5df8e515aa338ef9048d4764001310586f61376e5\": not found" Sep 10 23:27:25.998328 kubelet[2558]: E0910 23:27:25.998110 2558 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a13a7b73a2c093f572808bc5df8e515aa338ef9048d4764001310586f61376e5\": not found" containerID="a13a7b73a2c093f572808bc5df8e515aa338ef9048d4764001310586f61376e5" Sep 10 23:27:25.998328 kubelet[2558]: I0910 23:27:25.998133 2558 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a13a7b73a2c093f572808bc5df8e515aa338ef9048d4764001310586f61376e5"} err="failed to get container status \"a13a7b73a2c093f572808bc5df8e515aa338ef9048d4764001310586f61376e5\": rpc error: code = NotFound desc = an error occurred when try to find container \"a13a7b73a2c093f572808bc5df8e515aa338ef9048d4764001310586f61376e5\": not found" Sep 10 23:27:25.998328 kubelet[2558]: I0910 23:27:25.998150 2558 scope.go:117] "RemoveContainer" containerID="2812baf25d29933460ccf26a85a95313d51e50930df6966743060112511223ee" Sep 10 23:27:25.998430 containerd[1447]: time="2025-09-10T23:27:25.998331426Z" level=error msg="ContainerStatus for \"2812baf25d29933460ccf26a85a95313d51e50930df6966743060112511223ee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2812baf25d29933460ccf26a85a95313d51e50930df6966743060112511223ee\": not found" Sep 10 23:27:26.007089 kubelet[2558]: E0910 23:27:26.007024 2558 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"2812baf25d29933460ccf26a85a95313d51e50930df6966743060112511223ee\": not found" containerID="2812baf25d29933460ccf26a85a95313d51e50930df6966743060112511223ee" Sep 10 23:27:26.007089 kubelet[2558]: I0910 23:27:26.007073 2558 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2812baf25d29933460ccf26a85a95313d51e50930df6966743060112511223ee"} err="failed to get container status \"2812baf25d29933460ccf26a85a95313d51e50930df6966743060112511223ee\": rpc error: code = NotFound desc = an error occurred when try to find container \"2812baf25d29933460ccf26a85a95313d51e50930df6966743060112511223ee\": not found" Sep 10 23:27:26.007089 kubelet[2558]: I0910 23:27:26.007098 2558 scope.go:117] "RemoveContainer" containerID="5f786e2b3d6796f06cfdb3b66aafcbc28d4691509bdf1a5145ca3ea03fcf3665" Sep 10 23:27:26.007432 containerd[1447]: time="2025-09-10T23:27:26.007384298Z" level=error msg="ContainerStatus for \"5f786e2b3d6796f06cfdb3b66aafcbc28d4691509bdf1a5145ca3ea03fcf3665\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f786e2b3d6796f06cfdb3b66aafcbc28d4691509bdf1a5145ca3ea03fcf3665\": not found" Sep 10 23:27:26.007595 kubelet[2558]: E0910 23:27:26.007565 2558 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f786e2b3d6796f06cfdb3b66aafcbc28d4691509bdf1a5145ca3ea03fcf3665\": not found" containerID="5f786e2b3d6796f06cfdb3b66aafcbc28d4691509bdf1a5145ca3ea03fcf3665" Sep 10 23:27:26.007595 kubelet[2558]: I0910 23:27:26.007589 2558 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f786e2b3d6796f06cfdb3b66aafcbc28d4691509bdf1a5145ca3ea03fcf3665"} err="failed to get container status \"5f786e2b3d6796f06cfdb3b66aafcbc28d4691509bdf1a5145ca3ea03fcf3665\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"5f786e2b3d6796f06cfdb3b66aafcbc28d4691509bdf1a5145ca3ea03fcf3665\": not found" Sep 10 23:27:26.007944 kubelet[2558]: I0910 23:27:26.007603 2558 scope.go:117] "RemoveContainer" containerID="fbb138fd7afbb184c419f22f93dd0f97f68af54bf5de9b5a8521c8dfadb18a92" Sep 10 23:27:26.008966 containerd[1447]: time="2025-09-10T23:27:26.008935564Z" level=info msg="RemoveContainer for \"fbb138fd7afbb184c419f22f93dd0f97f68af54bf5de9b5a8521c8dfadb18a92\"" Sep 10 23:27:26.011454 containerd[1447]: time="2025-09-10T23:27:26.011416365Z" level=info msg="RemoveContainer for \"fbb138fd7afbb184c419f22f93dd0f97f68af54bf5de9b5a8521c8dfadb18a92\" returns successfully" Sep 10 23:27:26.011649 kubelet[2558]: I0910 23:27:26.011625 2558 scope.go:117] "RemoveContainer" containerID="fbb138fd7afbb184c419f22f93dd0f97f68af54bf5de9b5a8521c8dfadb18a92" Sep 10 23:27:26.011966 containerd[1447]: time="2025-09-10T23:27:26.011843732Z" level=error msg="ContainerStatus for \"fbb138fd7afbb184c419f22f93dd0f97f68af54bf5de9b5a8521c8dfadb18a92\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fbb138fd7afbb184c419f22f93dd0f97f68af54bf5de9b5a8521c8dfadb18a92\": not found" Sep 10 23:27:26.012007 kubelet[2558]: E0910 23:27:26.011983 2558 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fbb138fd7afbb184c419f22f93dd0f97f68af54bf5de9b5a8521c8dfadb18a92\": not found" containerID="fbb138fd7afbb184c419f22f93dd0f97f68af54bf5de9b5a8521c8dfadb18a92" Sep 10 23:27:26.012032 kubelet[2558]: I0910 23:27:26.012009 2558 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fbb138fd7afbb184c419f22f93dd0f97f68af54bf5de9b5a8521c8dfadb18a92"} err="failed to get container status \"fbb138fd7afbb184c419f22f93dd0f97f68af54bf5de9b5a8521c8dfadb18a92\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"fbb138fd7afbb184c419f22f93dd0f97f68af54bf5de9b5a8521c8dfadb18a92\": not found" Sep 10 23:27:26.778083 sshd[4229]: Connection closed by 10.0.0.1 port 33384 Sep 10 23:27:26.779239 sshd-session[4226]: pam_unix(sshd:session): session closed for user core Sep 10 23:27:26.787762 kubelet[2558]: E0910 23:27:26.787705 2558 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 10 23:27:26.793742 systemd[1]: sshd@22-10.0.0.56:22-10.0.0.1:33384.service: Deactivated successfully. Sep 10 23:27:26.796286 systemd[1]: session-23.scope: Deactivated successfully. Sep 10 23:27:26.796905 systemd[1]: session-23.scope: Consumed 1.900s CPU time, 30M memory peak. Sep 10 23:27:26.797790 systemd-logind[1435]: Session 23 logged out. Waiting for processes to exit. Sep 10 23:27:26.809957 systemd[1]: Started sshd@23-10.0.0.56:22-10.0.0.1:33390.service - OpenSSH per-connection server daemon (10.0.0.1:33390). Sep 10 23:27:26.813057 systemd-logind[1435]: Removed session 23. Sep 10 23:27:26.863862 sshd[4391]: Accepted publickey for core from 10.0.0.1 port 33390 ssh2: RSA SHA256:vI2v+Kj925DhJN+VWmdLDSx5Cqw/fvuZ8IHXlsQiGm4 Sep 10 23:27:26.865373 sshd-session[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:27:26.870088 systemd-logind[1435]: New session 24 of user core. Sep 10 23:27:26.875782 systemd[1]: Started session-24.scope - Session 24 of User core. 
Sep 10 23:27:27.735092 kubelet[2558]: I0910 23:27:27.735037 2558 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cf88d9a-f1e3-497a-925f-f5fa75f070b0" path="/var/lib/kubelet/pods/0cf88d9a-f1e3-497a-925f-f5fa75f070b0/volumes" Sep 10 23:27:27.735541 kubelet[2558]: I0910 23:27:27.735501 2558 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffb4bf92-b9d9-4249-8d46-47c84c3389c4" path="/var/lib/kubelet/pods/ffb4bf92-b9d9-4249-8d46-47c84c3389c4/volumes" Sep 10 23:27:27.778037 sshd[4394]: Connection closed by 10.0.0.1 port 33390 Sep 10 23:27:27.778496 sshd-session[4391]: pam_unix(sshd:session): session closed for user core Sep 10 23:27:27.789044 systemd[1]: sshd@23-10.0.0.56:22-10.0.0.1:33390.service: Deactivated successfully. Sep 10 23:27:27.794472 systemd[1]: session-24.scope: Deactivated successfully. Sep 10 23:27:27.797298 systemd-logind[1435]: Session 24 logged out. Waiting for processes to exit. Sep 10 23:27:27.807911 systemd[1]: Started sshd@24-10.0.0.56:22-10.0.0.1:33392.service - OpenSSH per-connection server daemon (10.0.0.1:33392). Sep 10 23:27:27.809793 systemd-logind[1435]: Removed session 24. Sep 10 23:27:27.813611 kubelet[2558]: I0910 23:27:27.812813 2558 memory_manager.go:355] "RemoveStaleState removing state" podUID="0cf88d9a-f1e3-497a-925f-f5fa75f070b0" containerName="cilium-operator" Sep 10 23:27:27.813611 kubelet[2558]: I0910 23:27:27.812851 2558 memory_manager.go:355] "RemoveStaleState removing state" podUID="ffb4bf92-b9d9-4249-8d46-47c84c3389c4" containerName="cilium-agent" Sep 10 23:27:27.833345 systemd[1]: Created slice kubepods-burstable-pod339b947e_36c1_41cb_be17_ee7cecad4c35.slice - libcontainer container kubepods-burstable-pod339b947e_36c1_41cb_be17_ee7cecad4c35.slice. 
Sep 10 23:27:27.856062 sshd[4405]: Accepted publickey for core from 10.0.0.1 port 33392 ssh2: RSA SHA256:vI2v+Kj925DhJN+VWmdLDSx5Cqw/fvuZ8IHXlsQiGm4 Sep 10 23:27:27.857582 sshd-session[4405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:27:27.861553 systemd-logind[1435]: New session 25 of user core. Sep 10 23:27:27.874756 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 10 23:27:27.913133 kubelet[2558]: I0910 23:27:27.913082 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/339b947e-36c1-41cb-be17-ee7cecad4c35-etc-cni-netd\") pod \"cilium-pf8fl\" (UID: \"339b947e-36c1-41cb-be17-ee7cecad4c35\") " pod="kube-system/cilium-pf8fl" Sep 10 23:27:27.913133 kubelet[2558]: I0910 23:27:27.913126 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/339b947e-36c1-41cb-be17-ee7cecad4c35-cilium-ipsec-secrets\") pod \"cilium-pf8fl\" (UID: \"339b947e-36c1-41cb-be17-ee7cecad4c35\") " pod="kube-system/cilium-pf8fl" Sep 10 23:27:27.913133 kubelet[2558]: I0910 23:27:27.913147 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/339b947e-36c1-41cb-be17-ee7cecad4c35-bpf-maps\") pod \"cilium-pf8fl\" (UID: \"339b947e-36c1-41cb-be17-ee7cecad4c35\") " pod="kube-system/cilium-pf8fl" Sep 10 23:27:27.913333 kubelet[2558]: I0910 23:27:27.913199 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/339b947e-36c1-41cb-be17-ee7cecad4c35-hostproc\") pod \"cilium-pf8fl\" (UID: \"339b947e-36c1-41cb-be17-ee7cecad4c35\") " pod="kube-system/cilium-pf8fl" Sep 10 23:27:27.913333 kubelet[2558]: I0910 23:27:27.913236 2558 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/339b947e-36c1-41cb-be17-ee7cecad4c35-cni-path\") pod \"cilium-pf8fl\" (UID: \"339b947e-36c1-41cb-be17-ee7cecad4c35\") " pod="kube-system/cilium-pf8fl"
Sep 10 23:27:27.913333 kubelet[2558]: I0910 23:27:27.913271 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/339b947e-36c1-41cb-be17-ee7cecad4c35-lib-modules\") pod \"cilium-pf8fl\" (UID: \"339b947e-36c1-41cb-be17-ee7cecad4c35\") " pod="kube-system/cilium-pf8fl"
Sep 10 23:27:27.913333 kubelet[2558]: I0910 23:27:27.913289 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/339b947e-36c1-41cb-be17-ee7cecad4c35-host-proc-sys-net\") pod \"cilium-pf8fl\" (UID: \"339b947e-36c1-41cb-be17-ee7cecad4c35\") " pod="kube-system/cilium-pf8fl"
Sep 10 23:27:27.913333 kubelet[2558]: I0910 23:27:27.913308 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/339b947e-36c1-41cb-be17-ee7cecad4c35-cilium-run\") pod \"cilium-pf8fl\" (UID: \"339b947e-36c1-41cb-be17-ee7cecad4c35\") " pod="kube-system/cilium-pf8fl"
Sep 10 23:27:27.913456 kubelet[2558]: I0910 23:27:27.913338 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/339b947e-36c1-41cb-be17-ee7cecad4c35-cilium-config-path\") pod \"cilium-pf8fl\" (UID: \"339b947e-36c1-41cb-be17-ee7cecad4c35\") " pod="kube-system/cilium-pf8fl"
Sep 10 23:27:27.913456 kubelet[2558]: I0910 23:27:27.913357 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/339b947e-36c1-41cb-be17-ee7cecad4c35-clustermesh-secrets\") pod \"cilium-pf8fl\" (UID: \"339b947e-36c1-41cb-be17-ee7cecad4c35\") " pod="kube-system/cilium-pf8fl"
Sep 10 23:27:27.913456 kubelet[2558]: I0910 23:27:27.913373 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/339b947e-36c1-41cb-be17-ee7cecad4c35-host-proc-sys-kernel\") pod \"cilium-pf8fl\" (UID: \"339b947e-36c1-41cb-be17-ee7cecad4c35\") " pod="kube-system/cilium-pf8fl"
Sep 10 23:27:27.913456 kubelet[2558]: I0910 23:27:27.913396 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/339b947e-36c1-41cb-be17-ee7cecad4c35-cilium-cgroup\") pod \"cilium-pf8fl\" (UID: \"339b947e-36c1-41cb-be17-ee7cecad4c35\") " pod="kube-system/cilium-pf8fl"
Sep 10 23:27:27.913456 kubelet[2558]: I0910 23:27:27.913436 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/339b947e-36c1-41cb-be17-ee7cecad4c35-hubble-tls\") pod \"cilium-pf8fl\" (UID: \"339b947e-36c1-41cb-be17-ee7cecad4c35\") " pod="kube-system/cilium-pf8fl"
Sep 10 23:27:27.913596 kubelet[2558]: I0910 23:27:27.913452 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svx6b\" (UniqueName: \"kubernetes.io/projected/339b947e-36c1-41cb-be17-ee7cecad4c35-kube-api-access-svx6b\") pod \"cilium-pf8fl\" (UID: \"339b947e-36c1-41cb-be17-ee7cecad4c35\") " pod="kube-system/cilium-pf8fl"
Sep 10 23:27:27.913596 kubelet[2558]: I0910 23:27:27.913488 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/339b947e-36c1-41cb-be17-ee7cecad4c35-xtables-lock\") pod \"cilium-pf8fl\" (UID: \"339b947e-36c1-41cb-be17-ee7cecad4c35\") " pod="kube-system/cilium-pf8fl"
Sep 10 23:27:27.930164 sshd[4408]: Connection closed by 10.0.0.1 port 33392
Sep 10 23:27:27.930409 sshd-session[4405]: pam_unix(sshd:session): session closed for user core
Sep 10 23:27:27.941681 systemd[1]: sshd@24-10.0.0.56:22-10.0.0.1:33392.service: Deactivated successfully.
Sep 10 23:27:27.944438 systemd[1]: session-25.scope: Deactivated successfully.
Sep 10 23:27:27.949824 systemd-logind[1435]: Session 25 logged out. Waiting for processes to exit.
Sep 10 23:27:27.962120 systemd[1]: Started sshd@25-10.0.0.56:22-10.0.0.1:33404.service - OpenSSH per-connection server daemon (10.0.0.1:33404).
Sep 10 23:27:27.963766 systemd-logind[1435]: Removed session 25.
Sep 10 23:27:28.004178 sshd[4414]: Accepted publickey for core from 10.0.0.1 port 33404 ssh2: RSA SHA256:vI2v+Kj925DhJN+VWmdLDSx5Cqw/fvuZ8IHXlsQiGm4
Sep 10 23:27:28.005970 sshd-session[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:27:28.011021 systemd-logind[1435]: New session 26 of user core.
Sep 10 23:27:28.019139 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 10 23:27:28.139483 containerd[1447]: time="2025-09-10T23:27:28.138795063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pf8fl,Uid:339b947e-36c1-41cb-be17-ee7cecad4c35,Namespace:kube-system,Attempt:0,}"
Sep 10 23:27:28.165328 containerd[1447]: time="2025-09-10T23:27:28.165104913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 23:27:28.165328 containerd[1447]: time="2025-09-10T23:27:28.165160874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 23:27:28.165328 containerd[1447]: time="2025-09-10T23:27:28.165172154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 23:27:28.165328 containerd[1447]: time="2025-09-10T23:27:28.165251795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 23:27:28.180751 systemd[1]: Started cri-containerd-0e3b89278e7881890111f6fa0760321c79fbc161698e42a998cf94b71569d90a.scope - libcontainer container 0e3b89278e7881890111f6fa0760321c79fbc161698e42a998cf94b71569d90a.
Sep 10 23:27:28.202817 containerd[1447]: time="2025-09-10T23:27:28.202769340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pf8fl,Uid:339b947e-36c1-41cb-be17-ee7cecad4c35,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e3b89278e7881890111f6fa0760321c79fbc161698e42a998cf94b71569d90a\""
Sep 10 23:27:28.210635 containerd[1447]: time="2025-09-10T23:27:28.210575501Z" level=info msg="CreateContainer within sandbox \"0e3b89278e7881890111f6fa0760321c79fbc161698e42a998cf94b71569d90a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 10 23:27:28.226445 containerd[1447]: time="2025-09-10T23:27:28.226241665Z" level=info msg="CreateContainer within sandbox \"0e3b89278e7881890111f6fa0760321c79fbc161698e42a998cf94b71569d90a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"092a3cc8f86615ccaee8ad2ef1b8e5938d338b28b5a476e08dedb2afa44f36fb\""
Sep 10 23:27:28.227425 containerd[1447]: time="2025-09-10T23:27:28.227388963Z" level=info msg="StartContainer for \"092a3cc8f86615ccaee8ad2ef1b8e5938d338b28b5a476e08dedb2afa44f36fb\""
Sep 10 23:27:28.268779 systemd[1]: Started cri-containerd-092a3cc8f86615ccaee8ad2ef1b8e5938d338b28b5a476e08dedb2afa44f36fb.scope - libcontainer container 092a3cc8f86615ccaee8ad2ef1b8e5938d338b28b5a476e08dedb2afa44f36fb.
Sep 10 23:27:28.294097 containerd[1447]: time="2025-09-10T23:27:28.294050121Z" level=info msg="StartContainer for \"092a3cc8f86615ccaee8ad2ef1b8e5938d338b28b5a476e08dedb2afa44f36fb\" returns successfully"
Sep 10 23:27:28.301489 systemd[1]: cri-containerd-092a3cc8f86615ccaee8ad2ef1b8e5938d338b28b5a476e08dedb2afa44f36fb.scope: Deactivated successfully.
Sep 10 23:27:28.335491 containerd[1447]: time="2025-09-10T23:27:28.335307763Z" level=info msg="shim disconnected" id=092a3cc8f86615ccaee8ad2ef1b8e5938d338b28b5a476e08dedb2afa44f36fb namespace=k8s.io
Sep 10 23:27:28.335491 containerd[1447]: time="2025-09-10T23:27:28.335369564Z" level=warning msg="cleaning up after shim disconnected" id=092a3cc8f86615ccaee8ad2ef1b8e5938d338b28b5a476e08dedb2afa44f36fb namespace=k8s.io
Sep 10 23:27:28.335491 containerd[1447]: time="2025-09-10T23:27:28.335377564Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 23:27:28.955006 containerd[1447]: time="2025-09-10T23:27:28.954822929Z" level=info msg="CreateContainer within sandbox \"0e3b89278e7881890111f6fa0760321c79fbc161698e42a998cf94b71569d90a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 10 23:27:28.980057 containerd[1447]: time="2025-09-10T23:27:28.979909080Z" level=info msg="CreateContainer within sandbox \"0e3b89278e7881890111f6fa0760321c79fbc161698e42a998cf94b71569d90a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"67e1f8d5bd3dee4543677658cb289c4bad8c11fcf29805412f74dc285d46f815\""
Sep 10 23:27:28.980903 containerd[1447]: time="2025-09-10T23:27:28.980853014Z" level=info msg="StartContainer for \"67e1f8d5bd3dee4543677658cb289c4bad8c11fcf29805412f74dc285d46f815\""
Sep 10 23:27:29.013802 systemd[1]: Started cri-containerd-67e1f8d5bd3dee4543677658cb289c4bad8c11fcf29805412f74dc285d46f815.scope - libcontainer container 67e1f8d5bd3dee4543677658cb289c4bad8c11fcf29805412f74dc285d46f815.
Sep 10 23:27:29.057372 systemd[1]: cri-containerd-67e1f8d5bd3dee4543677658cb289c4bad8c11fcf29805412f74dc285d46f815.scope: Deactivated successfully.
Sep 10 23:27:29.060804 containerd[1447]: time="2025-09-10T23:27:29.060727548Z" level=info msg="StartContainer for \"67e1f8d5bd3dee4543677658cb289c4bad8c11fcf29805412f74dc285d46f815\" returns successfully"
Sep 10 23:27:29.078514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67e1f8d5bd3dee4543677658cb289c4bad8c11fcf29805412f74dc285d46f815-rootfs.mount: Deactivated successfully.
Sep 10 23:27:29.125636 containerd[1447]: time="2025-09-10T23:27:29.125563285Z" level=info msg="shim disconnected" id=67e1f8d5bd3dee4543677658cb289c4bad8c11fcf29805412f74dc285d46f815 namespace=k8s.io
Sep 10 23:27:29.125636 containerd[1447]: time="2025-09-10T23:27:29.125620605Z" level=warning msg="cleaning up after shim disconnected" id=67e1f8d5bd3dee4543677658cb289c4bad8c11fcf29805412f74dc285d46f815 namespace=k8s.io
Sep 10 23:27:29.125636 containerd[1447]: time="2025-09-10T23:27:29.125628166Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 23:27:29.959461 containerd[1447]: time="2025-09-10T23:27:29.958919238Z" level=info msg="CreateContainer within sandbox \"0e3b89278e7881890111f6fa0760321c79fbc161698e42a998cf94b71569d90a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 10 23:27:29.981745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2658481453.mount: Deactivated successfully.
Sep 10 23:27:29.991826 containerd[1447]: time="2025-09-10T23:27:29.991703892Z" level=info msg="CreateContainer within sandbox \"0e3b89278e7881890111f6fa0760321c79fbc161698e42a998cf94b71569d90a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fb02fd084310a44cd0df99a13f95606fe40f566f83924d438f35a1ccc771407f\""
Sep 10 23:27:29.993967 containerd[1447]: time="2025-09-10T23:27:29.992305781Z" level=info msg="StartContainer for \"fb02fd084310a44cd0df99a13f95606fe40f566f83924d438f35a1ccc771407f\""
Sep 10 23:27:30.044772 systemd[1]: Started cri-containerd-fb02fd084310a44cd0df99a13f95606fe40f566f83924d438f35a1ccc771407f.scope - libcontainer container fb02fd084310a44cd0df99a13f95606fe40f566f83924d438f35a1ccc771407f.
Sep 10 23:27:30.100109 systemd[1]: cri-containerd-fb02fd084310a44cd0df99a13f95606fe40f566f83924d438f35a1ccc771407f.scope: Deactivated successfully.
Sep 10 23:27:30.103363 containerd[1447]: time="2025-09-10T23:27:30.103328243Z" level=info msg="StartContainer for \"fb02fd084310a44cd0df99a13f95606fe40f566f83924d438f35a1ccc771407f\" returns successfully"
Sep 10 23:27:30.121171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb02fd084310a44cd0df99a13f95606fe40f566f83924d438f35a1ccc771407f-rootfs.mount: Deactivated successfully.
Sep 10 23:27:30.125627 containerd[1447]: time="2025-09-10T23:27:30.125567087Z" level=info msg="shim disconnected" id=fb02fd084310a44cd0df99a13f95606fe40f566f83924d438f35a1ccc771407f namespace=k8s.io
Sep 10 23:27:30.125627 containerd[1447]: time="2025-09-10T23:27:30.125623288Z" level=warning msg="cleaning up after shim disconnected" id=fb02fd084310a44cd0df99a13f95606fe40f566f83924d438f35a1ccc771407f namespace=k8s.io
Sep 10 23:27:30.125627 containerd[1447]: time="2025-09-10T23:27:30.125631928Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 23:27:30.961162 containerd[1447]: time="2025-09-10T23:27:30.961091903Z" level=info msg="CreateContainer within sandbox \"0e3b89278e7881890111f6fa0760321c79fbc161698e42a998cf94b71569d90a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 10 23:27:30.989991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount406884257.mount: Deactivated successfully.
Sep 10 23:27:30.999628 containerd[1447]: time="2025-09-10T23:27:30.999584024Z" level=info msg="CreateContainer within sandbox \"0e3b89278e7881890111f6fa0760321c79fbc161698e42a998cf94b71569d90a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2256868f2564fea7008d18db370df7a0a50e59d2d56e1937ce54e91a5e3007cf\""
Sep 10 23:27:31.001235 containerd[1447]: time="2025-09-10T23:27:31.000228153Z" level=info msg="StartContainer for \"2256868f2564fea7008d18db370df7a0a50e59d2d56e1937ce54e91a5e3007cf\""
Sep 10 23:27:31.019702 systemd[1]: Started cri-containerd-2256868f2564fea7008d18db370df7a0a50e59d2d56e1937ce54e91a5e3007cf.scope - libcontainer container 2256868f2564fea7008d18db370df7a0a50e59d2d56e1937ce54e91a5e3007cf.
Sep 10 23:27:31.042028 systemd[1]: cri-containerd-2256868f2564fea7008d18db370df7a0a50e59d2d56e1937ce54e91a5e3007cf.scope: Deactivated successfully.
Sep 10 23:27:31.046698 containerd[1447]: time="2025-09-10T23:27:31.046619648Z" level=info msg="StartContainer for \"2256868f2564fea7008d18db370df7a0a50e59d2d56e1937ce54e91a5e3007cf\" returns successfully"
Sep 10 23:27:31.066445 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2256868f2564fea7008d18db370df7a0a50e59d2d56e1937ce54e91a5e3007cf-rootfs.mount: Deactivated successfully.
Sep 10 23:27:31.073595 containerd[1447]: time="2025-09-10T23:27:31.073510827Z" level=info msg="shim disconnected" id=2256868f2564fea7008d18db370df7a0a50e59d2d56e1937ce54e91a5e3007cf namespace=k8s.io
Sep 10 23:27:31.073595 containerd[1447]: time="2025-09-10T23:27:31.073587828Z" level=warning msg="cleaning up after shim disconnected" id=2256868f2564fea7008d18db370df7a0a50e59d2d56e1937ce54e91a5e3007cf namespace=k8s.io
Sep 10 23:27:31.073595 containerd[1447]: time="2025-09-10T23:27:31.073597028Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 23:27:31.789102 kubelet[2558]: E0910 23:27:31.789063 2558 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 10 23:27:31.971260 containerd[1447]: time="2025-09-10T23:27:31.971116080Z" level=info msg="CreateContainer within sandbox \"0e3b89278e7881890111f6fa0760321c79fbc161698e42a998cf94b71569d90a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 10 23:27:31.987678 containerd[1447]: time="2025-09-10T23:27:31.987629193Z" level=info msg="CreateContainer within sandbox \"0e3b89278e7881890111f6fa0760321c79fbc161698e42a998cf94b71569d90a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c429fca3cb9e90615bcb8a079e17078eceb06bbc8ec5c3e2b82fefc6b4d998df\""
Sep 10 23:27:31.988109 containerd[1447]: time="2025-09-10T23:27:31.988087640Z" level=info msg="StartContainer for \"c429fca3cb9e90615bcb8a079e17078eceb06bbc8ec5c3e2b82fefc6b4d998df\""
Sep 10 23:27:32.024687 systemd[1]: Started cri-containerd-c429fca3cb9e90615bcb8a079e17078eceb06bbc8ec5c3e2b82fefc6b4d998df.scope - libcontainer container c429fca3cb9e90615bcb8a079e17078eceb06bbc8ec5c3e2b82fefc6b4d998df.
Sep 10 23:27:32.052258 containerd[1447]: time="2025-09-10T23:27:32.052146879Z" level=info msg="StartContainer for \"c429fca3cb9e90615bcb8a079e17078eceb06bbc8ec5c3e2b82fefc6b4d998df\" returns successfully"
Sep 10 23:27:32.308628 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 10 23:27:32.987686 kubelet[2558]: I0910 23:27:32.987546 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pf8fl" podStartSLOduration=5.9875132749999995 podStartE2EDuration="5.987513275s" podCreationTimestamp="2025-09-10 23:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:27:32.987273511 +0000 UTC m=+81.344919346" watchObservedRunningTime="2025-09-10 23:27:32.987513275 +0000 UTC m=+81.345159110"
Sep 10 23:27:33.738935 kubelet[2558]: I0910 23:27:33.738170 2558 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-10T23:27:33Z","lastTransitionTime":"2025-09-10T23:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 10 23:27:35.161142 systemd-networkd[1368]: lxc_health: Link UP
Sep 10 23:27:35.161417 systemd-networkd[1368]: lxc_health: Gained carrier
Sep 10 23:27:36.207045 systemd-networkd[1368]: lxc_health: Gained IPv6LL
Sep 10 23:27:40.828783 sshd[4421]: Connection closed by 10.0.0.1 port 33404
Sep 10 23:27:40.829336 sshd-session[4414]: pam_unix(sshd:session): session closed for user core
Sep 10 23:27:40.832749 systemd[1]: sshd@25-10.0.0.56:22-10.0.0.1:33404.service: Deactivated successfully.
Sep 10 23:27:40.834679 systemd[1]: session-26.scope: Deactivated successfully.
Sep 10 23:27:40.835249 systemd-logind[1435]: Session 26 logged out. Waiting for processes to exit.
Sep 10 23:27:40.836102 systemd-logind[1435]: Removed session 26.