Sep 8 23:46:19.842580 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 8 23:46:19.842630 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Mon Sep 8 22:15:05 -00 2025
Sep 8 23:46:19.842640 kernel: KASLR enabled
Sep 8 23:46:19.842646 kernel: efi: EFI v2.7 by EDK II
Sep 8 23:46:19.842659 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Sep 8 23:46:19.842665 kernel: random: crng init done
Sep 8 23:46:19.842672 kernel: secureboot: Secure boot disabled
Sep 8 23:46:19.842678 kernel: ACPI: Early table checksum verification disabled
Sep 8 23:46:19.842684 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Sep 8 23:46:19.842691 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 8 23:46:19.842698 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:46:19.842703 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:46:19.842709 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:46:19.842715 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:46:19.842723 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:46:19.842730 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:46:19.842737 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:46:19.842743 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:46:19.842750 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:46:19.842756 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 8 23:46:19.842762 kernel: NUMA: Failed to initialise from firmware
Sep 8 23:46:19.842782 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 8 23:46:19.842801 kernel: NUMA: NODE_DATA [mem 0xdc959800-0xdc95efff]
Sep 8 23:46:19.842808 kernel: Zone ranges:
Sep 8 23:46:19.842814 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Sep 8 23:46:19.842823 kernel:   DMA32    empty
Sep 8 23:46:19.842829 kernel:   Normal   empty
Sep 8 23:46:19.842835 kernel: Movable zone start for each node
Sep 8 23:46:19.842841 kernel: Early memory node ranges
Sep 8 23:46:19.842847 kernel:   node   0: [mem 0x0000000040000000-0x00000000d967ffff]
Sep 8 23:46:19.842854 kernel:   node   0: [mem 0x00000000d9680000-0x00000000d968ffff]
Sep 8 23:46:19.842863 kernel:   node   0: [mem 0x00000000d9690000-0x00000000d976ffff]
Sep 8 23:46:19.842869 kernel:   node   0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 8 23:46:19.842875 kernel:   node   0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 8 23:46:19.842881 kernel:   node   0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 8 23:46:19.842888 kernel:   node   0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 8 23:46:19.842894 kernel:   node   0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 8 23:46:19.842901 kernel:   node   0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 8 23:46:19.842908 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 8 23:46:19.842914 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 8 23:46:19.842923 kernel: psci: probing for conduit method from ACPI.
Sep 8 23:46:19.842930 kernel: psci: PSCIv1.1 detected in firmware.
Sep 8 23:46:19.842937 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 8 23:46:19.842945 kernel: psci: Trusted OS migration not required
Sep 8 23:46:19.842951 kernel: psci: SMC Calling Convention v1.1
Sep 8 23:46:19.842958 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 8 23:46:19.842965 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 8 23:46:19.842971 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 8 23:46:19.842978 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 8 23:46:19.842984 kernel: Detected PIPT I-cache on CPU0
Sep 8 23:46:19.842990 kernel: CPU features: detected: GIC system register CPU interface
Sep 8 23:46:19.842997 kernel: CPU features: detected: Hardware dirty bit management
Sep 8 23:46:19.843003 kernel: CPU features: detected: Spectre-v4
Sep 8 23:46:19.843011 kernel: CPU features: detected: Spectre-BHB
Sep 8 23:46:19.843019 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 8 23:46:19.843025 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 8 23:46:19.843032 kernel: CPU features: detected: ARM erratum 1418040
Sep 8 23:46:19.843038 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 8 23:46:19.843045 kernel: alternatives: applying boot alternatives
Sep 8 23:46:19.843052 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f8ee138b57942e58b3c347ed7ca25a0f850922d10215402a17b15b614c872007
Sep 8 23:46:19.843059 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 8 23:46:19.843066 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 8 23:46:19.843073 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 8 23:46:19.843079 kernel: Fallback order for Node 0: 0
Sep 8 23:46:19.843097 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 8 23:46:19.843103 kernel: Policy zone: DMA
Sep 8 23:46:19.843110 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 8 23:46:19.843117 kernel: software IO TLB: area num 4.
Sep 8 23:46:19.843123 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 8 23:46:19.843130 kernel: Memory: 2387416K/2572288K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38400K init, 897K bss, 184872K reserved, 0K cma-reserved)
Sep 8 23:46:19.843137 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 8 23:46:19.843144 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 8 23:46:19.843151 kernel: rcu: RCU event tracing is enabled.
Sep 8 23:46:19.843158 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 8 23:46:19.843165 kernel: Trampoline variant of Tasks RCU enabled.
Sep 8 23:46:19.843172 kernel: Tracing variant of Tasks RCU enabled.
Sep 8 23:46:19.843180 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 8 23:46:19.843186 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 8 23:46:19.843193 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 8 23:46:19.843199 kernel: GICv3: 256 SPIs implemented
Sep 8 23:46:19.843206 kernel: GICv3: 0 Extended SPIs implemented
Sep 8 23:46:19.843212 kernel: Root IRQ handler: gic_handle_irq
Sep 8 23:46:19.843219 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 8 23:46:19.843225 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 8 23:46:19.843232 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 8 23:46:19.843238 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 8 23:46:19.843245 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Sep 8 23:46:19.843253 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 8 23:46:19.843260 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 8 23:46:19.843266 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 8 23:46:19.843273 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 8 23:46:19.843279 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 8 23:46:19.843286 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 8 23:46:19.843293 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 8 23:46:19.843299 kernel: arm-pv: using stolen time PV
Sep 8 23:46:19.843306 kernel: Console: colour dummy device 80x25
Sep 8 23:46:19.843313 kernel: ACPI: Core revision 20230628
Sep 8 23:46:19.843320 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 8 23:46:19.843328 kernel: pid_max: default: 32768 minimum: 301
Sep 8 23:46:19.843335 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 8 23:46:19.843342 kernel: landlock: Up and running.
Sep 8 23:46:19.843348 kernel: SELinux: Initializing.
Sep 8 23:46:19.843357 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 8 23:46:19.843364 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 8 23:46:19.843373 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 8 23:46:19.843382 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 8 23:46:19.843391 kernel: rcu: Hierarchical SRCU implementation.
Sep 8 23:46:19.843400 kernel: rcu: Max phase no-delay instances is 400.
Sep 8 23:46:19.843407 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 8 23:46:19.843413 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 8 23:46:19.843420 kernel: Remapping and enabling EFI services.
Sep 8 23:46:19.843427 kernel: smp: Bringing up secondary CPUs ...
Sep 8 23:46:19.843433 kernel: Detected PIPT I-cache on CPU1
Sep 8 23:46:19.843440 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 8 23:46:19.843447 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 8 23:46:19.843458 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 8 23:46:19.843466 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 8 23:46:19.843473 kernel: Detected PIPT I-cache on CPU2
Sep 8 23:46:19.843495 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 8 23:46:19.843504 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 8 23:46:19.843511 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 8 23:46:19.843518 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 8 23:46:19.843525 kernel: Detected PIPT I-cache on CPU3
Sep 8 23:46:19.843532 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 8 23:46:19.843539 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 8 23:46:19.843556 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 8 23:46:19.843566 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 8 23:46:19.843573 kernel: smp: Brought up 1 node, 4 CPUs
Sep 8 23:46:19.843580 kernel: SMP: Total of 4 processors activated.
Sep 8 23:46:19.843605 kernel: CPU features: detected: 32-bit EL0 Support
Sep 8 23:46:19.843614 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 8 23:46:19.843621 kernel: CPU features: detected: Common not Private translations
Sep 8 23:46:19.843629 kernel: CPU features: detected: CRC32 instructions
Sep 8 23:46:19.843638 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 8 23:46:19.843646 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 8 23:46:19.843659 kernel: CPU features: detected: LSE atomic instructions
Sep 8 23:46:19.843666 kernel: CPU features: detected: Privileged Access Never
Sep 8 23:46:19.843673 kernel: CPU features: detected: RAS Extension Support
Sep 8 23:46:19.843680 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 8 23:46:19.843687 kernel: CPU: All CPU(s) started at EL1
Sep 8 23:46:19.843694 kernel: alternatives: applying system-wide alternatives
Sep 8 23:46:19.843701 kernel: devtmpfs: initialized
Sep 8 23:46:19.843709 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 8 23:46:19.843717 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 8 23:46:19.843724 kernel: pinctrl core: initialized pinctrl subsystem
Sep 8 23:46:19.843731 kernel: SMBIOS 3.0.0 present.
Sep 8 23:46:19.843739 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 8 23:46:19.843746 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 8 23:46:19.843753 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 8 23:46:19.843760 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 8 23:46:19.843767 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 8 23:46:19.843776 kernel: audit: initializing netlink subsys (disabled)
Sep 8 23:46:19.843783 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
Sep 8 23:46:19.843790 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 8 23:46:19.843797 kernel: cpuidle: using governor menu
Sep 8 23:46:19.843805 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 8 23:46:19.843812 kernel: ASID allocator initialised with 32768 entries
Sep 8 23:46:19.843818 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 8 23:46:19.843825 kernel: Serial: AMBA PL011 UART driver
Sep 8 23:46:19.843836 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 8 23:46:19.843844 kernel: Modules: 0 pages in range for non-PLT usage
Sep 8 23:46:19.843852 kernel: Modules: 509248 pages in range for PLT usage
Sep 8 23:46:19.843859 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 8 23:46:19.843869 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 8 23:46:19.843877 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 8 23:46:19.843885 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 8 23:46:19.843901 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 8 23:46:19.843908 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 8 23:46:19.843921 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 8 23:46:19.843934 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 8 23:46:19.843950 kernel: ACPI: Added _OSI(Module Device)
Sep 8 23:46:19.843958 kernel: ACPI: Added _OSI(Processor Device)
Sep 8 23:46:19.843972 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 8 23:46:19.843979 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 8 23:46:19.843987 kernel: ACPI: Interpreter enabled
Sep 8 23:46:19.843994 kernel: ACPI: Using GIC for interrupt routing
Sep 8 23:46:19.844002 kernel: ACPI: MCFG table detected, 1 entries
Sep 8 23:46:19.844009 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 8 23:46:19.844016 kernel: printk: console [ttyAMA0] enabled
Sep 8 23:46:19.844025 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 8 23:46:19.844171 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 8 23:46:19.844271 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 8 23:46:19.844347 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 8 23:46:19.844411 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 8 23:46:19.844473 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 8 23:46:19.844482 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 8 23:46:19.844493 kernel: PCI host bridge to bus 0000:00
Sep 8 23:46:19.844567 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 8 23:46:19.844638 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 8 23:46:19.844708 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 8 23:46:19.844766 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 8 23:46:19.844879 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 8 23:46:19.844963 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 8 23:46:19.845034 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 8 23:46:19.845098 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 8 23:46:19.845171 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 8 23:46:19.845236 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 8 23:46:19.845305 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 8 23:46:19.845372 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 8 23:46:19.845433 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 8 23:46:19.845491 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 8 23:46:19.845549 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 8 23:46:19.845558 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 8 23:46:19.845565 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 8 23:46:19.845572 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 8 23:46:19.845579 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 8 23:46:19.845605 kernel: iommu: Default domain type: Translated
Sep 8 23:46:19.845615 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 8 23:46:19.845622 kernel: efivars: Registered efivars operations
Sep 8 23:46:19.845629 kernel: vgaarb: loaded
Sep 8 23:46:19.845636 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 8 23:46:19.845643 kernel: VFS: Disk quotas dquot_6.6.0
Sep 8 23:46:19.845657 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 8 23:46:19.845665 kernel: pnp: PnP ACPI init
Sep 8 23:46:19.845743 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 8 23:46:19.845756 kernel: pnp: PnP ACPI: found 1 devices
Sep 8 23:46:19.845763 kernel: NET: Registered PF_INET protocol family
Sep 8 23:46:19.845771 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 8 23:46:19.845778 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 8 23:46:19.845785 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 8 23:46:19.845793 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 8 23:46:19.845800 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 8 23:46:19.845807 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 8 23:46:19.845815 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 8 23:46:19.845824 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 8 23:46:19.845831 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 8 23:46:19.845838 kernel: PCI: CLS 0 bytes, default 64
Sep 8 23:46:19.845845 kernel: kvm [1]: HYP mode not available
Sep 8 23:46:19.845852 kernel: Initialise system trusted keyrings
Sep 8 23:46:19.845859 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 8 23:46:19.845866 kernel: Key type asymmetric registered
Sep 8 23:46:19.845873 kernel: Asymmetric key parser 'x509' registered
Sep 8 23:46:19.845880 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 8 23:46:19.845889 kernel: io scheduler mq-deadline registered
Sep 8 23:46:19.845896 kernel: io scheduler kyber registered
Sep 8 23:46:19.845903 kernel: io scheduler bfq registered
Sep 8 23:46:19.845911 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 8 23:46:19.845918 kernel: ACPI: button: Power Button [PWRB]
Sep 8 23:46:19.845926 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 8 23:46:19.845994 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 8 23:46:19.846004 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 8 23:46:19.846011 kernel: thunder_xcv, ver 1.0
Sep 8 23:46:19.846018 kernel: thunder_bgx, ver 1.0
Sep 8 23:46:19.846027 kernel: nicpf, ver 1.0
Sep 8 23:46:19.846034 kernel: nicvf, ver 1.0
Sep 8 23:46:19.846108 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 8 23:46:19.846170 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-08T23:46:19 UTC (1757375179)
Sep 8 23:46:19.846180 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 8 23:46:19.846187 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 8 23:46:19.846195 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 8 23:46:19.846204 kernel: watchdog: Hard watchdog permanently disabled
Sep 8 23:46:19.846211 kernel: NET: Registered PF_INET6 protocol family
Sep 8 23:46:19.846218 kernel: Segment Routing with IPv6
Sep 8 23:46:19.846225 kernel: In-situ OAM (IOAM) with IPv6
Sep 8 23:46:19.846232 kernel: NET: Registered PF_PACKET protocol family
Sep 8 23:46:19.846239 kernel: Key type dns_resolver registered
Sep 8 23:46:19.846246 kernel: registered taskstats version 1
Sep 8 23:46:19.846253 kernel: Loading compiled-in X.509 certificates
Sep 8 23:46:19.846260 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: 98feb45e0c7a714eab78dfe8a165eb91758e42e9'
Sep 8 23:46:19.846268 kernel: Key type .fscrypt registered
Sep 8 23:46:19.846277 kernel: Key type fscrypt-provisioning registered
Sep 8 23:46:19.846284 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 8 23:46:19.846291 kernel: ima: Allocated hash algorithm: sha1
Sep 8 23:46:19.846299 kernel: ima: No architecture policies found
Sep 8 23:46:19.846306 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 8 23:46:19.846313 kernel: clk: Disabling unused clocks
Sep 8 23:46:19.846320 kernel: Freeing unused kernel memory: 38400K
Sep 8 23:46:19.846327 kernel: Run /init as init process
Sep 8 23:46:19.846336 kernel:   with arguments:
Sep 8 23:46:19.846343 kernel:     /init
Sep 8 23:46:19.846350 kernel:   with environment:
Sep 8 23:46:19.846357 kernel:     HOME=/
Sep 8 23:46:19.846364 kernel:     TERM=linux
Sep 8 23:46:19.846370 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 8 23:46:19.846389 systemd[1]: Successfully made /usr/ read-only.
Sep 8 23:46:19.846400 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 8 23:46:19.846410 systemd[1]: Detected virtualization kvm.
Sep 8 23:46:19.846418 systemd[1]: Detected architecture arm64.
Sep 8 23:46:19.846426 systemd[1]: Running in initrd.
Sep 8 23:46:19.846442 systemd[1]: No hostname configured, using default hostname.
Sep 8 23:46:19.846451 systemd[1]: Hostname set to .
Sep 8 23:46:19.846459 systemd[1]: Initializing machine ID from VM UUID.
Sep 8 23:46:19.846467 systemd[1]: Queued start job for default target initrd.target.
Sep 8 23:46:19.846474 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 8 23:46:19.846484 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 8 23:46:19.846492 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 8 23:46:19.846500 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 8 23:46:19.846508 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 8 23:46:19.846516 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 8 23:46:19.846525 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 8 23:46:19.846534 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 8 23:46:19.846543 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 8 23:46:19.846551 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 8 23:46:19.846558 systemd[1]: Reached target paths.target - Path Units.
Sep 8 23:46:19.846566 systemd[1]: Reached target slices.target - Slice Units.
Sep 8 23:46:19.846574 systemd[1]: Reached target swap.target - Swaps.
Sep 8 23:46:19.846582 systemd[1]: Reached target timers.target - Timer Units.
Sep 8 23:46:19.846598 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 8 23:46:19.846606 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 8 23:46:19.846618 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 8 23:46:19.846629 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 8 23:46:19.846637 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 8 23:46:19.846645 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 8 23:46:19.846659 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 8 23:46:19.846743 systemd[1]: Reached target sockets.target - Socket Units.
Sep 8 23:46:19.846753 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 8 23:46:19.846761 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 8 23:46:19.846769 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 8 23:46:19.846780 systemd[1]: Starting systemd-fsck-usr.service...
Sep 8 23:46:19.846788 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 8 23:46:19.846796 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 8 23:46:19.846803 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 8 23:46:19.846811 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 8 23:46:19.846821 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 8 23:46:19.846831 systemd[1]: Finished systemd-fsck-usr.service.
Sep 8 23:46:19.846839 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 8 23:46:19.846871 systemd-journald[238]: Collecting audit messages is disabled.
Sep 8 23:46:19.846892 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 8 23:46:19.846902 systemd-journald[238]: Journal started
Sep 8 23:46:19.846920 systemd-journald[238]: Runtime Journal (/run/log/journal/8ed82f32b0114b28b58d87d3ce92febb) is 5.9M, max 47.3M, 41.4M free.
Sep 8 23:46:19.846960 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 8 23:46:19.841838 systemd-modules-load[240]: Inserted module 'overlay'
Sep 8 23:46:19.851626 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 8 23:46:19.852103 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 8 23:46:19.858610 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 8 23:46:19.858659 kernel: Bridge firewalling registered
Sep 8 23:46:19.855980 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 8 23:46:19.858822 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 8 23:46:19.859710 systemd-modules-load[240]: Inserted module 'br_netfilter'
Sep 8 23:46:19.862040 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 8 23:46:19.866546 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 8 23:46:19.870082 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 8 23:46:19.874665 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 8 23:46:19.876301 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 8 23:46:19.879780 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 8 23:46:19.881535 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 8 23:46:19.884297 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 8 23:46:19.893955 dracut-cmdline[275]: dracut-dracut-053
Sep 8 23:46:19.896535 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f8ee138b57942e58b3c347ed7ca25a0f850922d10215402a17b15b614c872007
Sep 8 23:46:19.912508 systemd-resolved[278]: Positive Trust Anchors:
Sep 8 23:46:19.912526 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 8 23:46:19.912556 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 8 23:46:19.917420 systemd-resolved[278]: Defaulting to hostname 'linux'.
Sep 8 23:46:19.918563 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 8 23:46:19.922052 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 8 23:46:19.973630 kernel: SCSI subsystem initialized
Sep 8 23:46:19.981705 kernel: Loading iSCSI transport class v2.0-870.
Sep 8 23:46:19.989660 kernel: iscsi: registered transport (tcp)
Sep 8 23:46:20.003626 kernel: iscsi: registered transport (qla4xxx)
Sep 8 23:46:20.003695 kernel: QLogic iSCSI HBA Driver
Sep 8 23:46:20.049064 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 8 23:46:20.059795 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 8 23:46:20.074718 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 8 23:46:20.075668 kernel: device-mapper: uevent: version 1.0.3
Sep 8 23:46:20.075681 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 8 23:46:20.120616 kernel: raid6: neonx8   gen() 15808 MB/s
Sep 8 23:46:20.137602 kernel: raid6: neonx4   gen() 15804 MB/s
Sep 8 23:46:20.154604 kernel: raid6: neonx2   gen() 13208 MB/s
Sep 8 23:46:20.171606 kernel: raid6: neonx1   gen() 10407 MB/s
Sep 8 23:46:20.188630 kernel: raid6: int64x8  gen()  6780 MB/s
Sep 8 23:46:20.205617 kernel: raid6: int64x4  gen()  7327 MB/s
Sep 8 23:46:20.222636 kernel: raid6: int64x2  gen()  6145 MB/s
Sep 8 23:46:20.239636 kernel: raid6: int64x1  gen()  5053 MB/s
Sep 8 23:46:20.239710 kernel: raid6: using algorithm neonx8 gen() 15808 MB/s
Sep 8 23:46:20.256614 kernel: raid6: .... xor() 11868 MB/s, rmw enabled
Sep 8 23:46:20.256686 kernel: raid6: using neon recovery algorithm
Sep 8 23:46:20.261623 kernel: xor: measuring software checksum speed
Sep 8 23:46:20.261671 kernel:    8regs           : 21641 MB/sec
Sep 8 23:46:20.262619 kernel:    32regs          : 19464 MB/sec
Sep 8 23:46:20.262666 kernel:    arm64_neon      : 28041 MB/sec
Sep 8 23:46:20.262679 kernel: xor: using function: arm64_neon (28041 MB/sec)
Sep 8 23:46:20.310625 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 8 23:46:20.321216 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 8 23:46:20.332832 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 8 23:46:20.347124 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Sep 8 23:46:20.350948 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 8 23:46:20.358789 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 8 23:46:20.370417 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Sep 8 23:46:20.402214 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 8 23:46:20.417801 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 8 23:46:20.460637 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 8 23:46:20.470820 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 8 23:46:20.483793 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 8 23:46:20.485175 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 8 23:46:20.486842 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 8 23:46:20.488925 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 8 23:46:20.500012 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 8 23:46:20.509244 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 8 23:46:20.522936 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 8 23:46:20.528837 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 8 23:46:20.531609 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 8 23:46:20.531642 kernel: GPT:9289727 != 19775487
Sep 8 23:46:20.532837 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 8 23:46:20.532873 kernel: GPT:9289727 != 19775487
Sep 8 23:46:20.534633 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 8 23:46:20.534770 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 8 23:46:20.545846 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 8 23:46:20.545966 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 8 23:46:20.548745 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 8 23:46:20.549756 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 8 23:46:20.550003 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 8 23:46:20.553351 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 8 23:46:20.568899 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (505)
Sep 8 23:46:20.567914 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 8 23:46:20.573899 kernel: BTRFS: device fsid 75950a77-34ea-4c25-8b07-0ac9de89ed80 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (521)
Sep 8 23:46:20.580297 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 8 23:46:20.581856 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 8 23:46:20.600135 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 8 23:46:20.611705 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 8 23:46:20.617526 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 8 23:46:20.618554 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 8 23:46:20.634758 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 8 23:46:20.636382 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 8 23:46:20.641611 disk-uuid[551]: Primary Header is updated.
Sep 8 23:46:20.641611 disk-uuid[551]: Secondary Entries is updated.
Sep 8 23:46:20.641611 disk-uuid[551]: Secondary Header is updated.
Sep 8 23:46:20.645613 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 8 23:46:20.660954 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 8 23:46:21.659796 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 8 23:46:21.659850 disk-uuid[552]: The operation has completed successfully.
Sep 8 23:46:21.687356 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 8 23:46:21.687486 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 8 23:46:21.723789 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 8 23:46:21.727838 sh[574]: Success
Sep 8 23:46:21.739632 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 8 23:46:21.795422 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 8 23:46:21.797127 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 8 23:46:21.798636 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 8 23:46:21.809265 kernel: BTRFS info (device dm-0): first mount of filesystem 75950a77-34ea-4c25-8b07-0ac9de89ed80
Sep 8 23:46:21.809297 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 8 23:46:21.810185 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 8 23:46:21.810199 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 8 23:46:21.810778 kernel: BTRFS info (device dm-0): using free space tree
Sep 8 23:46:21.814820 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 8 23:46:21.816002 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 8 23:46:21.825778 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 8 23:46:21.827227 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 8 23:46:21.840921 kernel: BTRFS info (device vda6): first mount of filesystem d1572d90-6486-4786-a65f-57e67d2def1a
Sep 8 23:46:21.840974 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 8 23:46:21.840985 kernel: BTRFS info (device vda6): using free space tree
Sep 8 23:46:21.844648 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 8 23:46:21.848626 kernel: BTRFS info (device vda6): last unmount of filesystem d1572d90-6486-4786-a65f-57e67d2def1a
Sep 8 23:46:21.851235 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 8 23:46:21.856842 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 8 23:46:21.923556 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 8 23:46:21.935800 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 8 23:46:21.949019 ignition[659]: Ignition 2.20.0
Sep 8 23:46:21.949031 ignition[659]: Stage: fetch-offline
Sep 8 23:46:21.949085 ignition[659]: no configs at "/usr/lib/ignition/base.d"
Sep 8 23:46:21.949094 ignition[659]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 8 23:46:21.949257 ignition[659]: parsed url from cmdline: ""
Sep 8 23:46:21.949260 ignition[659]: no config URL provided
Sep 8 23:46:21.949265 ignition[659]: reading system config file "/usr/lib/ignition/user.ign"
Sep 8 23:46:21.949272 ignition[659]: no config at "/usr/lib/ignition/user.ign"
Sep 8 23:46:21.949297 ignition[659]: op(1): [started] loading QEMU firmware config module
Sep 8 23:46:21.949302 ignition[659]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 8 23:46:21.960663 ignition[659]: op(1): [finished] loading QEMU firmware config module
Sep 8 23:46:21.960687 ignition[659]: QEMU firmware config was not found. Ignoring...
Sep 8 23:46:21.962779 systemd-networkd[763]: lo: Link UP
Sep 8 23:46:21.962784 systemd-networkd[763]: lo: Gained carrier
Sep 8 23:46:21.963571 systemd-networkd[763]: Enumeration completed
Sep 8 23:46:21.964020 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 8 23:46:21.964024 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 8 23:46:21.964769 systemd-networkd[763]: eth0: Link UP
Sep 8 23:46:21.964772 systemd-networkd[763]: eth0: Gained carrier
Sep 8 23:46:21.964780 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 8 23:46:21.965253 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 8 23:46:21.966449 systemd[1]: Reached target network.target - Network.
Sep 8 23:46:21.997644 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 8 23:46:22.016228 ignition[659]: parsing config with SHA512: f80721633f0e1afc911d173f738864d88914f2e86164332b5bdbac48751b2143cd130d611a0b662b41629703084b4c19766093f8509d4bed6c046550b3a65a53
Sep 8 23:46:22.022296 unknown[659]: fetched base config from "system"
Sep 8 23:46:22.022310 unknown[659]: fetched user config from "qemu"
Sep 8 23:46:22.023310 ignition[659]: fetch-offline: fetch-offline passed
Sep 8 23:46:22.024993 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 8 23:46:22.023420 ignition[659]: Ignition finished successfully
Sep 8 23:46:22.026845 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 8 23:46:22.031775 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 8 23:46:22.044970 ignition[769]: Ignition 2.20.0
Sep 8 23:46:22.044980 ignition[769]: Stage: kargs
Sep 8 23:46:22.045145 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Sep 8 23:46:22.045154 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 8 23:46:22.046040 ignition[769]: kargs: kargs passed
Sep 8 23:46:22.046085 ignition[769]: Ignition finished successfully
Sep 8 23:46:22.049426 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 8 23:46:22.062833 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 8 23:46:22.072974 ignition[777]: Ignition 2.20.0
Sep 8 23:46:22.072983 ignition[777]: Stage: disks
Sep 8 23:46:22.073154 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Sep 8 23:46:22.073165 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 8 23:46:22.075519 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 8 23:46:22.074119 ignition[777]: disks: disks passed
Sep 8 23:46:22.077148 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 8 23:46:22.074164 ignition[777]: Ignition finished successfully
Sep 8 23:46:22.078745 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 8 23:46:22.080309 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 8 23:46:22.082035 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 8 23:46:22.083637 systemd[1]: Reached target basic.target - Basic System.
Sep 8 23:46:22.094790 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 8 23:46:22.106963 systemd-fsck[786]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 8 23:46:22.110699 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 8 23:46:22.118796 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 8 23:46:22.160609 kernel: EXT4-fs (vda9): mounted filesystem 3b93848a-00fd-42cd-b996-7bf357d8ae77 r/w with ordered data mode. Quota mode: none.
Sep 8 23:46:22.161409 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 8 23:46:22.162850 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 8 23:46:22.177783 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 8 23:46:22.179637 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 8 23:46:22.181188 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 8 23:46:22.181235 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 8 23:46:22.181264 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 8 23:46:22.190014 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (794)
Sep 8 23:46:22.190049 kernel: BTRFS info (device vda6): first mount of filesystem d1572d90-6486-4786-a65f-57e67d2def1a
Sep 8 23:46:22.185682 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 8 23:46:22.195906 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 8 23:46:22.195938 kernel: BTRFS info (device vda6): using free space tree
Sep 8 23:46:22.195955 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 8 23:46:22.190079 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 8 23:46:22.196306 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 8 23:46:22.226947 initrd-setup-root[820]: cut: /sysroot/etc/passwd: No such file or directory
Sep 8 23:46:22.230240 initrd-setup-root[827]: cut: /sysroot/etc/group: No such file or directory
Sep 8 23:46:22.233544 initrd-setup-root[834]: cut: /sysroot/etc/shadow: No such file or directory
Sep 8 23:46:22.238097 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 8 23:46:22.310865 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 8 23:46:22.321746 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 8 23:46:22.324249 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 8 23:46:22.329606 kernel: BTRFS info (device vda6): last unmount of filesystem d1572d90-6486-4786-a65f-57e67d2def1a
Sep 8 23:46:22.344610 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 8 23:46:22.348256 ignition[909]: INFO : Ignition 2.20.0
Sep 8 23:46:22.348256 ignition[909]: INFO : Stage: mount
Sep 8 23:46:22.350492 ignition[909]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 8 23:46:22.350492 ignition[909]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 8 23:46:22.350492 ignition[909]: INFO : mount: mount passed
Sep 8 23:46:22.350492 ignition[909]: INFO : Ignition finished successfully
Sep 8 23:46:22.351676 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 8 23:46:22.357718 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 8 23:46:22.938039 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 8 23:46:22.950816 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 8 23:46:22.957462 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (922)
Sep 8 23:46:22.957497 kernel: BTRFS info (device vda6): first mount of filesystem d1572d90-6486-4786-a65f-57e67d2def1a
Sep 8 23:46:22.957508 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 8 23:46:22.958789 kernel: BTRFS info (device vda6): using free space tree
Sep 8 23:46:22.960600 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 8 23:46:22.961888 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 8 23:46:22.977780 ignition[939]: INFO : Ignition 2.20.0
Sep 8 23:46:22.977780 ignition[939]: INFO : Stage: files
Sep 8 23:46:22.979091 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 8 23:46:22.979091 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 8 23:46:22.979091 ignition[939]: DEBUG : files: compiled without relabeling support, skipping
Sep 8 23:46:22.982128 ignition[939]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 8 23:46:22.982128 ignition[939]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 8 23:46:22.982128 ignition[939]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 8 23:46:22.985608 ignition[939]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 8 23:46:22.985608 ignition[939]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 8 23:46:22.985608 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 8 23:46:22.985608 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 8 23:46:22.982528 unknown[939]: wrote ssh authorized keys file for user: core
Sep 8 23:46:23.026158 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 8 23:46:23.316813 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 8 23:46:23.316813 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 8 23:46:23.320762 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 8 23:46:23.544572 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 8 23:46:23.802749 systemd-networkd[763]: eth0: Gained IPv6LL
Sep 8 23:46:23.809473 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 8 23:46:23.809473 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 8 23:46:23.813804 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 8 23:46:23.813804 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 8 23:46:23.813804 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 8 23:46:23.813804 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 8 23:46:23.813804 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 8 23:46:23.813804 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 8 23:46:23.813804 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 8 23:46:23.813804 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 8 23:46:23.813804 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 8 23:46:23.813804 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 8 23:46:23.813804 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 8 23:46:23.813804 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 8 23:46:23.813804 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 8 23:46:24.081923 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 8 23:46:24.790978 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 8 23:46:24.790978 ignition[939]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 8 23:46:24.794524 ignition[939]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 8 23:46:24.794524 ignition[939]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 8 23:46:24.794524 ignition[939]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 8 23:46:24.794524 ignition[939]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 8 23:46:24.794524 ignition[939]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 8 23:46:24.794524 ignition[939]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 8 23:46:24.794524 ignition[939]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 8 23:46:24.794524 ignition[939]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 8 23:46:24.812743 ignition[939]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 8 23:46:24.816806 ignition[939]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 8 23:46:24.816806 ignition[939]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 8 23:46:24.816806 ignition[939]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 8 23:46:24.816806 ignition[939]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 8 23:46:24.816806 ignition[939]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 8 23:46:24.826581 ignition[939]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 8 23:46:24.826581 ignition[939]: INFO : files: files passed
Sep 8 23:46:24.826581 ignition[939]: INFO : Ignition finished successfully
Sep 8 23:46:24.821502 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 8 23:46:24.832795 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 8 23:46:24.835341 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 8 23:46:24.838606 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 8 23:46:24.838713 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 8 23:46:24.842705 initrd-setup-root-after-ignition[968]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 8 23:46:24.845621 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 8 23:46:24.845621 initrd-setup-root-after-ignition[970]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 8 23:46:24.848948 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 8 23:46:24.848284 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 8 23:46:24.850313 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 8 23:46:24.862809 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 8 23:46:24.881917 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 8 23:46:24.882043 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 8 23:46:24.884025 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 8 23:46:24.886032 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 8 23:46:24.887618 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 8 23:46:24.888475 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 8 23:46:24.904317 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 8 23:46:24.914800 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 8 23:46:24.922556 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 8 23:46:24.923517 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 8 23:46:24.925855 systemd[1]: Stopped target timers.target - Timer Units.
Sep 8 23:46:24.927367 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 8 23:46:24.927492 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 8 23:46:24.930027 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 8 23:46:24.931928 systemd[1]: Stopped target basic.target - Basic System.
Sep 8 23:46:24.933512 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 8 23:46:24.935189 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 8 23:46:24.937102 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 8 23:46:24.939049 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 8 23:46:24.940843 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 8 23:46:24.942670 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 8 23:46:24.944547 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 8 23:46:24.946324 systemd[1]: Stopped target swap.target - Swaps.
Sep 8 23:46:24.947903 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 8 23:46:24.948034 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 8 23:46:24.950473 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 8 23:46:24.952545 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 8 23:46:24.954563 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 8 23:46:24.955675 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 8 23:46:24.956748 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 8 23:46:24.956879 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 8 23:46:24.959723 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 8 23:46:24.959844 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 8 23:46:24.961776 systemd[1]: Stopped target paths.target - Path Units.
Sep 8 23:46:24.963327 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 8 23:46:24.966656 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 8 23:46:24.968624 systemd[1]: Stopped target slices.target - Slice Units.
Sep 8 23:46:24.970645 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 8 23:46:24.972259 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 8 23:46:24.972351 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 8 23:46:24.973820 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 8 23:46:24.973899 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 8 23:46:24.975423 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 8 23:46:24.975541 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 8 23:46:24.977228 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 8 23:46:24.977330 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 8 23:46:24.990845 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 8 23:46:24.991549 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 8 23:46:24.991705 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 8 23:46:24.995980 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 8 23:46:24.998470 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 8 23:46:24.999773 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 8 23:46:25.001069 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 8 23:46:25.001174 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 8 23:46:25.006932 ignition[995]: INFO : Ignition 2.20.0
Sep 8 23:46:25.006932 ignition[995]: INFO : Stage: umount
Sep 8 23:46:25.006932 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 8 23:46:25.006932 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 8 23:46:25.006932 ignition[995]: INFO : umount: umount passed
Sep 8 23:46:25.006932 ignition[995]: INFO : Ignition finished successfully
Sep 8 23:46:25.006165 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 8 23:46:25.007616 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 8 23:46:25.009921 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 8 23:46:25.010389 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 8 23:46:25.010471 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 8 23:46:25.013821 systemd[1]: Stopped target network.target - Network.
Sep 8 23:46:25.015232 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 8 23:46:25.015323 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 8 23:46:25.016757 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 8 23:46:25.016804 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 8 23:46:25.018208 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 8 23:46:25.018247 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 8 23:46:25.021866 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 8 23:46:25.021909 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 8 23:46:25.026179 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 8 23:46:25.027575 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 8 23:46:25.035112 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 8 23:46:25.035255 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 8 23:46:25.038445 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 8 23:46:25.038706 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 8 23:46:25.038930 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 8 23:46:25.042211 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 8 23:46:25.042777 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 8 23:46:25.042838 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 8 23:46:25.056723 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 8 23:46:25.057504 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 8 23:46:25.057568 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 8 23:46:25.059448 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 8 23:46:25.059496 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 8 23:46:25.063474 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 8 23:46:25.063526 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 8 23:46:25.064580 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 8 23:46:25.064637 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 8 23:46:25.067813 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 8 23:46:25.071436 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 8 23:46:25.071888 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 8 23:46:25.077948 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 8 23:46:25.078052 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 8 23:46:25.079910 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 8 23:46:25.079969 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 8 23:46:25.081257 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 8 23:46:25.082679 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 8 23:46:25.087285 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 8 23:46:25.087425 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:46:25.088860 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 8 23:46:25.088901 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 8 23:46:25.090510 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 8 23:46:25.090543 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 8 23:46:25.092529 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 8 23:46:25.092577 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 8 23:46:25.095438 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 8 23:46:25.095487 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 8 23:46:25.098142 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 8 23:46:25.098184 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 8 23:46:25.111765 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 8 23:46:25.112581 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 8 23:46:25.112673 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 8 23:46:25.115659 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 8 23:46:25.115707 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:46:25.119269 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 8 23:46:25.119327 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 8 23:46:25.120363 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 8 23:46:25.120478 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 8 23:46:25.122426 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 8 23:46:25.125177 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 8 23:46:25.134355 systemd[1]: Switching root. Sep 8 23:46:25.168844 systemd-journald[238]: Journal stopped Sep 8 23:46:25.911932 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
Sep 8 23:46:25.911994 kernel: SELinux: policy capability network_peer_controls=1 Sep 8 23:46:25.912012 kernel: SELinux: policy capability open_perms=1 Sep 8 23:46:25.912026 kernel: SELinux: policy capability extended_socket_class=1 Sep 8 23:46:25.912039 kernel: SELinux: policy capability always_check_network=0 Sep 8 23:46:25.912048 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 8 23:46:25.912058 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 8 23:46:25.912068 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 8 23:46:25.912078 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 8 23:46:25.912087 kernel: audit: type=1403 audit(1757375185.337:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 8 23:46:25.912102 systemd[1]: Successfully loaded SELinux policy in 32.265ms. Sep 8 23:46:25.912114 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.614ms. Sep 8 23:46:25.912127 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 8 23:46:25.912138 systemd[1]: Detected virtualization kvm. Sep 8 23:46:25.912148 systemd[1]: Detected architecture arm64. Sep 8 23:46:25.912159 systemd[1]: Detected first boot. Sep 8 23:46:25.912169 systemd[1]: Initializing machine ID from VM UUID. Sep 8 23:46:25.912179 zram_generator::config[1042]: No configuration found. Sep 8 23:46:25.912190 kernel: NET: Registered PF_VSOCK protocol family Sep 8 23:46:25.912200 systemd[1]: Populated /etc with preset unit settings. Sep 8 23:46:25.912211 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 8 23:46:25.912223 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Sep 8 23:46:25.912235 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 8 23:46:25.912245 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 8 23:46:25.912256 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 8 23:46:25.912266 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 8 23:46:25.912277 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 8 23:46:25.912287 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 8 23:46:25.912298 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 8 23:46:25.912310 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 8 23:46:25.912320 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 8 23:46:25.912330 systemd[1]: Created slice user.slice - User and Session Slice. Sep 8 23:46:25.912340 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 8 23:46:25.912351 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 8 23:46:25.912362 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 8 23:46:25.912372 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 8 23:46:25.912383 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 8 23:46:25.912393 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 8 23:46:25.912405 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 8 23:46:25.912416 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Sep 8 23:46:25.912427 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 8 23:46:25.912437 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 8 23:46:25.912448 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 8 23:46:25.912459 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 8 23:46:25.912470 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:46:25.912480 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 8 23:46:25.912492 systemd[1]: Reached target slices.target - Slice Units. Sep 8 23:46:25.912502 systemd[1]: Reached target swap.target - Swaps. Sep 8 23:46:25.912513 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 8 23:46:25.912523 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 8 23:46:25.912536 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 8 23:46:25.912546 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 8 23:46:25.912558 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 8 23:46:25.912568 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 8 23:46:25.912578 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 8 23:46:25.912670 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 8 23:46:25.912684 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 8 23:46:25.912695 systemd[1]: Mounting media.mount - External Media Directory... Sep 8 23:46:25.912705 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 8 23:46:25.912716 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 8 23:46:25.912726 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Sep 8 23:46:25.912737 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 8 23:46:25.912747 systemd[1]: Reached target machines.target - Containers. Sep 8 23:46:25.912758 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 8 23:46:25.912770 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:46:25.912781 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 8 23:46:25.912791 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 8 23:46:25.912801 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:46:25.912811 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 8 23:46:25.912822 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:46:25.912832 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 8 23:46:25.912842 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:46:25.912853 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 8 23:46:25.912864 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 8 23:46:25.912875 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 8 23:46:25.912885 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 8 23:46:25.912895 systemd[1]: Stopped systemd-fsck-usr.service. 
Sep 8 23:46:25.912906 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:46:25.912916 kernel: fuse: init (API version 7.39) Sep 8 23:46:25.912926 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 8 23:46:25.912936 kernel: loop: module loaded Sep 8 23:46:25.912947 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 8 23:46:25.912958 kernel: ACPI: bus type drm_connector registered Sep 8 23:46:25.912967 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 8 23:46:25.912977 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 8 23:46:25.912989 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 8 23:46:25.912999 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 8 23:46:25.913010 systemd[1]: verity-setup.service: Deactivated successfully. Sep 8 23:46:25.913020 systemd[1]: Stopped verity-setup.service. Sep 8 23:46:25.913032 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 8 23:46:25.913043 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 8 23:46:25.913053 systemd[1]: Mounted media.mount - External Media Directory. Sep 8 23:46:25.913063 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 8 23:46:25.913097 systemd-journald[1110]: Collecting audit messages is disabled. Sep 8 23:46:25.913121 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 8 23:46:25.913132 systemd-journald[1110]: Journal started Sep 8 23:46:25.913153 systemd-journald[1110]: Runtime Journal (/run/log/journal/8ed82f32b0114b28b58d87d3ce92febb) is 5.9M, max 47.3M, 41.4M free. 
Sep 8 23:46:25.713993 systemd[1]: Queued start job for default target multi-user.target. Sep 8 23:46:25.723760 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 8 23:46:25.724158 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 8 23:46:25.915097 systemd[1]: Started systemd-journald.service - Journal Service. Sep 8 23:46:25.915761 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 8 23:46:25.918624 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 8 23:46:25.919838 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 8 23:46:25.921096 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 8 23:46:25.921982 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 8 23:46:25.924279 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:46:25.924444 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:46:25.927795 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 8 23:46:25.927986 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 8 23:46:25.929253 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:46:25.929421 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:46:25.930753 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 8 23:46:25.930911 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 8 23:46:25.932131 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:46:25.932296 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:46:25.933489 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 8 23:46:25.934877 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Sep 8 23:46:25.936503 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 8 23:46:25.937923 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 8 23:46:25.951274 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 8 23:46:25.956740 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 8 23:46:25.958741 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 8 23:46:25.959659 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 8 23:46:25.959699 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 8 23:46:25.961453 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 8 23:46:25.963811 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 8 23:46:25.965776 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 8 23:46:25.966701 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:46:25.968027 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 8 23:46:25.969766 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 8 23:46:25.970767 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 8 23:46:25.974800 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 8 23:46:25.976617 systemd-journald[1110]: Time spent on flushing to /var/log/journal/8ed82f32b0114b28b58d87d3ce92febb is 24.592ms for 868 entries. 
Sep 8 23:46:25.976617 systemd-journald[1110]: System Journal (/var/log/journal/8ed82f32b0114b28b58d87d3ce92febb) is 8M, max 195.6M, 187.6M free. Sep 8 23:46:26.019825 systemd-journald[1110]: Received client request to flush runtime journal. Sep 8 23:46:26.019880 kernel: loop0: detected capacity change from 0 to 113512 Sep 8 23:46:25.976754 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 8 23:46:25.977716 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 8 23:46:25.982830 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 8 23:46:25.985854 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 8 23:46:25.990682 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:46:25.992038 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 8 23:46:25.993852 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 8 23:46:25.995206 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 8 23:46:25.997624 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 8 23:46:26.002514 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 8 23:46:26.015816 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 8 23:46:26.022076 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 8 23:46:26.024546 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 8 23:46:26.027624 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:46:26.028614 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 8 23:46:26.029455 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Sep 8 23:46:26.044884 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 8 23:46:26.049170 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 8 23:46:26.053536 udevadm[1171]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 8 23:46:26.064866 kernel: loop1: detected capacity change from 0 to 123192 Sep 8 23:46:26.065847 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Sep 8 23:46:26.065866 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Sep 8 23:46:26.070853 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 8 23:46:26.105618 kernel: loop2: detected capacity change from 0 to 211168 Sep 8 23:46:26.139619 kernel: loop3: detected capacity change from 0 to 113512 Sep 8 23:46:26.145614 kernel: loop4: detected capacity change from 0 to 123192 Sep 8 23:46:26.151618 kernel: loop5: detected capacity change from 0 to 211168 Sep 8 23:46:26.157967 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 8 23:46:26.158384 (sd-merge)[1183]: Merged extensions into '/usr'. Sep 8 23:46:26.161711 systemd[1]: Reload requested from client PID 1159 ('systemd-sysext') (unit systemd-sysext.service)... Sep 8 23:46:26.161726 systemd[1]: Reloading... Sep 8 23:46:26.232772 zram_generator::config[1210]: No configuration found. Sep 8 23:46:26.258733 ldconfig[1154]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 8 23:46:26.325582 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:46:26.376429 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Sep 8 23:46:26.376746 systemd[1]: Reloading finished in 214 ms. Sep 8 23:46:26.395506 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 8 23:46:26.396908 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 8 23:46:26.416126 systemd[1]: Starting ensure-sysext.service... Sep 8 23:46:26.417917 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 8 23:46:26.427835 systemd[1]: Reload requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)... Sep 8 23:46:26.427853 systemd[1]: Reloading... Sep 8 23:46:26.436222 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 8 23:46:26.436446 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 8 23:46:26.437166 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 8 23:46:26.437410 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Sep 8 23:46:26.437465 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Sep 8 23:46:26.441063 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Sep 8 23:46:26.441072 systemd-tmpfiles[1246]: Skipping /boot Sep 8 23:46:26.450169 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Sep 8 23:46:26.450175 systemd-tmpfiles[1246]: Skipping /boot Sep 8 23:46:26.481721 zram_generator::config[1274]: No configuration found. Sep 8 23:46:26.566058 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:46:26.617461 systemd[1]: Reloading finished in 189 ms. 
Sep 8 23:46:26.631365 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 8 23:46:26.656392 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 8 23:46:26.664476 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 8 23:46:26.667674 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 8 23:46:26.669799 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 8 23:46:26.675963 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 8 23:46:26.687197 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 8 23:46:26.689905 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 8 23:46:26.694997 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 8 23:46:26.697499 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:46:26.705775 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:46:26.709208 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:46:26.713392 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:46:26.714686 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:46:26.715122 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:46:26.717775 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 8 23:46:26.719412 systemd-udevd[1316]: Using default interface naming scheme 'v255'. 
Sep 8 23:46:26.721132 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 8 23:46:26.724131 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:46:26.725770 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:46:26.727021 augenrules[1342]: No rules Sep 8 23:46:26.728754 systemd[1]: audit-rules.service: Deactivated successfully. Sep 8 23:46:26.728923 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 8 23:46:26.730092 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:46:26.730230 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:46:26.733084 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:46:26.733217 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:46:26.734975 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 8 23:46:26.758020 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 8 23:46:26.759322 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:46:26.775628 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 8 23:46:26.787639 systemd[1]: Finished ensure-sysext.service. Sep 8 23:46:26.799972 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 8 23:46:26.800772 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:46:26.802562 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:46:26.807771 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 8 23:46:26.812475 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:46:26.815600 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Sep 8 23:46:26.816762 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:46:26.816809 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:46:26.823225 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 8 23:46:26.827353 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 8 23:46:26.831425 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 8 23:46:26.831853 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 8 23:46:26.834575 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:46:26.834766 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:46:26.836226 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 8 23:46:26.838650 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 8 23:46:26.842157 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:46:26.842350 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:46:26.844469 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 8 23:46:26.849673 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1359) Sep 8 23:46:26.856144 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 8 23:46:26.877743 augenrules[1374]: /sbin/augenrules: No change Sep 8 23:46:26.883322 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Sep 8 23:46:26.883521 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:46:26.893758 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 8 23:46:26.898184 augenrules[1414]: No rules Sep 8 23:46:26.900701 systemd[1]: audit-rules.service: Deactivated successfully. Sep 8 23:46:26.900901 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 8 23:46:26.908525 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 8 23:46:26.910824 systemd[1]: Reached target time-set.target - System Time Set. Sep 8 23:46:26.915723 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 8 23:46:26.924835 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 8 23:46:26.935532 systemd-networkd[1381]: lo: Link UP Sep 8 23:46:26.935540 systemd-networkd[1381]: lo: Gained carrier Sep 8 23:46:26.936522 systemd-networkd[1381]: Enumeration completed Sep 8 23:46:26.936975 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 8 23:46:26.937000 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:46:26.937004 systemd-networkd[1381]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 8 23:46:26.937453 systemd-networkd[1381]: eth0: Link UP Sep 8 23:46:26.937460 systemd-networkd[1381]: eth0: Gained carrier Sep 8 23:46:26.937473 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:46:26.949861 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Sep 8 23:46:26.952471 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 8 23:46:26.952733 systemd-resolved[1314]: Positive Trust Anchors: Sep 8 23:46:26.952760 systemd-resolved[1314]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 8 23:46:26.952791 systemd-resolved[1314]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 8 23:46:26.954657 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 8 23:46:26.955767 systemd-networkd[1381]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 8 23:46:26.956336 systemd-timesyncd[1383]: Network configuration changed, trying to establish connection. Sep 8 23:46:26.958978 systemd-timesyncd[1383]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 8 23:46:26.959040 systemd-timesyncd[1383]: Initial clock synchronization to Mon 2025-09-08 23:46:26.866645 UTC. Sep 8 23:46:26.960402 systemd-resolved[1314]: Defaulting to hostname 'linux'. Sep 8 23:46:26.962603 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 8 23:46:26.965665 systemd[1]: Reached target network.target - Network. Sep 8 23:46:26.966803 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 8 23:46:26.980837 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 8 23:46:26.982069 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 8 23:46:26.983537 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 8 23:46:26.989697 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 8 23:46:27.003673 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 8 23:46:27.016609 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:46:27.037233 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 8 23:46:27.038509 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 8 23:46:27.039541 systemd[1]: Reached target sysinit.target - System Initialization. Sep 8 23:46:27.040517 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 8 23:46:27.041565 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 8 23:46:27.042658 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 8 23:46:27.043559 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 8 23:46:27.044494 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 8 23:46:27.045511 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 8 23:46:27.045542 systemd[1]: Reached target paths.target - Path Units. Sep 8 23:46:27.046463 systemd[1]: Reached target timers.target - Timer Units. Sep 8 23:46:27.048089 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 8 23:46:27.050317 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Sep 8 23:46:27.053522 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 8 23:46:27.054809 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 8 23:46:27.055830 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 8 23:46:27.063576 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 8 23:46:27.064867 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 8 23:46:27.067011 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 8 23:46:27.068469 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 8 23:46:27.069468 systemd[1]: Reached target sockets.target - Socket Units. Sep 8 23:46:27.070313 systemd[1]: Reached target basic.target - Basic System. Sep 8 23:46:27.071096 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 8 23:46:27.071127 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 8 23:46:27.072129 systemd[1]: Starting containerd.service - containerd container runtime... Sep 8 23:46:27.075222 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 8 23:46:27.075398 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 8 23:46:27.079285 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 8 23:46:27.082782 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 8 23:46:27.084020 jq[1444]: false Sep 8 23:46:27.084100 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 8 23:46:27.085154 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Sep 8 23:46:27.086977 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 8 23:46:27.090040 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 8 23:46:27.092817 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 8 23:46:27.100303 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 8 23:46:27.100426 extend-filesystems[1445]: Found loop3 Sep 8 23:46:27.100426 extend-filesystems[1445]: Found loop4 Sep 8 23:46:27.106300 extend-filesystems[1445]: Found loop5 Sep 8 23:46:27.106300 extend-filesystems[1445]: Found vda Sep 8 23:46:27.106300 extend-filesystems[1445]: Found vda1 Sep 8 23:46:27.106300 extend-filesystems[1445]: Found vda2 Sep 8 23:46:27.106300 extend-filesystems[1445]: Found vda3 Sep 8 23:46:27.106300 extend-filesystems[1445]: Found usr Sep 8 23:46:27.106300 extend-filesystems[1445]: Found vda4 Sep 8 23:46:27.106300 extend-filesystems[1445]: Found vda6 Sep 8 23:46:27.106300 extend-filesystems[1445]: Found vda7 Sep 8 23:46:27.106300 extend-filesystems[1445]: Found vda9 Sep 8 23:46:27.106300 extend-filesystems[1445]: Checking size of /dev/vda9 Sep 8 23:46:27.102672 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 8 23:46:27.115738 dbus-daemon[1443]: [system] SELinux support is enabled Sep 8 23:46:27.103143 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 8 23:46:27.103762 systemd[1]: Starting update-engine.service - Update Engine... Sep 8 23:46:27.107746 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 8 23:46:27.111617 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 8 23:46:27.115791 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Sep 8 23:46:27.119748 jq[1460]: true Sep 8 23:46:27.116601 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 8 23:46:27.116756 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 8 23:46:27.122133 systemd[1]: motdgen.service: Deactivated successfully. Sep 8 23:46:27.122310 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 8 23:46:27.124418 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 8 23:46:27.124999 extend-filesystems[1445]: Resized partition /dev/vda9 Sep 8 23:46:27.125709 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 8 23:46:27.131671 extend-filesystems[1468]: resize2fs 1.47.1 (20-May-2024) Sep 8 23:46:27.133727 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 8 23:46:27.133787 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 8 23:46:27.136563 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 8 23:46:27.136779 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Sep 8 23:46:27.145687 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 8 23:46:27.146309 jq[1469]: true Sep 8 23:46:27.149384 update_engine[1457]: I20250908 23:46:27.149178 1457 main.cc:92] Flatcar Update Engine starting Sep 8 23:46:27.152365 (ntainerd)[1478]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 8 23:46:27.154708 update_engine[1457]: I20250908 23:46:27.154665 1457 update_check_scheduler.cc:74] Next update check in 9m3s Sep 8 23:46:27.154821 systemd[1]: Started update-engine.service - Update Engine. Sep 8 23:46:27.157015 tar[1467]: linux-arm64/LICENSE Sep 8 23:46:27.157233 tar[1467]: linux-arm64/helm Sep 8 23:46:27.164387 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 8 23:46:27.175649 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1355) Sep 8 23:46:27.181958 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 8 23:46:27.195114 extend-filesystems[1468]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 8 23:46:27.195114 extend-filesystems[1468]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 8 23:46:27.195114 extend-filesystems[1468]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 8 23:46:27.198983 extend-filesystems[1445]: Resized filesystem in /dev/vda9 Sep 8 23:46:27.197917 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 8 23:46:27.199660 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 8 23:46:27.213339 systemd-logind[1456]: Watching system buttons on /dev/input/event0 (Power Button) Sep 8 23:46:27.214369 systemd-logind[1456]: New seat seat0. Sep 8 23:46:27.215319 systemd[1]: Started systemd-logind.service - User Login Management. 
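The EXT4 messages above record an online grow of the root filesystem from 553472 to 1864699 blocks; with the 4k block size resize2fs reports, that works out as follows (the GiB conversion is my own arithmetic, not in the log):

```python
# Translate the block counts from the EXT4 resize messages into bytes,
# using the 4 KiB block size noted by extend-filesystems ("1864699 (4k) blocks").
BLOCK_SIZE = 4096
old_bytes = 553_472 * BLOCK_SIZE
new_bytes = 1_864_699 * BLOCK_SIZE
print(f"{old_bytes / 2**30:.2f} GiB -> {new_bytes / 2**30:.2f} GiB")
```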
Sep 8 23:46:27.227201 bash[1504]: Updated "/home/core/.ssh/authorized_keys" Sep 8 23:46:27.227924 locksmithd[1483]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 8 23:46:27.229208 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 8 23:46:27.231880 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 8 23:46:27.306044 containerd[1478]: time="2025-09-08T23:46:27.305950660Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 8 23:46:27.332489 containerd[1478]: time="2025-09-08T23:46:27.332387746Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:46:27.334885 containerd[1478]: time="2025-09-08T23:46:27.333757094Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:46:27.334885 containerd[1478]: time="2025-09-08T23:46:27.333788952Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 8 23:46:27.334885 containerd[1478]: time="2025-09-08T23:46:27.333806690Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 8 23:46:27.334885 containerd[1478]: time="2025-09-08T23:46:27.333960528Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 8 23:46:27.334885 containerd[1478]: time="2025-09-08T23:46:27.333977391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Sep 8 23:46:27.334885 containerd[1478]: time="2025-09-08T23:46:27.334043492Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:46:27.334885 containerd[1478]: time="2025-09-08T23:46:27.334056617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:46:27.334885 containerd[1478]: time="2025-09-08T23:46:27.334244698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:46:27.334885 containerd[1478]: time="2025-09-08T23:46:27.334259414Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 8 23:46:27.334885 containerd[1478]: time="2025-09-08T23:46:27.334271465Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:46:27.334885 containerd[1478]: time="2025-09-08T23:46:27.334280851Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 8 23:46:27.335111 containerd[1478]: time="2025-09-08T23:46:27.334343333Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:46:27.335111 containerd[1478]: time="2025-09-08T23:46:27.334530659Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:46:27.335111 containerd[1478]: time="2025-09-08T23:46:27.334665008Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:46:27.335111 containerd[1478]: time="2025-09-08T23:46:27.334677974Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 8 23:46:27.335111 containerd[1478]: time="2025-09-08T23:46:27.334777125Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 8 23:46:27.335111 containerd[1478]: time="2025-09-08T23:46:27.334823937Z" level=info msg="metadata content store policy set" policy=shared Sep 8 23:46:27.338758 containerd[1478]: time="2025-09-08T23:46:27.338731809Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 8 23:46:27.338874 containerd[1478]: time="2025-09-08T23:46:27.338859357Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 8 23:46:27.339009 containerd[1478]: time="2025-09-08T23:46:27.338992673Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 8 23:46:27.339120 containerd[1478]: time="2025-09-08T23:46:27.339104074Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 8 23:46:27.339233 containerd[1478]: time="2025-09-08T23:46:27.339218021Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 8 23:46:27.339539 containerd[1478]: time="2025-09-08T23:46:27.339519055Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 8 23:46:27.340075 containerd[1478]: time="2025-09-08T23:46:27.340052953Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Sep 8 23:46:27.340409 containerd[1478]: time="2025-09-08T23:46:27.340350407Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 8 23:46:27.340551 containerd[1478]: time="2025-09-08T23:46:27.340472070Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 8 23:46:27.340551 containerd[1478]: time="2025-09-08T23:46:27.340494223Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 8 23:46:27.342340 containerd[1478]: time="2025-09-08T23:46:27.340746377Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 8 23:46:27.342340 containerd[1478]: time="2025-09-08T23:46:27.340769922Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 8 23:46:27.342340 containerd[1478]: time="2025-09-08T23:46:27.340781973Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 8 23:46:27.342340 containerd[1478]: time="2025-09-08T23:46:27.340794501Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 8 23:46:27.342340 containerd[1478]: time="2025-09-08T23:46:27.340808700Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 8 23:46:27.342340 containerd[1478]: time="2025-09-08T23:46:27.340821188Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 8 23:46:27.342340 containerd[1478]: time="2025-09-08T23:46:27.340833995Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Sep 8 23:46:27.342340 containerd[1478]: time="2025-09-08T23:46:27.340845051Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 8 23:46:27.342340 containerd[1478]: time="2025-09-08T23:46:27.340865414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 8 23:46:27.342340 containerd[1478]: time="2025-09-08T23:46:27.340878181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 8 23:46:27.342340 containerd[1478]: time="2025-09-08T23:46:27.340889993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 8 23:46:27.342340 containerd[1478]: time="2025-09-08T23:46:27.340901806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 8 23:46:27.342340 containerd[1478]: time="2025-09-08T23:46:27.340913260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 8 23:46:27.342340 containerd[1478]: time="2025-09-08T23:46:27.340938794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 8 23:46:27.342618 containerd[1478]: time="2025-09-08T23:46:27.340952555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 8 23:46:27.342618 containerd[1478]: time="2025-09-08T23:46:27.340968066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 8 23:46:27.342618 containerd[1478]: time="2025-09-08T23:46:27.340981429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 8 23:46:27.342618 containerd[1478]: time="2025-09-08T23:46:27.340996264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Sep 8 23:46:27.342618 containerd[1478]: time="2025-09-08T23:46:27.341007321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 8 23:46:27.342618 containerd[1478]: time="2025-09-08T23:46:27.341018974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 8 23:46:27.342618 containerd[1478]: time="2025-09-08T23:46:27.341033332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 8 23:46:27.342618 containerd[1478]: time="2025-09-08T23:46:27.341055047Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 8 23:46:27.342618 containerd[1478]: time="2025-09-08T23:46:27.341078950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 8 23:46:27.342618 containerd[1478]: time="2025-09-08T23:46:27.341092433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 8 23:46:27.342618 containerd[1478]: time="2025-09-08T23:46:27.341102932Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 8 23:46:27.342618 containerd[1478]: time="2025-09-08T23:46:27.341275224Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 8 23:46:27.342618 containerd[1478]: time="2025-09-08T23:46:27.341297815Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 8 23:46:27.342618 containerd[1478]: time="2025-09-08T23:46:27.341307678Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Sep 8 23:46:27.342877 containerd[1478]: time="2025-09-08T23:46:27.341318576Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 8 23:46:27.342877 containerd[1478]: time="2025-09-08T23:46:27.341329354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 8 23:46:27.342877 containerd[1478]: time="2025-09-08T23:46:27.341345740Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 8 23:46:27.342877 containerd[1478]: time="2025-09-08T23:46:27.341355365Z" level=info msg="NRI interface is disabled by configuration." Sep 8 23:46:27.342877 containerd[1478]: time="2025-09-08T23:46:27.341365666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 8 23:46:27.342989 containerd[1478]: time="2025-09-08T23:46:27.341710409Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: 
SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 8 23:46:27.342989 containerd[1478]: time="2025-09-08T23:46:27.341759885Z" level=info msg="Connect containerd service" Sep 8 23:46:27.342989 containerd[1478]: time="2025-09-08T23:46:27.341793493Z" level=info msg="using legacy CRI server" Sep 8 23:46:27.342989 containerd[1478]: time="2025-09-08T23:46:27.341800214Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 8 23:46:27.342989 containerd[1478]: time="2025-09-08T23:46:27.342030295Z" 
level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 8 23:46:27.347331 containerd[1478]: time="2025-09-08T23:46:27.347299043Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 8 23:46:27.348141 containerd[1478]: time="2025-09-08T23:46:27.347698711Z" level=info msg="Start subscribing containerd event" Sep 8 23:46:27.348141 containerd[1478]: time="2025-09-08T23:46:27.347753318Z" level=info msg="Start recovering state" Sep 8 23:46:27.348141 containerd[1478]: time="2025-09-08T23:46:27.347814169Z" level=info msg="Start event monitor" Sep 8 23:46:27.348141 containerd[1478]: time="2025-09-08T23:46:27.347824749Z" level=info msg="Start snapshots syncer" Sep 8 23:46:27.348141 containerd[1478]: time="2025-09-08T23:46:27.347832624Z" level=info msg="Start cni network conf syncer for default" Sep 8 23:46:27.348141 containerd[1478]: time="2025-09-08T23:46:27.347839385Z" level=info msg="Start streaming server" Sep 8 23:46:27.348141 containerd[1478]: time="2025-09-08T23:46:27.347970314Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 8 23:46:27.348141 containerd[1478]: time="2025-09-08T23:46:27.348020665Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 8 23:46:27.348141 containerd[1478]: time="2025-09-08T23:46:27.348116118Z" level=info msg="containerd successfully booted in 0.043693s" Sep 8 23:46:27.348201 systemd[1]: Started containerd.service - containerd container runtime. Sep 8 23:46:27.357403 sshd_keygen[1466]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 8 23:46:27.376315 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 8 23:46:27.389449 systemd[1]: Starting issuegen.service - Generate /run/issue... 
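The containerd entries above all follow the same logfmt-style shape: a quoted RFC 3339 time, a level, and a quoted msg. A hedged sketch of pulling those fields out of one such line (the regex is my simplification; it ignores escaped quotes and extra keys such as type= or error=):

```python
import re

# Split a containerd logfmt-style line into its time/level/msg fields.
# Simplified: assumes msg contains no escaped quotes and ignores trailing keys.
LINE = ('time="2025-09-08T23:46:27.348116118Z" level=info '
        'msg="containerd successfully booted in 0.043693s"')
PATTERN = re.compile(r'time="(?P<time>[^"]+)" level=(?P<level>\w+) msg="(?P<msg>[^"]+)"')
m = PATTERN.match(LINE)
print(m.group("level"), "|", m.group("msg"))
```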
Sep 8 23:46:27.393863 systemd[1]: issuegen.service: Deactivated successfully. Sep 8 23:46:27.395660 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 8 23:46:27.398406 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 8 23:46:27.409891 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 8 23:46:27.418872 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 8 23:46:27.420885 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 8 23:46:27.421888 systemd[1]: Reached target getty.target - Login Prompts. Sep 8 23:46:27.562950 tar[1467]: linux-arm64/README.md Sep 8 23:46:27.575760 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 8 23:46:28.410771 systemd-networkd[1381]: eth0: Gained IPv6LL Sep 8 23:46:28.413566 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 8 23:46:28.415307 systemd[1]: Reached target network-online.target - Network is Online. Sep 8 23:46:28.427103 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 8 23:46:28.429651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:46:28.432363 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 8 23:46:28.455713 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 8 23:46:28.455958 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 8 23:46:28.457641 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 8 23:46:28.460869 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 8 23:46:28.992008 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:46:28.993286 systemd[1]: Reached target multi-user.target - Multi-User System. 
Sep 8 23:46:28.994405 systemd[1]: Startup finished in 540ms (kernel) + 5.664s (initrd) + 3.689s (userspace) = 9.894s. Sep 8 23:46:28.995373 (kubelet)[1557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:46:29.357098 kubelet[1557]: E0908 23:46:29.356972 1557 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:46:29.360303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:46:29.360444 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:46:29.360782 systemd[1]: kubelet.service: Consumed 756ms CPU time, 259.5M memory peak. Sep 8 23:46:32.309115 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 8 23:46:32.310328 systemd[1]: Started sshd@0-10.0.0.54:22-10.0.0.1:35634.service - OpenSSH per-connection server daemon (10.0.0.1:35634). Sep 8 23:46:32.392983 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 35634 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:46:32.396132 sshd-session[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:46:32.405692 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 8 23:46:32.414881 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 8 23:46:32.423047 systemd-logind[1456]: New session 1 of user core. Sep 8 23:46:32.426564 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 8 23:46:32.429298 systemd[1]: Starting user@500.service - User Manager for UID 500... 
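The "Startup finished" summary above reports 540ms (kernel) + 5.664s (initrd) + 3.689s (userspace) = 9.894s. Re-adding the printed components gives 9.893s, a millisecond short of the stated total, because systemd rounds each phase independently of the sum; the tolerance check below is my own sanity check, not anything systemd does:

```python
# Re-add the rounded per-phase timings from the "Startup finished" line.
# The components are rounded independently of the total, so the sum may
# differ from the reported 9.894s by about a millisecond.
kernel, initrd, userspace = 0.540, 5.664, 3.689
total = kernel + initrd + userspace
print(f"{total:.3f}s (log reports 9.894s)")
```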
Sep 8 23:46:32.435798 (systemd)[1574]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 8 23:46:32.441598 systemd-logind[1456]: New session c1 of user core. Sep 8 23:46:32.547734 systemd[1574]: Queued start job for default target default.target. Sep 8 23:46:32.562677 systemd[1574]: Created slice app.slice - User Application Slice. Sep 8 23:46:32.562709 systemd[1574]: Reached target paths.target - Paths. Sep 8 23:46:32.562748 systemd[1574]: Reached target timers.target - Timers. Sep 8 23:46:32.564083 systemd[1574]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 8 23:46:32.577818 systemd[1574]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 8 23:46:32.577952 systemd[1574]: Reached target sockets.target - Sockets. Sep 8 23:46:32.577995 systemd[1574]: Reached target basic.target - Basic System. Sep 8 23:46:32.578026 systemd[1574]: Reached target default.target - Main User Target. Sep 8 23:46:32.578051 systemd[1574]: Startup finished in 129ms. Sep 8 23:46:32.578215 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 8 23:46:32.588779 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 8 23:46:32.649755 systemd[1]: Started sshd@1-10.0.0.54:22-10.0.0.1:35644.service - OpenSSH per-connection server daemon (10.0.0.1:35644). Sep 8 23:46:32.692671 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 35644 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:46:32.694053 sshd-session[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:46:32.698680 systemd-logind[1456]: New session 2 of user core. Sep 8 23:46:32.714802 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 8 23:46:32.768435 sshd[1587]: Connection closed by 10.0.0.1 port 35644
Sep 8 23:46:32.769636 sshd-session[1585]: pam_unix(sshd:session): session closed for user core
Sep 8 23:46:32.784047 systemd[1]: Started sshd@2-10.0.0.54:22-10.0.0.1:35654.service - OpenSSH per-connection server daemon (10.0.0.1:35654).
Sep 8 23:46:32.784498 systemd[1]: sshd@1-10.0.0.54:22-10.0.0.1:35644.service: Deactivated successfully.
Sep 8 23:46:32.786155 systemd[1]: session-2.scope: Deactivated successfully.
Sep 8 23:46:32.789356 systemd-logind[1456]: Session 2 logged out. Waiting for processes to exit.
Sep 8 23:46:32.790499 systemd-logind[1456]: Removed session 2.
Sep 8 23:46:32.826179 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 35654 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:46:32.827676 sshd-session[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:46:32.832199 systemd-logind[1456]: New session 3 of user core.
Sep 8 23:46:32.851796 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 8 23:46:32.902419 sshd[1595]: Connection closed by 10.0.0.1 port 35654
Sep 8 23:46:32.902271 sshd-session[1590]: pam_unix(sshd:session): session closed for user core
Sep 8 23:46:32.912851 systemd[1]: sshd@2-10.0.0.54:22-10.0.0.1:35654.service: Deactivated successfully.
Sep 8 23:46:32.914354 systemd[1]: session-3.scope: Deactivated successfully.
Sep 8 23:46:32.915027 systemd-logind[1456]: Session 3 logged out. Waiting for processes to exit.
Sep 8 23:46:32.922922 systemd[1]: Started sshd@3-10.0.0.54:22-10.0.0.1:35666.service - OpenSSH per-connection server daemon (10.0.0.1:35666).
Sep 8 23:46:32.924136 systemd-logind[1456]: Removed session 3.
Sep 8 23:46:32.965204 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 35666 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:46:32.966524 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:46:32.971493 systemd-logind[1456]: New session 4 of user core.
Sep 8 23:46:32.986811 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 8 23:46:33.041684 sshd[1603]: Connection closed by 10.0.0.1 port 35666
Sep 8 23:46:33.042134 sshd-session[1600]: pam_unix(sshd:session): session closed for user core
Sep 8 23:46:33.061903 systemd[1]: sshd@3-10.0.0.54:22-10.0.0.1:35666.service: Deactivated successfully.
Sep 8 23:46:33.065139 systemd[1]: session-4.scope: Deactivated successfully.
Sep 8 23:46:33.066470 systemd-logind[1456]: Session 4 logged out. Waiting for processes to exit.
Sep 8 23:46:33.076923 systemd[1]: Started sshd@4-10.0.0.54:22-10.0.0.1:35680.service - OpenSSH per-connection server daemon (10.0.0.1:35680).
Sep 8 23:46:33.078149 systemd-logind[1456]: Removed session 4.
Sep 8 23:46:33.117012 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 35680 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:46:33.118339 sshd-session[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:46:33.123375 systemd-logind[1456]: New session 5 of user core.
Sep 8 23:46:33.129835 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 8 23:46:33.191308 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 8 23:46:33.191637 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 8 23:46:33.213729 sudo[1612]: pam_unix(sudo:session): session closed for user root
Sep 8 23:46:33.215249 sshd[1611]: Connection closed by 10.0.0.1 port 35680
Sep 8 23:46:33.215720 sshd-session[1608]: pam_unix(sshd:session): session closed for user core
Sep 8 23:46:33.228877 systemd[1]: sshd@4-10.0.0.54:22-10.0.0.1:35680.service: Deactivated successfully.
Sep 8 23:46:33.231018 systemd[1]: session-5.scope: Deactivated successfully.
Sep 8 23:46:33.231770 systemd-logind[1456]: Session 5 logged out. Waiting for processes to exit.
Sep 8 23:46:33.237925 systemd[1]: Started sshd@5-10.0.0.54:22-10.0.0.1:35694.service - OpenSSH per-connection server daemon (10.0.0.1:35694).
Sep 8 23:46:33.238898 systemd-logind[1456]: Removed session 5.
Sep 8 23:46:33.277089 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 35694 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:46:33.278663 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:46:33.283707 systemd-logind[1456]: New session 6 of user core.
Sep 8 23:46:33.296793 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 8 23:46:33.348882 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 8 23:46:33.349193 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 8 23:46:33.352525 sudo[1622]: pam_unix(sudo:session): session closed for user root
Sep 8 23:46:33.357513 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 8 23:46:33.357833 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 8 23:46:33.371943 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 8 23:46:33.396332 augenrules[1644]: No rules
Sep 8 23:46:33.397916 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 8 23:46:33.398166 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 8 23:46:33.399202 sudo[1621]: pam_unix(sudo:session): session closed for user root
Sep 8 23:46:33.401253 sshd[1620]: Connection closed by 10.0.0.1 port 35694
Sep 8 23:46:33.401095 sshd-session[1617]: pam_unix(sshd:session): session closed for user core
Sep 8 23:46:33.416906 systemd[1]: sshd@5-10.0.0.54:22-10.0.0.1:35694.service: Deactivated successfully.
Sep 8 23:46:33.420631 systemd[1]: session-6.scope: Deactivated successfully.
Sep 8 23:46:33.422052 systemd-logind[1456]: Session 6 logged out. Waiting for processes to exit.
Sep 8 23:46:33.434015 systemd[1]: Started sshd@6-10.0.0.54:22-10.0.0.1:35702.service - OpenSSH per-connection server daemon (10.0.0.1:35702).
Sep 8 23:46:33.435051 systemd-logind[1456]: Removed session 6.
Sep 8 23:46:33.472526 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 35702 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:46:33.473942 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:46:33.478702 systemd-logind[1456]: New session 7 of user core.
Sep 8 23:46:33.490776 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 8 23:46:33.541766 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 8 23:46:33.542041 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 8 23:46:33.846944 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 8 23:46:33.847018 (dockerd)[1677]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 8 23:46:34.079797 dockerd[1677]: time="2025-09-08T23:46:34.079741331Z" level=info msg="Starting up"
Sep 8 23:46:34.283303 systemd[1]: var-lib-docker-metacopy\x2dcheck2011939297-merged.mount: Deactivated successfully.
Sep 8 23:46:34.292123 dockerd[1677]: time="2025-09-08T23:46:34.292056163Z" level=info msg="Loading containers: start."
Sep 8 23:46:34.446764 kernel: Initializing XFRM netlink socket
Sep 8 23:46:34.530459 systemd-networkd[1381]: docker0: Link UP
Sep 8 23:46:34.574314 dockerd[1677]: time="2025-09-08T23:46:34.574161206Z" level=info msg="Loading containers: done."
Sep 8 23:46:34.598547 dockerd[1677]: time="2025-09-08T23:46:34.598457594Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 8 23:46:34.598738 dockerd[1677]: time="2025-09-08T23:46:34.598574289Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Sep 8 23:46:34.598898 dockerd[1677]: time="2025-09-08T23:46:34.598775596Z" level=info msg="Daemon has completed initialization"
Sep 8 23:46:34.635901 dockerd[1677]: time="2025-09-08T23:46:34.635564883Z" level=info msg="API listen on /run/docker.sock"
Sep 8 23:46:34.635792 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 8 23:46:35.241748 containerd[1478]: time="2025-09-08T23:46:35.241700642Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\""
Sep 8 23:46:35.972414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2580301266.mount: Deactivated successfully.
Sep 8 23:46:37.115895 containerd[1478]: time="2025-09-08T23:46:37.115847364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:37.116315 containerd[1478]: time="2025-09-08T23:46:37.116271292Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352615"
Sep 8 23:46:37.117463 containerd[1478]: time="2025-09-08T23:46:37.117417455Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:37.120450 containerd[1478]: time="2025-09-08T23:46:37.120410279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:37.121649 containerd[1478]: time="2025-09-08T23:46:37.121613869Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 1.879868615s"
Sep 8 23:46:37.121696 containerd[1478]: time="2025-09-08T23:46:37.121658336Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\""
Sep 8 23:46:37.122831 containerd[1478]: time="2025-09-08T23:46:37.122804937Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\""
Sep 8 23:46:38.275974 containerd[1478]: time="2025-09-08T23:46:38.275894612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:38.276660 containerd[1478]: time="2025-09-08T23:46:38.276611796Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536979"
Sep 8 23:46:38.277686 containerd[1478]: time="2025-09-08T23:46:38.277639347Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:38.280567 containerd[1478]: time="2025-09-08T23:46:38.280525274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:38.282566 containerd[1478]: time="2025-09-08T23:46:38.281768419Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.158929184s"
Sep 8 23:46:38.282566 containerd[1478]: time="2025-09-08T23:46:38.281800729Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\""
Sep 8 23:46:38.282822 containerd[1478]: time="2025-09-08T23:46:38.282769046Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\""
Sep 8 23:46:39.363471 containerd[1478]: time="2025-09-08T23:46:39.363411321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:39.363993 containerd[1478]: time="2025-09-08T23:46:39.363947788Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292016"
Sep 8 23:46:39.364923 containerd[1478]: time="2025-09-08T23:46:39.364888668Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:39.367914 containerd[1478]: time="2025-09-08T23:46:39.367875518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:39.369854 containerd[1478]: time="2025-09-08T23:46:39.368993811Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.086164812s"
Sep 8 23:46:39.369854 containerd[1478]: time="2025-09-08T23:46:39.369033666Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\""
Sep 8 23:46:39.369854 containerd[1478]: time="2025-09-08T23:46:39.369496966Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\""
Sep 8 23:46:39.610855 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 8 23:46:39.623855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 8 23:46:39.731992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 8 23:46:39.736092 (kubelet)[1943]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 8 23:46:39.775618 kubelet[1943]: E0908 23:46:39.775540 1943 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 8 23:46:39.778905 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 8 23:46:39.779058 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 8 23:46:39.779562 systemd[1]: kubelet.service: Consumed 142ms CPU time, 110.2M memory peak.
Sep 8 23:46:40.402071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2194573829.mount: Deactivated successfully.
Sep 8 23:46:40.781311 containerd[1478]: time="2025-09-08T23:46:40.781174156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:40.782461 containerd[1478]: time="2025-09-08T23:46:40.782409465Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199961"
Sep 8 23:46:40.783595 containerd[1478]: time="2025-09-08T23:46:40.783553160Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:40.785717 containerd[1478]: time="2025-09-08T23:46:40.785668376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:40.786478 containerd[1478]: time="2025-09-08T23:46:40.786284534Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 1.416758443s"
Sep 8 23:46:40.786478 containerd[1478]: time="2025-09-08T23:46:40.786318370Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\""
Sep 8 23:46:40.786900 containerd[1478]: time="2025-09-08T23:46:40.786876232Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 8 23:46:41.282087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1192640862.mount: Deactivated successfully.
Sep 8 23:46:42.024159 containerd[1478]: time="2025-09-08T23:46:42.024110233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:42.025155 containerd[1478]: time="2025-09-08T23:46:42.024899041Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Sep 8 23:46:42.025915 containerd[1478]: time="2025-09-08T23:46:42.025881628Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:42.030205 containerd[1478]: time="2025-09-08T23:46:42.030147850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:42.031147 containerd[1478]: time="2025-09-08T23:46:42.030956535Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.24399499s"
Sep 8 23:46:42.031147 containerd[1478]: time="2025-09-08T23:46:42.030989264Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Sep 8 23:46:42.031470 containerd[1478]: time="2025-09-08T23:46:42.031442919Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 8 23:46:42.693620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1237823964.mount: Deactivated successfully.
Sep 8 23:46:42.702985 containerd[1478]: time="2025-09-08T23:46:42.702927004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:42.703352 containerd[1478]: time="2025-09-08T23:46:42.703308896Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 8 23:46:42.704238 containerd[1478]: time="2025-09-08T23:46:42.704192538Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:42.706700 containerd[1478]: time="2025-09-08T23:46:42.706663096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:42.707446 containerd[1478]: time="2025-09-08T23:46:42.707409357Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 675.932432ms"
Sep 8 23:46:42.707495 containerd[1478]: time="2025-09-08T23:46:42.707443323Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 8 23:46:42.708218 containerd[1478]: time="2025-09-08T23:46:42.707986824Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 8 23:46:43.153750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2756437820.mount: Deactivated successfully.
Sep 8 23:46:44.574533 containerd[1478]: time="2025-09-08T23:46:44.574465900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:44.574983 containerd[1478]: time="2025-09-08T23:46:44.574937880Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465297"
Sep 8 23:46:44.576032 containerd[1478]: time="2025-09-08T23:46:44.575995024Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:44.580501 containerd[1478]: time="2025-09-08T23:46:44.579506608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:44.581450 containerd[1478]: time="2025-09-08T23:46:44.581298591Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 1.873275364s"
Sep 8 23:46:44.581450 containerd[1478]: time="2025-09-08T23:46:44.581336399Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Sep 8 23:46:49.451621 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 8 23:46:49.451876 systemd[1]: kubelet.service: Consumed 142ms CPU time, 110.2M memory peak.
Sep 8 23:46:49.469835 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 8 23:46:49.494184 systemd[1]: Reload requested from client PID 2100 ('systemctl') (unit session-7.scope)...
Sep 8 23:46:49.494199 systemd[1]: Reloading...
Sep 8 23:46:49.561634 zram_generator::config[2145]: No configuration found.
Sep 8 23:46:49.711401 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 8 23:46:49.784172 systemd[1]: Reloading finished in 289 ms.
Sep 8 23:46:49.826898 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 8 23:46:49.829836 systemd[1]: kubelet.service: Deactivated successfully.
Sep 8 23:46:49.830060 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 8 23:46:49.830116 systemd[1]: kubelet.service: Consumed 87ms CPU time, 95.1M memory peak.
Sep 8 23:46:49.831836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 8 23:46:49.947853 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 8 23:46:49.951581 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 8 23:46:49.985294 kubelet[2191]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 8 23:46:49.985294 kubelet[2191]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 8 23:46:49.985294 kubelet[2191]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 8 23:46:49.985756 kubelet[2191]: I0908 23:46:49.985699 2191 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 8 23:46:50.833969 kubelet[2191]: I0908 23:46:50.833922 2191 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 8 23:46:50.833969 kubelet[2191]: I0908 23:46:50.833958 2191 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 8 23:46:50.834203 kubelet[2191]: I0908 23:46:50.834188 2191 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 8 23:46:50.852116 kubelet[2191]: E0908 23:46:50.852060 2191 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 8 23:46:50.852668 kubelet[2191]: I0908 23:46:50.852575 2191 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 8 23:46:50.860102 kubelet[2191]: E0908 23:46:50.860053 2191 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 8 23:46:50.860102 kubelet[2191]: I0908 23:46:50.860090 2191 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 8 23:46:50.863998 kubelet[2191]: I0908 23:46:50.863964 2191 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 8 23:46:50.865633 kubelet[2191]: I0908 23:46:50.865562 2191 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 8 23:46:50.865791 kubelet[2191]: I0908 23:46:50.865629 2191 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 8 23:46:50.865881 kubelet[2191]: I0908 23:46:50.865848 2191 topology_manager.go:138] "Creating topology manager with none policy"
Sep 8 23:46:50.865881 kubelet[2191]: I0908 23:46:50.865858 2191 container_manager_linux.go:303] "Creating device plugin manager"
Sep 8 23:46:50.866073 kubelet[2191]: I0908 23:46:50.866047 2191 state_mem.go:36] "Initialized new in-memory state store"
Sep 8 23:46:50.868994 kubelet[2191]: I0908 23:46:50.868970 2191 kubelet.go:480] "Attempting to sync node with API server"
Sep 8 23:46:50.869021 kubelet[2191]: I0908 23:46:50.869002 2191 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 8 23:46:50.869056 kubelet[2191]: I0908 23:46:50.869030 2191 kubelet.go:386] "Adding apiserver pod source"
Sep 8 23:46:50.870216 kubelet[2191]: I0908 23:46:50.870025 2191 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 8 23:46:50.871012 kubelet[2191]: E0908 23:46:50.870978 2191 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 8 23:46:50.871102 kubelet[2191]: I0908 23:46:50.871083 2191 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Sep 8 23:46:50.871836 kubelet[2191]: I0908 23:46:50.871804 2191 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 8 23:46:50.871927 kubelet[2191]: E0908 23:46:50.871807 2191 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 8 23:46:50.871972 kubelet[2191]: W0908 23:46:50.871922 2191 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 8 23:46:50.875967 kubelet[2191]: I0908 23:46:50.875912 2191 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 8 23:46:50.876080 kubelet[2191]: I0908 23:46:50.876068 2191 server.go:1289] "Started kubelet"
Sep 8 23:46:50.877651 kubelet[2191]: I0908 23:46:50.876230 2191 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 8 23:46:50.877651 kubelet[2191]: I0908 23:46:50.877232 2191 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 8 23:46:50.877651 kubelet[2191]: I0908 23:46:50.877293 2191 server.go:317] "Adding debug handlers to kubelet server"
Sep 8 23:46:50.877651 kubelet[2191]: I0908 23:46:50.877517 2191 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 8 23:46:50.878139 kubelet[2191]: I0908 23:46:50.878103 2191 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 8 23:46:50.878545 kubelet[2191]: I0908 23:46:50.878513 2191 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 8 23:46:50.879910 kubelet[2191]: E0908 23:46:50.879888 2191 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 8 23:46:50.880017 kubelet[2191]: I0908 23:46:50.880005 2191 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 8 23:46:50.880263 kubelet[2191]: I0908 23:46:50.880244 2191 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 8 23:46:50.880374 kubelet[2191]: I0908 23:46:50.880363 2191 reconciler.go:26] "Reconciler: start to sync state"
Sep 8 23:46:50.880827 kubelet[2191]: E0908 23:46:50.880787 2191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="200ms"
Sep 8 23:46:50.881102 kubelet[2191]: E0908 23:46:50.879915 2191 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.54:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.54:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186373717d57c728 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-08 23:46:50.876028712 +0000 UTC m=+0.921104765,LastTimestamp:2025-09-08 23:46:50.876028712 +0000 UTC m=+0.921104765,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 8 23:46:50.881819 kubelet[2191]: I0908 23:46:50.881787 2191 factory.go:223] Registration of the systemd container factory successfully
Sep 8 23:46:50.882035 kubelet[2191]: I0908 23:46:50.882008 2191 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 8 23:46:50.882565 kubelet[2191]: E0908 23:46:50.882533 2191 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 8 23:46:50.882839 kubelet[2191]: E0908 23:46:50.882810 2191 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 8 23:46:50.883552 kubelet[2191]: I0908 23:46:50.883529 2191 factory.go:223] Registration of the containerd container factory successfully
Sep 8 23:46:50.894920 kubelet[2191]: I0908 23:46:50.894891 2191 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 8 23:46:50.894920 kubelet[2191]: I0908 23:46:50.894908 2191 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 8 23:46:50.894920 kubelet[2191]: I0908 23:46:50.894926 2191 state_mem.go:36] "Initialized new in-memory state store"
Sep 8 23:46:50.899551 kubelet[2191]: I0908 23:46:50.899488 2191 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 8 23:46:50.900507 kubelet[2191]: I0908 23:46:50.900479 2191 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 8 23:46:50.900507 kubelet[2191]: I0908 23:46:50.900500 2191 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 8 23:46:50.900632 kubelet[2191]: I0908 23:46:50.900520 2191 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 8 23:46:50.900632 kubelet[2191]: I0908 23:46:50.900529 2191 kubelet.go:2436] "Starting kubelet main sync loop" Sep 8 23:46:50.900632 kubelet[2191]: E0908 23:46:50.900573 2191 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 8 23:46:50.901077 kubelet[2191]: E0908 23:46:50.901054 2191 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 8 23:46:50.980400 kubelet[2191]: E0908 23:46:50.980365 2191 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:46:51.001604 kubelet[2191]: E0908 23:46:51.001534 2191 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 8 23:46:51.080812 kubelet[2191]: E0908 23:46:51.080777 2191 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:46:51.081396 kubelet[2191]: E0908 23:46:51.081365 2191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="400ms" Sep 8 23:46:51.104843 kubelet[2191]: I0908 23:46:51.104712 2191 policy_none.go:49] "None policy: Start" Sep 8 23:46:51.104843 kubelet[2191]: I0908 23:46:51.104772 2191 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 8 23:46:51.104843 kubelet[2191]: I0908 23:46:51.104788 2191 state_mem.go:35] "Initializing new in-memory state store" Sep 8 23:46:51.111265 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Sep 8 23:46:51.126732 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 8 23:46:51.130011 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 8 23:46:51.142629 kubelet[2191]: E0908 23:46:51.142535 2191 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 8 23:46:51.142793 kubelet[2191]: I0908 23:46:51.142764 2191 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 8 23:46:51.142824 kubelet[2191]: I0908 23:46:51.142782 2191 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 8 23:46:51.143043 kubelet[2191]: I0908 23:46:51.143015 2191 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 8 23:46:51.143772 kubelet[2191]: E0908 23:46:51.143743 2191 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 8 23:46:51.143798 kubelet[2191]: E0908 23:46:51.143792 2191 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 8 23:46:51.212199 systemd[1]: Created slice kubepods-burstable-podf1a69877431ec2ea85340e11cbf6d503.slice - libcontainer container kubepods-burstable-podf1a69877431ec2ea85340e11cbf6d503.slice. Sep 8 23:46:51.235229 kubelet[2191]: E0908 23:46:51.235192 2191 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:46:51.238621 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. 
Sep 8 23:46:51.244400 kubelet[2191]: I0908 23:46:51.244360 2191 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:46:51.244789 kubelet[2191]: E0908 23:46:51.244764 2191 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Sep 8 23:46:51.250982 kubelet[2191]: E0908 23:46:51.250946 2191 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:46:51.254601 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. Sep 8 23:46:51.256267 kubelet[2191]: E0908 23:46:51.256234 2191 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:46:51.284552 kubelet[2191]: I0908 23:46:51.284503 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:46:51.284552 kubelet[2191]: I0908 23:46:51.284545 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:46:51.284704 kubelet[2191]: I0908 23:46:51.284565 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 8 23:46:51.284704 kubelet[2191]: I0908 23:46:51.284581 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1a69877431ec2ea85340e11cbf6d503-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f1a69877431ec2ea85340e11cbf6d503\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:46:51.284704 kubelet[2191]: I0908 23:46:51.284616 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:46:51.284704 kubelet[2191]: I0908 23:46:51.284633 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:46:51.284704 kubelet[2191]: I0908 23:46:51.284649 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:46:51.284819 kubelet[2191]: I0908 23:46:51.284662 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1a69877431ec2ea85340e11cbf6d503-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f1a69877431ec2ea85340e11cbf6d503\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:46:51.284819 kubelet[2191]: I0908 23:46:51.284674 2191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1a69877431ec2ea85340e11cbf6d503-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f1a69877431ec2ea85340e11cbf6d503\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:46:51.446345 kubelet[2191]: I0908 23:46:51.446311 2191 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:46:51.446708 kubelet[2191]: E0908 23:46:51.446666 2191 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Sep 8 23:46:51.482447 kubelet[2191]: E0908 23:46:51.482401 2191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="800ms" Sep 8 23:46:51.536487 containerd[1478]: time="2025-09-08T23:46:51.536437385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f1a69877431ec2ea85340e11cbf6d503,Namespace:kube-system,Attempt:0,}" Sep 8 23:46:51.552182 containerd[1478]: time="2025-09-08T23:46:51.552131630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 8 23:46:51.557870 containerd[1478]: time="2025-09-08T23:46:51.557824645Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 8 23:46:51.848480 kubelet[2191]: I0908 23:46:51.848371 2191 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:46:51.848752 kubelet[2191]: E0908 23:46:51.848707 2191 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Sep 8 23:46:52.029972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1075075713.mount: Deactivated successfully. Sep 8 23:46:52.034144 containerd[1478]: time="2025-09-08T23:46:52.034102446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:46:52.036252 containerd[1478]: time="2025-09-08T23:46:52.036212847Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 8 23:46:52.037040 containerd[1478]: time="2025-09-08T23:46:52.036977777Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:46:52.038256 containerd[1478]: time="2025-09-08T23:46:52.038224240Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:46:52.039694 containerd[1478]: time="2025-09-08T23:46:52.039655452Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 8 23:46:52.040724 containerd[1478]: time="2025-09-08T23:46:52.040419743Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:46:52.041512 containerd[1478]: time="2025-09-08T23:46:52.041468671Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 8 23:46:52.043649 containerd[1478]: time="2025-09-08T23:46:52.043618706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:46:52.044563 containerd[1478]: time="2025-09-08T23:46:52.044532188Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 508.007067ms" Sep 8 23:46:52.045915 containerd[1478]: time="2025-09-08T23:46:52.045879216Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 493.664566ms" Sep 8 23:46:52.052338 containerd[1478]: time="2025-09-08T23:46:52.052294642Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 494.393768ms" Sep 8 23:46:52.126491 kubelet[2191]: E0908 23:46:52.126366 2191 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: 
Get \"https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 8 23:46:52.146831 containerd[1478]: time="2025-09-08T23:46:52.146525901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:46:52.146831 containerd[1478]: time="2025-09-08T23:46:52.146760914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:46:52.146831 containerd[1478]: time="2025-09-08T23:46:52.146772980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:46:52.147417 containerd[1478]: time="2025-09-08T23:46:52.147338058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:46:52.147492 containerd[1478]: time="2025-09-08T23:46:52.147289873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:46:52.147492 containerd[1478]: time="2025-09-08T23:46:52.147345409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:46:52.147492 containerd[1478]: time="2025-09-08T23:46:52.147360992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:46:52.147571 containerd[1478]: time="2025-09-08T23:46:52.147455444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:46:52.147896 containerd[1478]: time="2025-09-08T23:46:52.147653579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:46:52.147962 containerd[1478]: time="2025-09-08T23:46:52.147927148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:46:52.148011 containerd[1478]: time="2025-09-08T23:46:52.147978929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:46:52.148163 containerd[1478]: time="2025-09-08T23:46:52.148115494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:46:52.166895 systemd[1]: Started cri-containerd-1c150171976d17e43d28ae0aa26378d9a34f1b5b0e8ce6612953ef571b347e4b.scope - libcontainer container 1c150171976d17e43d28ae0aa26378d9a34f1b5b0e8ce6612953ef571b347e4b. Sep 8 23:46:52.171573 systemd[1]: Started cri-containerd-674428cc01c40169fdf7b6a04fe9cf6bdfb9daccb42956d2a939e784c2f58fd1.scope - libcontainer container 674428cc01c40169fdf7b6a04fe9cf6bdfb9daccb42956d2a939e784c2f58fd1. Sep 8 23:46:52.173557 systemd[1]: Started cri-containerd-b7e75a1e2ed444fc05d1619366ca3f76bad99c7f53532577f176c77d45843e03.scope - libcontainer container b7e75a1e2ed444fc05d1619366ca3f76bad99c7f53532577f176c77d45843e03. 
Sep 8 23:46:52.211425 containerd[1478]: time="2025-09-08T23:46:52.211381081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f1a69877431ec2ea85340e11cbf6d503,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c150171976d17e43d28ae0aa26378d9a34f1b5b0e8ce6612953ef571b347e4b\"" Sep 8 23:46:52.211689 containerd[1478]: time="2025-09-08T23:46:52.211401897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7e75a1e2ed444fc05d1619366ca3f76bad99c7f53532577f176c77d45843e03\"" Sep 8 23:46:52.214144 containerd[1478]: time="2025-09-08T23:46:52.214099790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"674428cc01c40169fdf7b6a04fe9cf6bdfb9daccb42956d2a939e784c2f58fd1\"" Sep 8 23:46:52.218530 containerd[1478]: time="2025-09-08T23:46:52.218491956Z" level=info msg="CreateContainer within sandbox \"b7e75a1e2ed444fc05d1619366ca3f76bad99c7f53532577f176c77d45843e03\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 8 23:46:52.220418 containerd[1478]: time="2025-09-08T23:46:52.220362230Z" level=info msg="CreateContainer within sandbox \"674428cc01c40169fdf7b6a04fe9cf6bdfb9daccb42956d2a939e784c2f58fd1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 8 23:46:52.222791 containerd[1478]: time="2025-09-08T23:46:52.222637323Z" level=info msg="CreateContainer within sandbox \"1c150171976d17e43d28ae0aa26378d9a34f1b5b0e8ce6612953ef571b347e4b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 8 23:46:52.233919 containerd[1478]: time="2025-09-08T23:46:52.233857725Z" level=info msg="CreateContainer within sandbox \"b7e75a1e2ed444fc05d1619366ca3f76bad99c7f53532577f176c77d45843e03\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b51b11e0a7176de8122ceafe076d6d6b95644a758a60cbea23e95450cb477dff\"" Sep 8 23:46:52.234798 containerd[1478]: time="2025-09-08T23:46:52.234706480Z" level=info msg="StartContainer for \"b51b11e0a7176de8122ceafe076d6d6b95644a758a60cbea23e95450cb477dff\"" Sep 8 23:46:52.236419 kubelet[2191]: E0908 23:46:52.236389 2191 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 8 23:46:52.242075 containerd[1478]: time="2025-09-08T23:46:52.242036586Z" level=info msg="CreateContainer within sandbox \"674428cc01c40169fdf7b6a04fe9cf6bdfb9daccb42956d2a939e784c2f58fd1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"55d159087b6c3968c02863876f2d0471af8429aeecbda025e2d755cee3a44c60\"" Sep 8 23:46:52.242695 containerd[1478]: time="2025-09-08T23:46:52.242566464Z" level=info msg="StartContainer for \"55d159087b6c3968c02863876f2d0471af8429aeecbda025e2d755cee3a44c60\"" Sep 8 23:46:52.243852 containerd[1478]: time="2025-09-08T23:46:52.243823075Z" level=info msg="CreateContainer within sandbox \"1c150171976d17e43d28ae0aa26378d9a34f1b5b0e8ce6612953ef571b347e4b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e77cfea3fa2a945080f306100b82711823144f7e3f7debd70f1e2253eab2a7de\"" Sep 8 23:46:52.246004 containerd[1478]: time="2025-09-08T23:46:52.244200765Z" level=info msg="StartContainer for \"e77cfea3fa2a945080f306100b82711823144f7e3f7debd70f1e2253eab2a7de\"" Sep 8 23:46:52.266757 systemd[1]: Started cri-containerd-b51b11e0a7176de8122ceafe076d6d6b95644a758a60cbea23e95450cb477dff.scope - libcontainer container b51b11e0a7176de8122ceafe076d6d6b95644a758a60cbea23e95450cb477dff. 
Sep 8 23:46:52.273781 systemd[1]: Started cri-containerd-55d159087b6c3968c02863876f2d0471af8429aeecbda025e2d755cee3a44c60.scope - libcontainer container 55d159087b6c3968c02863876f2d0471af8429aeecbda025e2d755cee3a44c60. Sep 8 23:46:52.275258 systemd[1]: Started cri-containerd-e77cfea3fa2a945080f306100b82711823144f7e3f7debd70f1e2253eab2a7de.scope - libcontainer container e77cfea3fa2a945080f306100b82711823144f7e3f7debd70f1e2253eab2a7de. Sep 8 23:46:52.283750 kubelet[2191]: E0908 23:46:52.283706 2191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="1.6s" Sep 8 23:46:52.301549 kubelet[2191]: E0908 23:46:52.301492 2191 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 8 23:46:52.308745 containerd[1478]: time="2025-09-08T23:46:52.308698312Z" level=info msg="StartContainer for \"b51b11e0a7176de8122ceafe076d6d6b95644a758a60cbea23e95450cb477dff\" returns successfully" Sep 8 23:46:52.322815 containerd[1478]: time="2025-09-08T23:46:52.322747658Z" level=info msg="StartContainer for \"e77cfea3fa2a945080f306100b82711823144f7e3f7debd70f1e2253eab2a7de\" returns successfully" Sep 8 23:46:52.322931 containerd[1478]: time="2025-09-08T23:46:52.322853777Z" level=info msg="StartContainer for \"55d159087b6c3968c02863876f2d0471af8429aeecbda025e2d755cee3a44c60\" returns successfully" Sep 8 23:46:52.649920 kubelet[2191]: I0908 23:46:52.649891 2191 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:46:52.906755 kubelet[2191]: E0908 23:46:52.906655 2191 kubelet.go:3305] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:46:52.909716 kubelet[2191]: E0908 23:46:52.909690 2191 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:46:52.910962 kubelet[2191]: E0908 23:46:52.910938 2191 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:46:53.913050 kubelet[2191]: E0908 23:46:53.913017 2191 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:46:53.913050 kubelet[2191]: E0908 23:46:53.913034 2191 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:46:54.194361 kubelet[2191]: E0908 23:46:54.194242 2191 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 8 23:46:54.253018 kubelet[2191]: I0908 23:46:54.252964 2191 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 8 23:46:54.253018 kubelet[2191]: E0908 23:46:54.253007 2191 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 8 23:46:54.281486 kubelet[2191]: I0908 23:46:54.281425 2191 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 8 23:46:54.295622 kubelet[2191]: E0908 23:46:54.295501 2191 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 8 23:46:54.295622 kubelet[2191]: I0908 
23:46:54.295541 2191 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 8 23:46:54.297907 kubelet[2191]: E0908 23:46:54.297879 2191 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 8 23:46:54.297907 kubelet[2191]: I0908 23:46:54.297907 2191 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:46:54.303571 kubelet[2191]: E0908 23:46:54.303524 2191 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:46:54.872074 kubelet[2191]: I0908 23:46:54.872015 2191 apiserver.go:52] "Watching apiserver" Sep 8 23:46:54.880694 kubelet[2191]: I0908 23:46:54.880652 2191 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 8 23:46:56.401977 systemd[1]: Reload requested from client PID 2476 ('systemctl') (unit session-7.scope)... Sep 8 23:46:56.401996 systemd[1]: Reloading... Sep 8 23:46:56.470621 zram_generator::config[2520]: No configuration found. Sep 8 23:46:56.575127 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:46:56.658762 systemd[1]: Reloading finished in 256 ms. Sep 8 23:46:56.677364 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:46:56.689983 systemd[1]: kubelet.service: Deactivated successfully. Sep 8 23:46:56.690239 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 8 23:46:56.690283 systemd[1]: kubelet.service: Consumed 1.308s CPU time, 130.6M memory peak. Sep 8 23:46:56.700956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:46:56.805652 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:46:56.809715 (kubelet)[2562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 8 23:46:56.841344 kubelet[2562]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 8 23:46:56.841344 kubelet[2562]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 8 23:46:56.841344 kubelet[2562]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 8 23:46:56.841719 kubelet[2562]: I0908 23:46:56.841355 2562 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 8 23:46:56.849800 kubelet[2562]: I0908 23:46:56.848535 2562 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 8 23:46:56.849800 kubelet[2562]: I0908 23:46:56.848615 2562 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 8 23:46:56.849800 kubelet[2562]: I0908 23:46:56.848941 2562 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 8 23:46:56.852189 kubelet[2562]: I0908 23:46:56.851925 2562 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 8 23:46:56.856836 kubelet[2562]: I0908 23:46:56.856807 2562 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 8 23:46:56.860950 kubelet[2562]: E0908 23:46:56.859705 2562 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 8 23:46:56.860950 kubelet[2562]: I0908 23:46:56.859738 2562 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 8 23:46:56.863105 kubelet[2562]: I0908 23:46:56.863088 2562 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 8 23:46:56.863463 kubelet[2562]: I0908 23:46:56.863432 2562 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 8 23:46:56.863684 kubelet[2562]: I0908 23:46:56.863528 2562 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 8 23:46:56.863822 kubelet[2562]: I0908 23:46:56.863808 2562 topology_manager.go:138] "Creating topology manager with none policy"
Sep 8 23:46:56.863877 kubelet[2562]: I0908 23:46:56.863869 2562 container_manager_linux.go:303] "Creating device plugin manager"
Sep 8 23:46:56.863971 kubelet[2562]: I0908 23:46:56.863960 2562 state_mem.go:36] "Initialized new in-memory state store"
Sep 8 23:46:56.864193 kubelet[2562]: I0908 23:46:56.864182 2562 kubelet.go:480] "Attempting to sync node with API server"
Sep 8 23:46:56.865765 kubelet[2562]: I0908 23:46:56.865745 2562 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 8 23:46:56.865874 kubelet[2562]: I0908 23:46:56.865864 2562 kubelet.go:386] "Adding apiserver pod source"
Sep 8 23:46:56.865929 kubelet[2562]: I0908 23:46:56.865921 2562 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 8 23:46:56.878963 kubelet[2562]: I0908 23:46:56.878934 2562 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Sep 8 23:46:56.880834 kubelet[2562]: I0908 23:46:56.880694 2562 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 8 23:46:56.882782 kubelet[2562]: I0908 23:46:56.882764 2562 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 8 23:46:56.882854 kubelet[2562]: I0908 23:46:56.882801 2562 server.go:1289] "Started kubelet"
Sep 8 23:46:56.884375 kubelet[2562]: I0908 23:46:56.884229 2562 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 8 23:46:56.885175 kubelet[2562]: I0908 23:46:56.885056 2562 server.go:317] "Adding debug handlers to kubelet server"
Sep 8 23:46:56.889481 kubelet[2562]: I0908 23:46:56.885821 2562 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 8 23:46:56.889481 kubelet[2562]: I0908 23:46:56.885967 2562 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 8 23:46:56.889481 kubelet[2562]: I0908 23:46:56.886235 2562 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 8 23:46:56.889481 kubelet[2562]: I0908 23:46:56.886331 2562 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 8 23:46:56.889481 kubelet[2562]: I0908 23:46:56.886529 2562 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 8 23:46:56.889481 kubelet[2562]: I0908 23:46:56.886656 2562 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 8 23:46:56.889481 kubelet[2562]: I0908 23:46:56.886827 2562 reconciler.go:26] "Reconciler: start to sync state"
Sep 8 23:46:56.891472 kubelet[2562]: I0908 23:46:56.891440 2562 factory.go:223] Registration of the systemd container factory successfully
Sep 8 23:46:56.892059 kubelet[2562]: I0908 23:46:56.891872 2562 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 8 23:46:56.893532 kubelet[2562]: E0908 23:46:56.893496 2562 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 8 23:46:56.893931 kubelet[2562]: I0908 23:46:56.893910 2562 factory.go:223] Registration of the containerd container factory successfully
Sep 8 23:46:56.900331 kubelet[2562]: I0908 23:46:56.900285 2562 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 8 23:46:56.901675 kubelet[2562]: I0908 23:46:56.901653 2562 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 8 23:46:56.901736 kubelet[2562]: I0908 23:46:56.901677 2562 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 8 23:46:56.901736 kubelet[2562]: I0908 23:46:56.901699 2562 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 8 23:46:56.901736 kubelet[2562]: I0908 23:46:56.901709 2562 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 8 23:46:56.901736 kubelet[2562]: E0908 23:46:56.901750 2562 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 8 23:46:56.927176 kubelet[2562]: I0908 23:46:56.927087 2562 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 8 23:46:56.928203 kubelet[2562]: I0908 23:46:56.927347 2562 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 8 23:46:56.928203 kubelet[2562]: I0908 23:46:56.927381 2562 state_mem.go:36] "Initialized new in-memory state store"
Sep 8 23:46:56.928203 kubelet[2562]: I0908 23:46:56.927507 2562 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 8 23:46:56.928203 kubelet[2562]: I0908 23:46:56.927516 2562 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 8 23:46:56.928203 kubelet[2562]: I0908 23:46:56.927532 2562 policy_none.go:49] "None policy: Start"
Sep 8 23:46:56.928203 kubelet[2562]: I0908 23:46:56.927543 2562 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 8 23:46:56.928203 kubelet[2562]: I0908 23:46:56.927551 2562 state_mem.go:35] "Initializing new in-memory state store"
Sep 8 23:46:56.928203 kubelet[2562]: I0908 23:46:56.927647 2562 state_mem.go:75] "Updated machine memory state"
Sep 8 23:46:56.934621 kubelet[2562]: E0908 23:46:56.931882 2562 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 8 23:46:56.934621 kubelet[2562]: I0908 23:46:56.932030 2562 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 8 23:46:56.934621 kubelet[2562]: I0908 23:46:56.932041 2562 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 8 23:46:56.934621 kubelet[2562]: I0908 23:46:56.932164 2562 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 8 23:46:56.934621 kubelet[2562]: E0908 23:46:56.933544 2562 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 8 23:46:57.003311 kubelet[2562]: I0908 23:46:57.003268 2562 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 8 23:46:57.003452 kubelet[2562]: I0908 23:46:57.003330 2562 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:46:57.004399 kubelet[2562]: I0908 23:46:57.003282 2562 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 8 23:46:57.038444 kubelet[2562]: I0908 23:46:57.038419 2562 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 8 23:46:57.057416 kubelet[2562]: I0908 23:46:57.057379 2562 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 8 23:46:57.057629 kubelet[2562]: I0908 23:46:57.057615 2562 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 8 23:46:57.088070 kubelet[2562]: I0908 23:46:57.088016 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1a69877431ec2ea85340e11cbf6d503-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f1a69877431ec2ea85340e11cbf6d503\") " pod="kube-system/kube-apiserver-localhost"
Sep 8 23:46:57.088070 kubelet[2562]: I0908 23:46:57.088054 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1a69877431ec2ea85340e11cbf6d503-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f1a69877431ec2ea85340e11cbf6d503\") " pod="kube-system/kube-apiserver-localhost"
Sep 8 23:46:57.088070 kubelet[2562]: I0908 23:46:57.088080 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1a69877431ec2ea85340e11cbf6d503-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f1a69877431ec2ea85340e11cbf6d503\") " pod="kube-system/kube-apiserver-localhost"
Sep 8 23:46:57.088335 kubelet[2562]: I0908 23:46:57.088099 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:46:57.088335 kubelet[2562]: I0908 23:46:57.088164 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:46:57.088335 kubelet[2562]: I0908 23:46:57.088231 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:46:57.088335 kubelet[2562]: I0908 23:46:57.088252 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:46:57.088335 kubelet[2562]: I0908 23:46:57.088267 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost"
Sep 8 23:46:57.088441 kubelet[2562]: I0908 23:46:57.088283 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:46:57.435828 sudo[2600]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 8 23:46:57.436101 sudo[2600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Sep 8 23:46:57.869442 kubelet[2562]: I0908 23:46:57.867256 2562 apiserver.go:52] "Watching apiserver"
Sep 8 23:46:57.867828 sudo[2600]: pam_unix(sudo:session): session closed for user root
Sep 8 23:46:57.888934 kubelet[2562]: I0908 23:46:57.887309 2562 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 8 23:46:57.917659 kubelet[2562]: I0908 23:46:57.916324 2562 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:46:57.917659 kubelet[2562]: I0908 23:46:57.916404 2562 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 8 23:46:57.925030 kubelet[2562]: E0908 23:46:57.924864 2562 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 8 23:46:57.926299 kubelet[2562]: E0908 23:46:57.926158 2562 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:46:57.972446 kubelet[2562]: I0908 23:46:57.972386 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.972360424 podStartE2EDuration="972.360424ms" podCreationTimestamp="2025-09-08 23:46:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:46:57.957437628 +0000 UTC m=+1.144321421" watchObservedRunningTime="2025-09-08 23:46:57.972360424 +0000 UTC m=+1.159244177"
Sep 8 23:46:57.983828 kubelet[2562]: I0908 23:46:57.983779 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.98374913 podStartE2EDuration="983.74913ms" podCreationTimestamp="2025-09-08 23:46:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:46:57.973699882 +0000 UTC m=+1.160583675" watchObservedRunningTime="2025-09-08 23:46:57.98374913 +0000 UTC m=+1.170632963"
Sep 8 23:46:57.983987 kubelet[2562]: I0908 23:46:57.983930 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.983924985 podStartE2EDuration="983.924985ms" podCreationTimestamp="2025-09-08 23:46:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:46:57.981186679 +0000 UTC m=+1.168070472" watchObservedRunningTime="2025-09-08 23:46:57.983924985 +0000 UTC m=+1.170808778"
Sep 8 23:46:59.545898 sudo[1657]: pam_unix(sudo:session): session closed for user root
Sep 8 23:46:59.548461 sshd[1656]: Connection closed by 10.0.0.1 port 35702
Sep 8 23:46:59.549272 sshd-session[1652]: pam_unix(sshd:session): session closed for user core
Sep 8 23:46:59.552541 systemd[1]: sshd@6-10.0.0.54:22-10.0.0.1:35702.service: Deactivated successfully.
Sep 8 23:46:59.554639 systemd[1]: session-7.scope: Deactivated successfully.
Sep 8 23:46:59.555681 systemd[1]: session-7.scope: Consumed 7.097s CPU time, 255.8M memory peak.
Sep 8 23:46:59.556906 systemd-logind[1456]: Session 7 logged out. Waiting for processes to exit.
Sep 8 23:46:59.557901 systemd-logind[1456]: Removed session 7.
Sep 8 23:47:02.817781 kubelet[2562]: I0908 23:47:02.817748 2562 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 8 23:47:02.821887 containerd[1478]: time="2025-09-08T23:47:02.821839652Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 8 23:47:02.822199 kubelet[2562]: I0908 23:47:02.822135 2562 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 8 23:47:03.766780 systemd[1]: Created slice kubepods-besteffort-pod5e33e5c2_e412_4f73_8623_6a7e3bc12371.slice - libcontainer container kubepods-besteffort-pod5e33e5c2_e412_4f73_8623_6a7e3bc12371.slice.
Sep 8 23:47:03.794195 systemd[1]: Created slice kubepods-burstable-pod60d368f5_dbf6_4095_b4b7_4bd41b0cf789.slice - libcontainer container kubepods-burstable-pod60d368f5_dbf6_4095_b4b7_4bd41b0cf789.slice.
Sep 8 23:47:03.833610 kubelet[2562]: I0908 23:47:03.833167 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-cilium-config-path\") pod \"cilium-7wthb\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") " pod="kube-system/cilium-7wthb"
Sep 8 23:47:03.833610 kubelet[2562]: I0908 23:47:03.833203 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-host-proc-sys-kernel\") pod \"cilium-7wthb\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") " pod="kube-system/cilium-7wthb"
Sep 8 23:47:03.833610 kubelet[2562]: I0908 23:47:03.833220 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-cilium-run\") pod \"cilium-7wthb\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") " pod="kube-system/cilium-7wthb"
Sep 8 23:47:03.833610 kubelet[2562]: I0908 23:47:03.833236 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-bpf-maps\") pod \"cilium-7wthb\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") " pod="kube-system/cilium-7wthb"
Sep 8 23:47:03.833610 kubelet[2562]: I0908 23:47:03.833256 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-cilium-cgroup\") pod \"cilium-7wthb\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") " pod="kube-system/cilium-7wthb"
Sep 8 23:47:03.833610 kubelet[2562]: I0908 23:47:03.833270 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-cni-path\") pod \"cilium-7wthb\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") " pod="kube-system/cilium-7wthb"
Sep 8 23:47:03.834412 kubelet[2562]: I0908 23:47:03.833286 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vgsr\" (UniqueName: \"kubernetes.io/projected/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-kube-api-access-5vgsr\") pod \"cilium-7wthb\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") " pod="kube-system/cilium-7wthb"
Sep 8 23:47:03.834412 kubelet[2562]: I0908 23:47:03.833301 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5e33e5c2-e412-4f73-8623-6a7e3bc12371-kube-proxy\") pod \"kube-proxy-78knh\" (UID: \"5e33e5c2-e412-4f73-8623-6a7e3bc12371\") " pod="kube-system/kube-proxy-78knh"
Sep 8 23:47:03.834412 kubelet[2562]: I0908 23:47:03.833317 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-hostproc\") pod \"cilium-7wthb\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") " pod="kube-system/cilium-7wthb"
Sep 8 23:47:03.834412 kubelet[2562]: I0908 23:47:03.833331 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-lib-modules\") pod \"cilium-7wthb\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") " pod="kube-system/cilium-7wthb"
Sep 8 23:47:03.834412 kubelet[2562]: I0908 23:47:03.833345 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-xtables-lock\") pod \"cilium-7wthb\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") " pod="kube-system/cilium-7wthb"
Sep 8 23:47:03.834412 kubelet[2562]: I0908 23:47:03.833368 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-hubble-tls\") pod \"cilium-7wthb\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") " pod="kube-system/cilium-7wthb"
Sep 8 23:47:03.834558 kubelet[2562]: I0908 23:47:03.833383 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-host-proc-sys-net\") pod \"cilium-7wthb\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") " pod="kube-system/cilium-7wthb"
Sep 8 23:47:03.834558 kubelet[2562]: I0908 23:47:03.833398 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e33e5c2-e412-4f73-8623-6a7e3bc12371-xtables-lock\") pod \"kube-proxy-78knh\" (UID: \"5e33e5c2-e412-4f73-8623-6a7e3bc12371\") " pod="kube-system/kube-proxy-78knh"
Sep 8 23:47:03.834558 kubelet[2562]: I0908 23:47:03.833411 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e33e5c2-e412-4f73-8623-6a7e3bc12371-lib-modules\") pod \"kube-proxy-78knh\" (UID: \"5e33e5c2-e412-4f73-8623-6a7e3bc12371\") " pod="kube-system/kube-proxy-78knh"
Sep 8 23:47:03.834558 kubelet[2562]: I0908 23:47:03.833427 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjblv\" (UniqueName: \"kubernetes.io/projected/5e33e5c2-e412-4f73-8623-6a7e3bc12371-kube-api-access-bjblv\") pod \"kube-proxy-78knh\" (UID: \"5e33e5c2-e412-4f73-8623-6a7e3bc12371\") " pod="kube-system/kube-proxy-78knh"
Sep 8 23:47:03.834558 kubelet[2562]: I0908 23:47:03.833444 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-etc-cni-netd\") pod \"cilium-7wthb\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") " pod="kube-system/cilium-7wthb"
Sep 8 23:47:03.834778 kubelet[2562]: I0908 23:47:03.833472 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-clustermesh-secrets\") pod \"cilium-7wthb\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") " pod="kube-system/cilium-7wthb"
Sep 8 23:47:03.857655 systemd[1]: Created slice kubepods-besteffort-podc8f6a16f_2528_484c_bd4f_70ba36c490fb.slice - libcontainer container kubepods-besteffort-podc8f6a16f_2528_484c_bd4f_70ba36c490fb.slice.
Sep 8 23:47:03.934717 kubelet[2562]: I0908 23:47:03.934479 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c8f6a16f-2528-484c-bd4f-70ba36c490fb-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-9wf6g\" (UID: \"c8f6a16f-2528-484c-bd4f-70ba36c490fb\") " pod="kube-system/cilium-operator-6c4d7847fc-9wf6g"
Sep 8 23:47:03.934855 kubelet[2562]: I0908 23:47:03.934723 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtd68\" (UniqueName: \"kubernetes.io/projected/c8f6a16f-2528-484c-bd4f-70ba36c490fb-kube-api-access-mtd68\") pod \"cilium-operator-6c4d7847fc-9wf6g\" (UID: \"c8f6a16f-2528-484c-bd4f-70ba36c490fb\") " pod="kube-system/cilium-operator-6c4d7847fc-9wf6g"
Sep 8 23:47:04.094229 containerd[1478]: time="2025-09-08T23:47:04.094119099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-78knh,Uid:5e33e5c2-e412-4f73-8623-6a7e3bc12371,Namespace:kube-system,Attempt:0,}"
Sep 8 23:47:04.102939 containerd[1478]: time="2025-09-08T23:47:04.102896821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7wthb,Uid:60d368f5-dbf6-4095-b4b7-4bd41b0cf789,Namespace:kube-system,Attempt:0,}"
Sep 8 23:47:04.114100 containerd[1478]: time="2025-09-08T23:47:04.113767288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 8 23:47:04.114100 containerd[1478]: time="2025-09-08T23:47:04.113826417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 8 23:47:04.114100 containerd[1478]: time="2025-09-08T23:47:04.113853403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 8 23:47:04.114100 containerd[1478]: time="2025-09-08T23:47:04.113940957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 8 23:47:04.122700 containerd[1478]: time="2025-09-08T23:47:04.122377858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 8 23:47:04.122700 containerd[1478]: time="2025-09-08T23:47:04.122448741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 8 23:47:04.122700 containerd[1478]: time="2025-09-08T23:47:04.122463173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 8 23:47:04.122700 containerd[1478]: time="2025-09-08T23:47:04.122548848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 8 23:47:04.132783 systemd[1]: Started cri-containerd-3e5328b92af4a1b25197d3693d79a459e3e654735949d8323c76f1d38bb88ae0.scope - libcontainer container 3e5328b92af4a1b25197d3693d79a459e3e654735949d8323c76f1d38bb88ae0.
Sep 8 23:47:04.141205 systemd[1]: Started cri-containerd-b094be63b4deebbb74af97968843189dc44e948bea0894673e1aec9ad172d59c.scope - libcontainer container b094be63b4deebbb74af97968843189dc44e948bea0894673e1aec9ad172d59c.
Sep 8 23:47:04.156315 containerd[1478]: time="2025-09-08T23:47:04.156173556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-78knh,Uid:5e33e5c2-e412-4f73-8623-6a7e3bc12371,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e5328b92af4a1b25197d3693d79a459e3e654735949d8323c76f1d38bb88ae0\""
Sep 8 23:47:04.162214 containerd[1478]: time="2025-09-08T23:47:04.162167856Z" level=info msg="CreateContainer within sandbox \"3e5328b92af4a1b25197d3693d79a459e3e654735949d8323c76f1d38bb88ae0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 8 23:47:04.165146 containerd[1478]: time="2025-09-08T23:47:04.165073774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-9wf6g,Uid:c8f6a16f-2528-484c-bd4f-70ba36c490fb,Namespace:kube-system,Attempt:0,}"
Sep 8 23:47:04.172234 containerd[1478]: time="2025-09-08T23:47:04.172201401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7wthb,Uid:60d368f5-dbf6-4095-b4b7-4bd41b0cf789,Namespace:kube-system,Attempt:0,} returns sandbox id \"b094be63b4deebbb74af97968843189dc44e948bea0894673e1aec9ad172d59c\""
Sep 8 23:47:04.173866 containerd[1478]: time="2025-09-08T23:47:04.173826990Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 8 23:47:04.228206 containerd[1478]: time="2025-09-08T23:47:04.228069619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 8 23:47:04.228206 containerd[1478]: time="2025-09-08T23:47:04.228132745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 8 23:47:04.228206 containerd[1478]: time="2025-09-08T23:47:04.228154214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 8 23:47:04.228571 containerd[1478]: time="2025-09-08T23:47:04.228499034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 8 23:47:04.237279 containerd[1478]: time="2025-09-08T23:47:04.237218906Z" level=info msg="CreateContainer within sandbox \"3e5328b92af4a1b25197d3693d79a459e3e654735949d8323c76f1d38bb88ae0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ccb8b644547163fbe30c560b9423d9f2bff85dcad3c15f2ac12802d2b7461441\""
Sep 8 23:47:04.239367 containerd[1478]: time="2025-09-08T23:47:04.238289745Z" level=info msg="StartContainer for \"ccb8b644547163fbe30c560b9423d9f2bff85dcad3c15f2ac12802d2b7461441\""
Sep 8 23:47:04.248765 systemd[1]: Started cri-containerd-f195669664eb51ce39795aff600eba1847180874661eb509a80df4698f60c03e.scope - libcontainer container f195669664eb51ce39795aff600eba1847180874661eb509a80df4698f60c03e.
Sep 8 23:47:04.272797 systemd[1]: Started cri-containerd-ccb8b644547163fbe30c560b9423d9f2bff85dcad3c15f2ac12802d2b7461441.scope - libcontainer container ccb8b644547163fbe30c560b9423d9f2bff85dcad3c15f2ac12802d2b7461441.
Sep 8 23:47:04.289039 containerd[1478]: time="2025-09-08T23:47:04.288771704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-9wf6g,Uid:c8f6a16f-2528-484c-bd4f-70ba36c490fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"f195669664eb51ce39795aff600eba1847180874661eb509a80df4698f60c03e\""
Sep 8 23:47:04.303796 containerd[1478]: time="2025-09-08T23:47:04.303757495Z" level=info msg="StartContainer for \"ccb8b644547163fbe30c560b9423d9f2bff85dcad3c15f2ac12802d2b7461441\" returns successfully"
Sep 8 23:47:04.958298 kubelet[2562]: I0908 23:47:04.958064 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-78knh" podStartSLOduration=1.958047672 podStartE2EDuration="1.958047672s" podCreationTimestamp="2025-09-08 23:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:47:04.94766483 +0000 UTC m=+8.134548663" watchObservedRunningTime="2025-09-08 23:47:04.958047672 +0000 UTC m=+8.144931465"
Sep 8 23:47:09.535304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3819508304.mount: Deactivated successfully.
Sep 8 23:47:12.100616 update_engine[1457]: I20250908 23:47:12.100154 1457 update_attempter.cc:509] Updating boot flags...
Sep 8 23:47:12.150679 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2971) Sep 8 23:47:12.202046 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2969) Sep 8 23:47:12.904292 containerd[1478]: time="2025-09-08T23:47:12.903738336Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:47:12.904292 containerd[1478]: time="2025-09-08T23:47:12.903741375Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 8 23:47:12.904804 containerd[1478]: time="2025-09-08T23:47:12.904720629Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:47:12.906522 containerd[1478]: time="2025-09-08T23:47:12.906484158Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.732610231s" Sep 8 23:47:12.906685 containerd[1478]: time="2025-09-08T23:47:12.906656104Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 8 23:47:12.907646 containerd[1478]: time="2025-09-08T23:47:12.907617764Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 8 23:47:12.911107 containerd[1478]: time="2025-09-08T23:47:12.910497424Z" level=info msg="CreateContainer within sandbox \"b094be63b4deebbb74af97968843189dc44e948bea0894673e1aec9ad172d59c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 8 23:47:12.925810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3291344013.mount: Deactivated successfully. Sep 8 23:47:12.926096 containerd[1478]: time="2025-09-08T23:47:12.925933000Z" level=info msg="CreateContainer within sandbox \"b094be63b4deebbb74af97968843189dc44e948bea0894673e1aec9ad172d59c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ad6eb42b19f68d6c2cdb6e94b07e73fbd8afec3fbc44cecb3cbbf79764224865\"" Sep 8 23:47:12.927617 containerd[1478]: time="2025-09-08T23:47:12.926639220Z" level=info msg="StartContainer for \"ad6eb42b19f68d6c2cdb6e94b07e73fbd8afec3fbc44cecb3cbbf79764224865\"" Sep 8 23:47:12.958773 systemd[1]: Started cri-containerd-ad6eb42b19f68d6c2cdb6e94b07e73fbd8afec3fbc44cecb3cbbf79764224865.scope - libcontainer container ad6eb42b19f68d6c2cdb6e94b07e73fbd8afec3fbc44cecb3cbbf79764224865. Sep 8 23:47:12.979268 containerd[1478]: time="2025-09-08T23:47:12.979225947Z" level=info msg="StartContainer for \"ad6eb42b19f68d6c2cdb6e94b07e73fbd8afec3fbc44cecb3cbbf79764224865\" returns successfully" Sep 8 23:47:12.991752 systemd[1]: cri-containerd-ad6eb42b19f68d6c2cdb6e94b07e73fbd8afec3fbc44cecb3cbbf79764224865.scope: Deactivated successfully. 
Sep 8 23:47:13.172168 containerd[1478]: time="2025-09-08T23:47:13.156853894Z" level=info msg="shim disconnected" id=ad6eb42b19f68d6c2cdb6e94b07e73fbd8afec3fbc44cecb3cbbf79764224865 namespace=k8s.io Sep 8 23:47:13.172168 containerd[1478]: time="2025-09-08T23:47:13.172102547Z" level=warning msg="cleaning up after shim disconnected" id=ad6eb42b19f68d6c2cdb6e94b07e73fbd8afec3fbc44cecb3cbbf79764224865 namespace=k8s.io Sep 8 23:47:13.172168 containerd[1478]: time="2025-09-08T23:47:13.172119702Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:47:13.926676 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad6eb42b19f68d6c2cdb6e94b07e73fbd8afec3fbc44cecb3cbbf79764224865-rootfs.mount: Deactivated successfully. Sep 8 23:47:13.972527 containerd[1478]: time="2025-09-08T23:47:13.972477480Z" level=info msg="CreateContainer within sandbox \"b094be63b4deebbb74af97968843189dc44e948bea0894673e1aec9ad172d59c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 8 23:47:13.993014 containerd[1478]: time="2025-09-08T23:47:13.992973755Z" level=info msg="CreateContainer within sandbox \"b094be63b4deebbb74af97968843189dc44e948bea0894673e1aec9ad172d59c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e5a2c07deb3eedc70d062bf9d3baf99e52150ed9ff1b734404f9eacd5c80a796\"" Sep 8 23:47:13.996622 containerd[1478]: time="2025-09-08T23:47:13.995602865Z" level=info msg="StartContainer for \"e5a2c07deb3eedc70d062bf9d3baf99e52150ed9ff1b734404f9eacd5c80a796\"" Sep 8 23:47:14.030812 systemd[1]: Started cri-containerd-e5a2c07deb3eedc70d062bf9d3baf99e52150ed9ff1b734404f9eacd5c80a796.scope - libcontainer container e5a2c07deb3eedc70d062bf9d3baf99e52150ed9ff1b734404f9eacd5c80a796. 
Sep 8 23:47:14.062267 containerd[1478]: time="2025-09-08T23:47:14.062222340Z" level=info msg="StartContainer for \"e5a2c07deb3eedc70d062bf9d3baf99e52150ed9ff1b734404f9eacd5c80a796\" returns successfully" Sep 8 23:47:14.072887 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 8 23:47:14.073314 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:47:14.074097 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 8 23:47:14.083728 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 8 23:47:14.084112 systemd[1]: cri-containerd-e5a2c07deb3eedc70d062bf9d3baf99e52150ed9ff1b734404f9eacd5c80a796.scope: Deactivated successfully. Sep 8 23:47:14.098803 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:47:14.139345 containerd[1478]: time="2025-09-08T23:47:14.139277338Z" level=info msg="shim disconnected" id=e5a2c07deb3eedc70d062bf9d3baf99e52150ed9ff1b734404f9eacd5c80a796 namespace=k8s.io Sep 8 23:47:14.139345 containerd[1478]: time="2025-09-08T23:47:14.139336762Z" level=warning msg="cleaning up after shim disconnected" id=e5a2c07deb3eedc70d062bf9d3baf99e52150ed9ff1b734404f9eacd5c80a796 namespace=k8s.io Sep 8 23:47:14.139345 containerd[1478]: time="2025-09-08T23:47:14.139345479Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:47:14.477058 containerd[1478]: time="2025-09-08T23:47:14.476272028Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:47:14.477058 containerd[1478]: time="2025-09-08T23:47:14.476624851Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 8 23:47:14.477534 containerd[1478]: time="2025-09-08T23:47:14.477457982Z" level=info 
msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:47:14.479183 containerd[1478]: time="2025-09-08T23:47:14.479139920Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.571206774s" Sep 8 23:47:14.479283 containerd[1478]: time="2025-09-08T23:47:14.479182749Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 8 23:47:14.484566 containerd[1478]: time="2025-09-08T23:47:14.484529800Z" level=info msg="CreateContainer within sandbox \"f195669664eb51ce39795aff600eba1847180874661eb509a80df4698f60c03e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 8 23:47:14.495023 containerd[1478]: time="2025-09-08T23:47:14.494977011Z" level=info msg="CreateContainer within sandbox \"f195669664eb51ce39795aff600eba1847180874661eb509a80df4698f60c03e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4b474b56f5ab3c4aabdf4d8db86f5badb0e2e6c3c6bfd31b3b5c9640f8d9b2b9\"" Sep 8 23:47:14.496506 containerd[1478]: time="2025-09-08T23:47:14.495650346Z" level=info msg="StartContainer for \"4b474b56f5ab3c4aabdf4d8db86f5badb0e2e6c3c6bfd31b3b5c9640f8d9b2b9\"" Sep 8 23:47:14.523806 systemd[1]: Started cri-containerd-4b474b56f5ab3c4aabdf4d8db86f5badb0e2e6c3c6bfd31b3b5c9640f8d9b2b9.scope - libcontainer container 4b474b56f5ab3c4aabdf4d8db86f5badb0e2e6c3c6bfd31b3b5c9640f8d9b2b9. 
Sep 8 23:47:14.550980 containerd[1478]: time="2025-09-08T23:47:14.550938922Z" level=info msg="StartContainer for \"4b474b56f5ab3c4aabdf4d8db86f5badb0e2e6c3c6bfd31b3b5c9640f8d9b2b9\" returns successfully" Sep 8 23:47:14.924570 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5a2c07deb3eedc70d062bf9d3baf99e52150ed9ff1b734404f9eacd5c80a796-rootfs.mount: Deactivated successfully. Sep 8 23:47:14.975616 containerd[1478]: time="2025-09-08T23:47:14.975369480Z" level=info msg="CreateContainer within sandbox \"b094be63b4deebbb74af97968843189dc44e948bea0894673e1aec9ad172d59c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 8 23:47:14.978854 kubelet[2562]: I0908 23:47:14.978523 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-9wf6g" podStartSLOduration=1.788019828 podStartE2EDuration="11.978506778s" podCreationTimestamp="2025-09-08 23:47:03 +0000 UTC" firstStartedPulling="2025-09-08 23:47:04.289919623 +0000 UTC m=+7.476803416" lastFinishedPulling="2025-09-08 23:47:14.480406573 +0000 UTC m=+17.667290366" observedRunningTime="2025-09-08 23:47:14.977619302 +0000 UTC m=+18.164503135" watchObservedRunningTime="2025-09-08 23:47:14.978506778 +0000 UTC m=+18.165390571" Sep 8 23:47:14.998746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3792768024.mount: Deactivated successfully. 
Sep 8 23:47:14.999089 containerd[1478]: time="2025-09-08T23:47:14.998851031Z" level=info msg="CreateContainer within sandbox \"b094be63b4deebbb74af97968843189dc44e948bea0894673e1aec9ad172d59c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fb0e37305782736152e3f47b37aa4e891350a072e5d6dcace6e71982c4710627\"" Sep 8 23:47:14.999811 containerd[1478]: time="2025-09-08T23:47:14.999444588Z" level=info msg="StartContainer for \"fb0e37305782736152e3f47b37aa4e891350a072e5d6dcace6e71982c4710627\"" Sep 8 23:47:15.035852 systemd[1]: Started cri-containerd-fb0e37305782736152e3f47b37aa4e891350a072e5d6dcace6e71982c4710627.scope - libcontainer container fb0e37305782736152e3f47b37aa4e891350a072e5d6dcace6e71982c4710627. Sep 8 23:47:15.059356 containerd[1478]: time="2025-09-08T23:47:15.059290423Z" level=info msg="StartContainer for \"fb0e37305782736152e3f47b37aa4e891350a072e5d6dcace6e71982c4710627\" returns successfully" Sep 8 23:47:15.064000 systemd[1]: cri-containerd-fb0e37305782736152e3f47b37aa4e891350a072e5d6dcace6e71982c4710627.scope: Deactivated successfully. Sep 8 23:47:15.157150 containerd[1478]: time="2025-09-08T23:47:15.157082485Z" level=info msg="shim disconnected" id=fb0e37305782736152e3f47b37aa4e891350a072e5d6dcace6e71982c4710627 namespace=k8s.io Sep 8 23:47:15.157580 containerd[1478]: time="2025-09-08T23:47:15.157358254Z" level=warning msg="cleaning up after shim disconnected" id=fb0e37305782736152e3f47b37aa4e891350a072e5d6dcace6e71982c4710627 namespace=k8s.io Sep 8 23:47:15.157580 containerd[1478]: time="2025-09-08T23:47:15.157375249Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:47:15.923637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb0e37305782736152e3f47b37aa4e891350a072e5d6dcace6e71982c4710627-rootfs.mount: Deactivated successfully. 
Sep 8 23:47:15.979158 containerd[1478]: time="2025-09-08T23:47:15.979118240Z" level=info msg="CreateContainer within sandbox \"b094be63b4deebbb74af97968843189dc44e948bea0894673e1aec9ad172d59c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 8 23:47:15.990351 containerd[1478]: time="2025-09-08T23:47:15.990314357Z" level=info msg="CreateContainer within sandbox \"b094be63b4deebbb74af97968843189dc44e948bea0894673e1aec9ad172d59c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"41e3c533765d1305170e3c2ead896a0dadcaf3314ce238abfdad1fcb63d624de\"" Sep 8 23:47:15.990927 containerd[1478]: time="2025-09-08T23:47:15.990899767Z" level=info msg="StartContainer for \"41e3c533765d1305170e3c2ead896a0dadcaf3314ce238abfdad1fcb63d624de\"" Sep 8 23:47:16.018747 systemd[1]: Started cri-containerd-41e3c533765d1305170e3c2ead896a0dadcaf3314ce238abfdad1fcb63d624de.scope - libcontainer container 41e3c533765d1305170e3c2ead896a0dadcaf3314ce238abfdad1fcb63d624de. Sep 8 23:47:16.042277 systemd[1]: cri-containerd-41e3c533765d1305170e3c2ead896a0dadcaf3314ce238abfdad1fcb63d624de.scope: Deactivated successfully. Sep 8 23:47:16.043995 containerd[1478]: time="2025-09-08T23:47:16.043716662Z" level=info msg="StartContainer for \"41e3c533765d1305170e3c2ead896a0dadcaf3314ce238abfdad1fcb63d624de\" returns successfully" Sep 8 23:47:16.058849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41e3c533765d1305170e3c2ead896a0dadcaf3314ce238abfdad1fcb63d624de-rootfs.mount: Deactivated successfully. 
Sep 8 23:47:16.067116 containerd[1478]: time="2025-09-08T23:47:16.067059588Z" level=info msg="shim disconnected" id=41e3c533765d1305170e3c2ead896a0dadcaf3314ce238abfdad1fcb63d624de namespace=k8s.io Sep 8 23:47:16.067116 containerd[1478]: time="2025-09-08T23:47:16.067108016Z" level=warning msg="cleaning up after shim disconnected" id=41e3c533765d1305170e3c2ead896a0dadcaf3314ce238abfdad1fcb63d624de namespace=k8s.io Sep 8 23:47:16.067116 containerd[1478]: time="2025-09-08T23:47:16.067115375Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:47:16.983917 containerd[1478]: time="2025-09-08T23:47:16.983855543Z" level=info msg="CreateContainer within sandbox \"b094be63b4deebbb74af97968843189dc44e948bea0894673e1aec9ad172d59c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 8 23:47:16.999693 containerd[1478]: time="2025-09-08T23:47:16.999556393Z" level=info msg="CreateContainer within sandbox \"b094be63b4deebbb74af97968843189dc44e948bea0894673e1aec9ad172d59c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0e9385b6cd42d2879c66bb5634f0f2a49f49d28604ec477dab7625c21a1a6c1c\"" Sep 8 23:47:17.000859 containerd[1478]: time="2025-09-08T23:47:17.000806974Z" level=info msg="StartContainer for \"0e9385b6cd42d2879c66bb5634f0f2a49f49d28604ec477dab7625c21a1a6c1c\"" Sep 8 23:47:17.028770 systemd[1]: Started cri-containerd-0e9385b6cd42d2879c66bb5634f0f2a49f49d28604ec477dab7625c21a1a6c1c.scope - libcontainer container 0e9385b6cd42d2879c66bb5634f0f2a49f49d28604ec477dab7625c21a1a6c1c. 
Sep 8 23:47:17.059577 containerd[1478]: time="2025-09-08T23:47:17.059460382Z" level=info msg="StartContainer for \"0e9385b6cd42d2879c66bb5634f0f2a49f49d28604ec477dab7625c21a1a6c1c\" returns successfully" Sep 8 23:47:17.184872 kubelet[2562]: I0908 23:47:17.184832 2562 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 8 23:47:17.215581 systemd[1]: Created slice kubepods-burstable-pod0f449741_2387_4e6e_8dbc_71f74c251c4f.slice - libcontainer container kubepods-burstable-pod0f449741_2387_4e6e_8dbc_71f74c251c4f.slice. Sep 8 23:47:17.225939 systemd[1]: Created slice kubepods-burstable-poda6ba9e14_ab71_42ca_acba_154df2f65eea.slice - libcontainer container kubepods-burstable-poda6ba9e14_ab71_42ca_acba_154df2f65eea.slice. Sep 8 23:47:17.230548 kubelet[2562]: I0908 23:47:17.230456 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f449741-2387-4e6e-8dbc-71f74c251c4f-config-volume\") pod \"coredns-674b8bbfcf-5c4pc\" (UID: \"0f449741-2387-4e6e-8dbc-71f74c251c4f\") " pod="kube-system/coredns-674b8bbfcf-5c4pc" Sep 8 23:47:17.230548 kubelet[2562]: I0908 23:47:17.230497 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84n2r\" (UniqueName: \"kubernetes.io/projected/0f449741-2387-4e6e-8dbc-71f74c251c4f-kube-api-access-84n2r\") pod \"coredns-674b8bbfcf-5c4pc\" (UID: \"0f449741-2387-4e6e-8dbc-71f74c251c4f\") " pod="kube-system/coredns-674b8bbfcf-5c4pc" Sep 8 23:47:17.331618 kubelet[2562]: I0908 23:47:17.330917 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6ba9e14-ab71-42ca-acba-154df2f65eea-config-volume\") pod \"coredns-674b8bbfcf-vf255\" (UID: \"a6ba9e14-ab71-42ca-acba-154df2f65eea\") " pod="kube-system/coredns-674b8bbfcf-vf255" Sep 8 23:47:17.331618 kubelet[2562]: 
I0908 23:47:17.330960 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnx9t\" (UniqueName: \"kubernetes.io/projected/a6ba9e14-ab71-42ca-acba-154df2f65eea-kube-api-access-xnx9t\") pod \"coredns-674b8bbfcf-vf255\" (UID: \"a6ba9e14-ab71-42ca-acba-154df2f65eea\") " pod="kube-system/coredns-674b8bbfcf-vf255" Sep 8 23:47:17.520764 containerd[1478]: time="2025-09-08T23:47:17.520717409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5c4pc,Uid:0f449741-2387-4e6e-8dbc-71f74c251c4f,Namespace:kube-system,Attempt:0,}" Sep 8 23:47:17.531554 containerd[1478]: time="2025-09-08T23:47:17.531494771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vf255,Uid:a6ba9e14-ab71-42ca-acba-154df2f65eea,Namespace:kube-system,Attempt:0,}" Sep 8 23:47:19.082358 systemd-networkd[1381]: cilium_host: Link UP Sep 8 23:47:19.082474 systemd-networkd[1381]: cilium_net: Link UP Sep 8 23:47:19.082629 systemd-networkd[1381]: cilium_net: Gained carrier Sep 8 23:47:19.082756 systemd-networkd[1381]: cilium_host: Gained carrier Sep 8 23:47:19.106846 systemd-networkd[1381]: cilium_net: Gained IPv6LL Sep 8 23:47:19.169139 systemd-networkd[1381]: cilium_vxlan: Link UP Sep 8 23:47:19.169146 systemd-networkd[1381]: cilium_vxlan: Gained carrier Sep 8 23:47:19.449299 kernel: NET: Registered PF_ALG protocol family Sep 8 23:47:19.610773 systemd-networkd[1381]: cilium_host: Gained IPv6LL Sep 8 23:47:20.056341 systemd-networkd[1381]: lxc_health: Link UP Sep 8 23:47:20.064084 systemd-networkd[1381]: lxc_health: Gained carrier Sep 8 23:47:20.132171 kubelet[2562]: I0908 23:47:20.132107 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7wthb" podStartSLOduration=8.397934816 podStartE2EDuration="17.132092561s" podCreationTimestamp="2025-09-08 23:47:03 +0000 UTC" firstStartedPulling="2025-09-08 23:47:04.173287712 +0000 UTC m=+7.360171465" 
lastFinishedPulling="2025-09-08 23:47:12.907445417 +0000 UTC m=+16.094329210" observedRunningTime="2025-09-08 23:47:17.999648877 +0000 UTC m=+21.186532670" watchObservedRunningTime="2025-09-08 23:47:20.132092561 +0000 UTC m=+23.318976354" Sep 8 23:47:20.314871 systemd-networkd[1381]: cilium_vxlan: Gained IPv6LL Sep 8 23:47:20.569388 systemd-networkd[1381]: lxcba4d9e538aa4: Link UP Sep 8 23:47:20.576639 kernel: eth0: renamed from tmp2c836 Sep 8 23:47:20.594634 kernel: eth0: renamed from tmp02ef0 Sep 8 23:47:20.600005 systemd-networkd[1381]: lxcba4d9e538aa4: Gained carrier Sep 8 23:47:20.600171 systemd-networkd[1381]: lxcaa81dda121f9: Link UP Sep 8 23:47:20.600505 systemd-networkd[1381]: lxcaa81dda121f9: Gained carrier Sep 8 23:47:21.338733 systemd-networkd[1381]: lxc_health: Gained IPv6LL Sep 8 23:47:21.658734 systemd-networkd[1381]: lxcba4d9e538aa4: Gained IPv6LL Sep 8 23:47:22.298738 systemd-networkd[1381]: lxcaa81dda121f9: Gained IPv6LL Sep 8 23:47:23.237120 systemd[1]: Started sshd@7-10.0.0.54:22-10.0.0.1:40904.service - OpenSSH per-connection server daemon (10.0.0.1:40904). Sep 8 23:47:23.286735 sshd[3806]: Accepted publickey for core from 10.0.0.1 port 40904 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:47:23.287612 sshd-session[3806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:47:23.292297 systemd-logind[1456]: New session 8 of user core. Sep 8 23:47:23.300764 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 8 23:47:23.436454 sshd[3808]: Connection closed by 10.0.0.1 port 40904 Sep 8 23:47:23.436957 sshd-session[3806]: pam_unix(sshd:session): session closed for user core Sep 8 23:47:23.440117 systemd[1]: sshd@7-10.0.0.54:22-10.0.0.1:40904.service: Deactivated successfully. Sep 8 23:47:23.443623 systemd[1]: session-8.scope: Deactivated successfully. Sep 8 23:47:23.444653 systemd-logind[1456]: Session 8 logged out. Waiting for processes to exit. 
Sep 8 23:47:23.445648 systemd-logind[1456]: Removed session 8. Sep 8 23:47:24.183930 containerd[1478]: time="2025-09-08T23:47:24.183682824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:47:24.183930 containerd[1478]: time="2025-09-08T23:47:24.183749814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:47:24.183930 containerd[1478]: time="2025-09-08T23:47:24.183761172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:47:24.183930 containerd[1478]: time="2025-09-08T23:47:24.183843720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:47:24.183930 containerd[1478]: time="2025-09-08T23:47:24.183706380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:47:24.183930 containerd[1478]: time="2025-09-08T23:47:24.183795167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:47:24.183930 containerd[1478]: time="2025-09-08T23:47:24.183806766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:47:24.190430 containerd[1478]: time="2025-09-08T23:47:24.188301119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:47:24.215826 systemd[1]: Started cri-containerd-02ef0fcb9ee2b3bd927fe60ab410a637692c876b37966bf1c3e8a7457ac47117.scope - libcontainer container 02ef0fcb9ee2b3bd927fe60ab410a637692c876b37966bf1c3e8a7457ac47117. 
Sep 8 23:47:24.217645 systemd[1]: Started cri-containerd-2c836a0ad31bc6ad8c89c296d3684fbbb5b2e400425dcab9073e21bb67a9f05e.scope - libcontainer container 2c836a0ad31bc6ad8c89c296d3684fbbb5b2e400425dcab9073e21bb67a9f05e. Sep 8 23:47:24.228624 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 8 23:47:24.231080 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 8 23:47:24.251046 containerd[1478]: time="2025-09-08T23:47:24.250997169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5c4pc,Uid:0f449741-2387-4e6e-8dbc-71f74c251c4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c836a0ad31bc6ad8c89c296d3684fbbb5b2e400425dcab9073e21bb67a9f05e\"" Sep 8 23:47:24.251647 containerd[1478]: time="2025-09-08T23:47:24.251488339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vf255,Uid:a6ba9e14-ab71-42ca-acba-154df2f65eea,Namespace:kube-system,Attempt:0,} returns sandbox id \"02ef0fcb9ee2b3bd927fe60ab410a637692c876b37966bf1c3e8a7457ac47117\"" Sep 8 23:47:24.257713 containerd[1478]: time="2025-09-08T23:47:24.257675688Z" level=info msg="CreateContainer within sandbox \"2c836a0ad31bc6ad8c89c296d3684fbbb5b2e400425dcab9073e21bb67a9f05e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 8 23:47:24.258741 containerd[1478]: time="2025-09-08T23:47:24.258698020Z" level=info msg="CreateContainer within sandbox \"02ef0fcb9ee2b3bd927fe60ab410a637692c876b37966bf1c3e8a7457ac47117\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 8 23:47:24.276204 containerd[1478]: time="2025-09-08T23:47:24.276149427Z" level=info msg="CreateContainer within sandbox \"2c836a0ad31bc6ad8c89c296d3684fbbb5b2e400425dcab9073e21bb67a9f05e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5e328c5d988bbb6c807b307d35b11c087af7b3c420959debeb17a98c77dd5ac0\"" Sep 
8 23:47:24.276670 containerd[1478]: time="2025-09-08T23:47:24.276644876Z" level=info msg="StartContainer for \"5e328c5d988bbb6c807b307d35b11c087af7b3c420959debeb17a98c77dd5ac0\"" Sep 8 23:47:24.277619 containerd[1478]: time="2025-09-08T23:47:24.277052017Z" level=info msg="CreateContainer within sandbox \"02ef0fcb9ee2b3bd927fe60ab410a637692c876b37966bf1c3e8a7457ac47117\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f7f869d5e8fc498a4ca650ee8d1e99308ea7d18bc8684020cd88dc7ae7a02c28\"" Sep 8 23:47:24.277958 containerd[1478]: time="2025-09-08T23:47:24.277752476Z" level=info msg="StartContainer for \"f7f869d5e8fc498a4ca650ee8d1e99308ea7d18bc8684020cd88dc7ae7a02c28\"" Sep 8 23:47:24.301775 systemd[1]: Started cri-containerd-f7f869d5e8fc498a4ca650ee8d1e99308ea7d18bc8684020cd88dc7ae7a02c28.scope - libcontainer container f7f869d5e8fc498a4ca650ee8d1e99308ea7d18bc8684020cd88dc7ae7a02c28. Sep 8 23:47:24.304238 systemd[1]: Started cri-containerd-5e328c5d988bbb6c807b307d35b11c087af7b3c420959debeb17a98c77dd5ac0.scope - libcontainer container 5e328c5d988bbb6c807b307d35b11c087af7b3c420959debeb17a98c77dd5ac0. 
Sep 8 23:47:24.335946 containerd[1478]: time="2025-09-08T23:47:24.335894023Z" level=info msg="StartContainer for \"f7f869d5e8fc498a4ca650ee8d1e99308ea7d18bc8684020cd88dc7ae7a02c28\" returns successfully" Sep 8 23:47:24.336070 containerd[1478]: time="2025-09-08T23:47:24.335890823Z" level=info msg="StartContainer for \"5e328c5d988bbb6c807b307d35b11c087af7b3c420959debeb17a98c77dd5ac0\" returns successfully" Sep 8 23:47:25.046921 kubelet[2562]: I0908 23:47:25.045937 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vf255" podStartSLOduration=22.044572749 podStartE2EDuration="22.044572749s" podCreationTimestamp="2025-09-08 23:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:47:25.023738922 +0000 UTC m=+28.210622715" watchObservedRunningTime="2025-09-08 23:47:25.044572749 +0000 UTC m=+28.231456582" Sep 8 23:47:25.063382 kubelet[2562]: I0908 23:47:25.062995 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5c4pc" podStartSLOduration=22.062976545 podStartE2EDuration="22.062976545s" podCreationTimestamp="2025-09-08 23:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:47:25.062481451 +0000 UTC m=+28.249365244" watchObservedRunningTime="2025-09-08 23:47:25.062976545 +0000 UTC m=+28.249860338" Sep 8 23:47:28.466894 systemd[1]: Started sshd@8-10.0.0.54:22-10.0.0.1:40918.service - OpenSSH per-connection server daemon (10.0.0.1:40918). 
Sep 8 23:47:28.520165 sshd[3999]: Accepted publickey for core from 10.0.0.1 port 40918 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:47:28.521049 sshd-session[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:47:28.527855 systemd-logind[1456]: New session 9 of user core. Sep 8 23:47:28.535837 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 8 23:47:28.668747 sshd[4001]: Connection closed by 10.0.0.1 port 40918 Sep 8 23:47:28.669077 sshd-session[3999]: pam_unix(sshd:session): session closed for user core Sep 8 23:47:28.673150 systemd[1]: sshd@8-10.0.0.54:22-10.0.0.1:40918.service: Deactivated successfully. Sep 8 23:47:28.675775 systemd[1]: session-9.scope: Deactivated successfully. Sep 8 23:47:28.676518 systemd-logind[1456]: Session 9 logged out. Waiting for processes to exit. Sep 8 23:47:28.678019 systemd-logind[1456]: Removed session 9. Sep 8 23:47:33.684579 systemd[1]: Started sshd@9-10.0.0.54:22-10.0.0.1:50644.service - OpenSSH per-connection server daemon (10.0.0.1:50644). Sep 8 23:47:33.737760 sshd[4016]: Accepted publickey for core from 10.0.0.1 port 50644 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:47:33.740126 sshd-session[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:47:33.746689 systemd-logind[1456]: New session 10 of user core. Sep 8 23:47:33.751782 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 8 23:47:33.877734 sshd[4018]: Connection closed by 10.0.0.1 port 50644 Sep 8 23:47:33.879234 sshd-session[4016]: pam_unix(sshd:session): session closed for user core Sep 8 23:47:33.882703 systemd[1]: sshd@9-10.0.0.54:22-10.0.0.1:50644.service: Deactivated successfully. Sep 8 23:47:33.885870 systemd[1]: session-10.scope: Deactivated successfully. Sep 8 23:47:33.887137 systemd-logind[1456]: Session 10 logged out. Waiting for processes to exit. 
Sep 8 23:47:33.888500 systemd-logind[1456]: Removed session 10. Sep 8 23:47:38.890731 systemd[1]: Started sshd@10-10.0.0.54:22-10.0.0.1:50650.service - OpenSSH per-connection server daemon (10.0.0.1:50650). Sep 8 23:47:38.939151 sshd[4036]: Accepted publickey for core from 10.0.0.1 port 50650 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:47:38.941090 sshd-session[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:47:38.945015 systemd-logind[1456]: New session 11 of user core. Sep 8 23:47:38.954782 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 8 23:47:39.091250 sshd[4038]: Connection closed by 10.0.0.1 port 50650 Sep 8 23:47:39.091788 sshd-session[4036]: pam_unix(sshd:session): session closed for user core Sep 8 23:47:39.095854 systemd[1]: sshd@10-10.0.0.54:22-10.0.0.1:50650.service: Deactivated successfully. Sep 8 23:47:39.097581 systemd[1]: session-11.scope: Deactivated successfully. Sep 8 23:47:39.100405 systemd-logind[1456]: Session 11 logged out. Waiting for processes to exit. Sep 8 23:47:39.101347 systemd-logind[1456]: Removed session 11. Sep 8 23:47:44.108439 systemd[1]: Started sshd@11-10.0.0.54:22-10.0.0.1:58924.service - OpenSSH per-connection server daemon (10.0.0.1:58924). Sep 8 23:47:44.173745 sshd[4052]: Accepted publickey for core from 10.0.0.1 port 58924 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:47:44.175113 sshd-session[4052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:47:44.178962 systemd-logind[1456]: New session 12 of user core. Sep 8 23:47:44.185753 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 8 23:47:44.307389 sshd[4054]: Connection closed by 10.0.0.1 port 58924 Sep 8 23:47:44.307838 sshd-session[4052]: pam_unix(sshd:session): session closed for user core Sep 8 23:47:44.311193 systemd[1]: sshd@11-10.0.0.54:22-10.0.0.1:58924.service: Deactivated successfully. 
Sep 8 23:47:44.313501 systemd[1]: session-12.scope: Deactivated successfully.
Sep 8 23:47:44.314931 systemd-logind[1456]: Session 12 logged out. Waiting for processes to exit.
Sep 8 23:47:44.315763 systemd-logind[1456]: Removed session 12.
Sep 8 23:47:49.319708 systemd[1]: Started sshd@12-10.0.0.54:22-10.0.0.1:58940.service - OpenSSH per-connection server daemon (10.0.0.1:58940).
Sep 8 23:47:49.382468 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 58940 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:47:49.384250 sshd-session[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:47:49.391063 systemd-logind[1456]: New session 13 of user core.
Sep 8 23:47:49.398793 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 8 23:47:49.534131 sshd[4070]: Connection closed by 10.0.0.1 port 58940
Sep 8 23:47:49.532815 sshd-session[4068]: pam_unix(sshd:session): session closed for user core
Sep 8 23:47:49.537540 systemd[1]: sshd@12-10.0.0.54:22-10.0.0.1:58940.service: Deactivated successfully.
Sep 8 23:47:49.539893 systemd[1]: session-13.scope: Deactivated successfully.
Sep 8 23:47:49.540811 systemd-logind[1456]: Session 13 logged out. Waiting for processes to exit.
Sep 8 23:47:49.541643 systemd-logind[1456]: Removed session 13.
Sep 8 23:47:54.547936 systemd[1]: Started sshd@13-10.0.0.54:22-10.0.0.1:49298.service - OpenSSH per-connection server daemon (10.0.0.1:49298).
Sep 8 23:47:54.588451 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 49298 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:47:54.589690 sshd-session[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:47:54.594992 systemd-logind[1456]: New session 14 of user core.
Sep 8 23:47:54.604763 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 8 23:47:54.741355 sshd[4088]: Connection closed by 10.0.0.1 port 49298
Sep 8 23:47:54.740411 sshd-session[4086]: pam_unix(sshd:session): session closed for user core
Sep 8 23:47:54.752137 systemd[1]: sshd@13-10.0.0.54:22-10.0.0.1:49298.service: Deactivated successfully.
Sep 8 23:47:54.755027 systemd[1]: session-14.scope: Deactivated successfully.
Sep 8 23:47:54.758211 systemd-logind[1456]: Session 14 logged out. Waiting for processes to exit.
Sep 8 23:47:54.764972 systemd[1]: Started sshd@14-10.0.0.54:22-10.0.0.1:49312.service - OpenSSH per-connection server daemon (10.0.0.1:49312).
Sep 8 23:47:54.770240 systemd-logind[1456]: Removed session 14.
Sep 8 23:47:54.806770 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 49312 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:47:54.808522 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:47:54.814697 systemd-logind[1456]: New session 15 of user core.
Sep 8 23:47:54.821782 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 8 23:47:54.995832 sshd[4104]: Connection closed by 10.0.0.1 port 49312
Sep 8 23:47:54.996669 sshd-session[4101]: pam_unix(sshd:session): session closed for user core
Sep 8 23:47:55.010443 systemd[1]: sshd@14-10.0.0.54:22-10.0.0.1:49312.service: Deactivated successfully.
Sep 8 23:47:55.012136 systemd[1]: session-15.scope: Deactivated successfully.
Sep 8 23:47:55.014642 systemd-logind[1456]: Session 15 logged out. Waiting for processes to exit.
Sep 8 23:47:55.024964 systemd[1]: Started sshd@15-10.0.0.54:22-10.0.0.1:49314.service - OpenSSH per-connection server daemon (10.0.0.1:49314).
Sep 8 23:47:55.028680 systemd-logind[1456]: Removed session 15.
Sep 8 23:47:55.077340 sshd[4117]: Accepted publickey for core from 10.0.0.1 port 49314 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:47:55.078902 sshd-session[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:47:55.083231 systemd-logind[1456]: New session 16 of user core.
Sep 8 23:47:55.091802 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 8 23:47:55.206502 sshd[4120]: Connection closed by 10.0.0.1 port 49314
Sep 8 23:47:55.206940 sshd-session[4117]: pam_unix(sshd:session): session closed for user core
Sep 8 23:47:55.210317 systemd[1]: sshd@15-10.0.0.54:22-10.0.0.1:49314.service: Deactivated successfully.
Sep 8 23:47:55.212065 systemd[1]: session-16.scope: Deactivated successfully.
Sep 8 23:47:55.212637 systemd-logind[1456]: Session 16 logged out. Waiting for processes to exit.
Sep 8 23:47:55.213524 systemd-logind[1456]: Removed session 16.
Sep 8 23:48:00.218564 systemd[1]: Started sshd@16-10.0.0.54:22-10.0.0.1:41598.service - OpenSSH per-connection server daemon (10.0.0.1:41598).
Sep 8 23:48:00.262577 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 41598 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:48:00.262910 sshd-session[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:48:00.269191 systemd-logind[1456]: New session 17 of user core.
Sep 8 23:48:00.273809 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 8 23:48:00.398232 sshd[4137]: Connection closed by 10.0.0.1 port 41598
Sep 8 23:48:00.398599 sshd-session[4135]: pam_unix(sshd:session): session closed for user core
Sep 8 23:48:00.401864 systemd[1]: sshd@16-10.0.0.54:22-10.0.0.1:41598.service: Deactivated successfully.
Sep 8 23:48:00.403786 systemd[1]: session-17.scope: Deactivated successfully.
Sep 8 23:48:00.404575 systemd-logind[1456]: Session 17 logged out. Waiting for processes to exit.
Sep 8 23:48:00.405508 systemd-logind[1456]: Removed session 17.
Sep 8 23:48:05.410200 systemd[1]: Started sshd@17-10.0.0.54:22-10.0.0.1:41604.service - OpenSSH per-connection server daemon (10.0.0.1:41604).
Sep 8 23:48:05.456886 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 41604 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:48:05.458337 sshd-session[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:48:05.462192 systemd-logind[1456]: New session 18 of user core.
Sep 8 23:48:05.476812 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 8 23:48:05.594673 sshd[4155]: Connection closed by 10.0.0.1 port 41604
Sep 8 23:48:05.595250 sshd-session[4153]: pam_unix(sshd:session): session closed for user core
Sep 8 23:48:05.605055 systemd[1]: sshd@17-10.0.0.54:22-10.0.0.1:41604.service: Deactivated successfully.
Sep 8 23:48:05.608141 systemd[1]: session-18.scope: Deactivated successfully.
Sep 8 23:48:05.608848 systemd-logind[1456]: Session 18 logged out. Waiting for processes to exit.
Sep 8 23:48:05.617209 systemd[1]: Started sshd@18-10.0.0.54:22-10.0.0.1:41620.service - OpenSSH per-connection server daemon (10.0.0.1:41620).
Sep 8 23:48:05.618742 systemd-logind[1456]: Removed session 18.
Sep 8 23:48:05.664834 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 41620 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:48:05.666085 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:48:05.670311 systemd-logind[1456]: New session 19 of user core.
Sep 8 23:48:05.685774 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 8 23:48:05.875801 sshd[4170]: Connection closed by 10.0.0.1 port 41620
Sep 8 23:48:05.876180 sshd-session[4167]: pam_unix(sshd:session): session closed for user core
Sep 8 23:48:05.893067 systemd[1]: sshd@18-10.0.0.54:22-10.0.0.1:41620.service: Deactivated successfully.
Sep 8 23:48:05.894975 systemd[1]: session-19.scope: Deactivated successfully.
Sep 8 23:48:05.896737 systemd-logind[1456]: Session 19 logged out. Waiting for processes to exit.
Sep 8 23:48:05.904932 systemd[1]: Started sshd@19-10.0.0.54:22-10.0.0.1:41628.service - OpenSSH per-connection server daemon (10.0.0.1:41628).
Sep 8 23:48:05.906143 systemd-logind[1456]: Removed session 19.
Sep 8 23:48:05.949173 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 41628 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:48:05.950528 sshd-session[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:48:05.955574 systemd-logind[1456]: New session 20 of user core.
Sep 8 23:48:05.965783 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 8 23:48:06.571623 sshd[4184]: Connection closed by 10.0.0.1 port 41628
Sep 8 23:48:06.571164 sshd-session[4181]: pam_unix(sshd:session): session closed for user core
Sep 8 23:48:06.593163 systemd[1]: sshd@19-10.0.0.54:22-10.0.0.1:41628.service: Deactivated successfully.
Sep 8 23:48:06.595583 systemd[1]: session-20.scope: Deactivated successfully.
Sep 8 23:48:06.596837 systemd-logind[1456]: Session 20 logged out. Waiting for processes to exit.
Sep 8 23:48:06.608351 systemd[1]: Started sshd@20-10.0.0.54:22-10.0.0.1:41630.service - OpenSSH per-connection server daemon (10.0.0.1:41630).
Sep 8 23:48:06.609298 systemd-logind[1456]: Removed session 20.
Sep 8 23:48:06.652556 sshd[4204]: Accepted publickey for core from 10.0.0.1 port 41630 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:48:06.654242 sshd-session[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:48:06.659301 systemd-logind[1456]: New session 21 of user core.
Sep 8 23:48:06.674772 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 8 23:48:06.908281 sshd[4207]: Connection closed by 10.0.0.1 port 41630
Sep 8 23:48:06.908794 sshd-session[4204]: pam_unix(sshd:session): session closed for user core
Sep 8 23:48:06.923476 systemd[1]: sshd@20-10.0.0.54:22-10.0.0.1:41630.service: Deactivated successfully.
Sep 8 23:48:06.927092 systemd[1]: session-21.scope: Deactivated successfully.
Sep 8 23:48:06.929816 systemd-logind[1456]: Session 21 logged out. Waiting for processes to exit.
Sep 8 23:48:06.939144 systemd[1]: Started sshd@21-10.0.0.54:22-10.0.0.1:41638.service - OpenSSH per-connection server daemon (10.0.0.1:41638).
Sep 8 23:48:06.940163 systemd-logind[1456]: Removed session 21.
Sep 8 23:48:06.981479 sshd[4218]: Accepted publickey for core from 10.0.0.1 port 41638 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:48:06.982855 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:48:06.986912 systemd-logind[1456]: New session 22 of user core.
Sep 8 23:48:06.998793 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 8 23:48:07.113593 sshd[4221]: Connection closed by 10.0.0.1 port 41638
Sep 8 23:48:07.114003 sshd-session[4218]: pam_unix(sshd:session): session closed for user core
Sep 8 23:48:07.117866 systemd[1]: sshd@21-10.0.0.54:22-10.0.0.1:41638.service: Deactivated successfully.
Sep 8 23:48:07.120074 systemd[1]: session-22.scope: Deactivated successfully.
Sep 8 23:48:07.121017 systemd-logind[1456]: Session 22 logged out. Waiting for processes to exit.
Sep 8 23:48:07.121960 systemd-logind[1456]: Removed session 22.
Sep 8 23:48:12.124979 systemd[1]: Started sshd@22-10.0.0.54:22-10.0.0.1:51968.service - OpenSSH per-connection server daemon (10.0.0.1:51968).
Sep 8 23:48:12.165441 sshd[4236]: Accepted publickey for core from 10.0.0.1 port 51968 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:48:12.166624 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:48:12.170200 systemd-logind[1456]: New session 23 of user core.
Sep 8 23:48:12.176739 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 8 23:48:12.281528 sshd[4238]: Connection closed by 10.0.0.1 port 51968
Sep 8 23:48:12.282230 sshd-session[4236]: pam_unix(sshd:session): session closed for user core
Sep 8 23:48:12.285872 systemd[1]: sshd@22-10.0.0.54:22-10.0.0.1:51968.service: Deactivated successfully.
Sep 8 23:48:12.287735 systemd[1]: session-23.scope: Deactivated successfully.
Sep 8 23:48:12.290070 systemd-logind[1456]: Session 23 logged out. Waiting for processes to exit.
Sep 8 23:48:12.290847 systemd-logind[1456]: Removed session 23.
Sep 8 23:48:17.312851 systemd[1]: Started sshd@23-10.0.0.54:22-10.0.0.1:51970.service - OpenSSH per-connection server daemon (10.0.0.1:51970).
Sep 8 23:48:17.351060 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 51970 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:48:17.352426 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:48:17.356388 systemd-logind[1456]: New session 24 of user core.
Sep 8 23:48:17.367736 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 8 23:48:17.495951 sshd[4253]: Connection closed by 10.0.0.1 port 51970
Sep 8 23:48:17.496570 sshd-session[4251]: pam_unix(sshd:session): session closed for user core
Sep 8 23:48:17.507403 systemd[1]: sshd@23-10.0.0.54:22-10.0.0.1:51970.service: Deactivated successfully.
Sep 8 23:48:17.508847 systemd[1]: session-24.scope: Deactivated successfully.
Sep 8 23:48:17.509992 systemd-logind[1456]: Session 24 logged out. Waiting for processes to exit.
Sep 8 23:48:17.521094 systemd[1]: Started sshd@24-10.0.0.54:22-10.0.0.1:51972.service - OpenSSH per-connection server daemon (10.0.0.1:51972).
Sep 8 23:48:17.524792 systemd-logind[1456]: Removed session 24.
Sep 8 23:48:17.561168 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 51972 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:48:17.562647 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:48:17.566229 systemd-logind[1456]: New session 25 of user core.
Sep 8 23:48:17.576720 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 8 23:48:19.239968 containerd[1478]: time="2025-09-08T23:48:19.239893733Z" level=info msg="StopContainer for \"4b474b56f5ab3c4aabdf4d8db86f5badb0e2e6c3c6bfd31b3b5c9640f8d9b2b9\" with timeout 30 (s)"
Sep 8 23:48:19.240992 containerd[1478]: time="2025-09-08T23:48:19.240661605Z" level=info msg="Stop container \"4b474b56f5ab3c4aabdf4d8db86f5badb0e2e6c3c6bfd31b3b5c9640f8d9b2b9\" with signal terminated"
Sep 8 23:48:19.249327 systemd[1]: cri-containerd-4b474b56f5ab3c4aabdf4d8db86f5badb0e2e6c3c6bfd31b3b5c9640f8d9b2b9.scope: Deactivated successfully.
Sep 8 23:48:19.275249 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b474b56f5ab3c4aabdf4d8db86f5badb0e2e6c3c6bfd31b3b5c9640f8d9b2b9-rootfs.mount: Deactivated successfully.
Sep 8 23:48:19.288519 containerd[1478]: time="2025-09-08T23:48:19.288450213Z" level=info msg="shim disconnected" id=4b474b56f5ab3c4aabdf4d8db86f5badb0e2e6c3c6bfd31b3b5c9640f8d9b2b9 namespace=k8s.io
Sep 8 23:48:19.288519 containerd[1478]: time="2025-09-08T23:48:19.288515653Z" level=warning msg="cleaning up after shim disconnected" id=4b474b56f5ab3c4aabdf4d8db86f5badb0e2e6c3c6bfd31b3b5c9640f8d9b2b9 namespace=k8s.io
Sep 8 23:48:19.288519 containerd[1478]: time="2025-09-08T23:48:19.288524253Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 8 23:48:19.306023 containerd[1478]: time="2025-09-08T23:48:19.305990666Z" level=info msg="StopContainer for \"0e9385b6cd42d2879c66bb5634f0f2a49f49d28604ec477dab7625c21a1a6c1c\" with timeout 2 (s)"
Sep 8 23:48:19.306884 containerd[1478]: time="2025-09-08T23:48:19.306605219Z" level=info msg="Stop container \"0e9385b6cd42d2879c66bb5634f0f2a49f49d28604ec477dab7625c21a1a6c1c\" with signal terminated"
Sep 8 23:48:19.313480 systemd-networkd[1381]: lxc_health: Link DOWN
Sep 8 23:48:19.313485 systemd-networkd[1381]: lxc_health: Lost carrier
Sep 8 23:48:19.328539 containerd[1478]: time="2025-09-08T23:48:19.328482785Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 8 23:48:19.330148 systemd[1]: cri-containerd-0e9385b6cd42d2879c66bb5634f0f2a49f49d28604ec477dab7625c21a1a6c1c.scope: Deactivated successfully.
Sep 8 23:48:19.330511 systemd[1]: cri-containerd-0e9385b6cd42d2879c66bb5634f0f2a49f49d28604ec477dab7625c21a1a6c1c.scope: Consumed 6.392s CPU time, 128.7M memory peak, 136K read from disk, 12.9M written to disk.
Sep 8 23:48:19.346096 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e9385b6cd42d2879c66bb5634f0f2a49f49d28604ec477dab7625c21a1a6c1c-rootfs.mount: Deactivated successfully.
Sep 8 23:48:19.353879 containerd[1478]: time="2025-09-08T23:48:19.353675075Z" level=info msg="shim disconnected" id=0e9385b6cd42d2879c66bb5634f0f2a49f49d28604ec477dab7625c21a1a6c1c namespace=k8s.io
Sep 8 23:48:19.353879 containerd[1478]: time="2025-09-08T23:48:19.353814193Z" level=warning msg="cleaning up after shim disconnected" id=0e9385b6cd42d2879c66bb5634f0f2a49f49d28604ec477dab7625c21a1a6c1c namespace=k8s.io
Sep 8 23:48:19.353879 containerd[1478]: time="2025-09-08T23:48:19.353828073Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 8 23:48:19.361966 containerd[1478]: time="2025-09-08T23:48:19.361873027Z" level=info msg="StopContainer for \"4b474b56f5ab3c4aabdf4d8db86f5badb0e2e6c3c6bfd31b3b5c9640f8d9b2b9\" returns successfully"
Sep 8 23:48:19.362486 containerd[1478]: time="2025-09-08T23:48:19.362453341Z" level=info msg="StopPodSandbox for \"f195669664eb51ce39795aff600eba1847180874661eb509a80df4698f60c03e\""
Sep 8 23:48:19.362532 containerd[1478]: time="2025-09-08T23:48:19.362495540Z" level=info msg="Container to stop \"4b474b56f5ab3c4aabdf4d8db86f5badb0e2e6c3c6bfd31b3b5c9640f8d9b2b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 8 23:48:19.364552 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f195669664eb51ce39795aff600eba1847180874661eb509a80df4698f60c03e-shm.mount: Deactivated successfully.
Sep 8 23:48:19.372366 containerd[1478]: time="2025-09-08T23:48:19.372328875Z" level=info msg="StopContainer for \"0e9385b6cd42d2879c66bb5634f0f2a49f49d28604ec477dab7625c21a1a6c1c\" returns successfully"
Sep 8 23:48:19.372769 containerd[1478]: time="2025-09-08T23:48:19.372746671Z" level=info msg="StopPodSandbox for \"b094be63b4deebbb74af97968843189dc44e948bea0894673e1aec9ad172d59c\""
Sep 8 23:48:19.372822 containerd[1478]: time="2025-09-08T23:48:19.372779550Z" level=info msg="Container to stop \"41e3c533765d1305170e3c2ead896a0dadcaf3314ce238abfdad1fcb63d624de\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 8 23:48:19.372822 containerd[1478]: time="2025-09-08T23:48:19.372790430Z" level=info msg="Container to stop \"0e9385b6cd42d2879c66bb5634f0f2a49f49d28604ec477dab7625c21a1a6c1c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 8 23:48:19.372822 containerd[1478]: time="2025-09-08T23:48:19.372799190Z" level=info msg="Container to stop \"ad6eb42b19f68d6c2cdb6e94b07e73fbd8afec3fbc44cecb3cbbf79764224865\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 8 23:48:19.372822 containerd[1478]: time="2025-09-08T23:48:19.372806990Z" level=info msg="Container to stop \"e5a2c07deb3eedc70d062bf9d3baf99e52150ed9ff1b734404f9eacd5c80a796\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 8 23:48:19.372822 containerd[1478]: time="2025-09-08T23:48:19.372814230Z" level=info msg="Container to stop \"fb0e37305782736152e3f47b37aa4e891350a072e5d6dcace6e71982c4710627\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 8 23:48:19.374357 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b094be63b4deebbb74af97968843189dc44e948bea0894673e1aec9ad172d59c-shm.mount: Deactivated successfully.
Sep 8 23:48:19.377050 systemd[1]: cri-containerd-f195669664eb51ce39795aff600eba1847180874661eb509a80df4698f60c03e.scope: Deactivated successfully.
Sep 8 23:48:19.378837 systemd[1]: cri-containerd-b094be63b4deebbb74af97968843189dc44e948bea0894673e1aec9ad172d59c.scope: Deactivated successfully.
Sep 8 23:48:19.405185 containerd[1478]: time="2025-09-08T23:48:19.405121644Z" level=info msg="shim disconnected" id=f195669664eb51ce39795aff600eba1847180874661eb509a80df4698f60c03e namespace=k8s.io
Sep 8 23:48:19.405185 containerd[1478]: time="2025-09-08T23:48:19.405182723Z" level=warning msg="cleaning up after shim disconnected" id=f195669664eb51ce39795aff600eba1847180874661eb509a80df4698f60c03e namespace=k8s.io
Sep 8 23:48:19.405185 containerd[1478]: time="2025-09-08T23:48:19.405191683Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 8 23:48:19.420755 containerd[1478]: time="2025-09-08T23:48:19.420708437Z" level=info msg="TearDown network for sandbox \"f195669664eb51ce39795aff600eba1847180874661eb509a80df4698f60c03e\" successfully"
Sep 8 23:48:19.420755 containerd[1478]: time="2025-09-08T23:48:19.420746717Z" level=info msg="StopPodSandbox for \"f195669664eb51ce39795aff600eba1847180874661eb509a80df4698f60c03e\" returns successfully"
Sep 8 23:48:19.428734 containerd[1478]: time="2025-09-08T23:48:19.428636712Z" level=info msg="shim disconnected" id=b094be63b4deebbb74af97968843189dc44e948bea0894673e1aec9ad172d59c namespace=k8s.io
Sep 8 23:48:19.429006 containerd[1478]: time="2025-09-08T23:48:19.428980868Z" level=warning msg="cleaning up after shim disconnected" id=b094be63b4deebbb74af97968843189dc44e948bea0894673e1aec9ad172d59c namespace=k8s.io
Sep 8 23:48:19.429006 containerd[1478]: time="2025-09-08T23:48:19.429004108Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 8 23:48:19.441507 containerd[1478]: time="2025-09-08T23:48:19.441463215Z" level=warning msg="cleanup warnings time=\"2025-09-08T23:48:19Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 8 23:48:19.442516 containerd[1478]: time="2025-09-08T23:48:19.442415245Z" level=info msg="TearDown network for sandbox \"b094be63b4deebbb74af97968843189dc44e948bea0894673e1aec9ad172d59c\" successfully"
Sep 8 23:48:19.442516 containerd[1478]: time="2025-09-08T23:48:19.442439804Z" level=info msg="StopPodSandbox for \"b094be63b4deebbb74af97968843189dc44e948bea0894673e1aec9ad172d59c\" returns successfully"
Sep 8 23:48:19.535979 kubelet[2562]: I0908 23:48:19.535123 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-cni-path\") pod \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") "
Sep 8 23:48:19.535979 kubelet[2562]: I0908 23:48:19.535176 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-hostproc\") pod \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") "
Sep 8 23:48:19.535979 kubelet[2562]: I0908 23:48:19.535203 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-hubble-tls\") pod \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") "
Sep 8 23:48:19.535979 kubelet[2562]: I0908 23:48:19.535221 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c8f6a16f-2528-484c-bd4f-70ba36c490fb-cilium-config-path\") pod \"c8f6a16f-2528-484c-bd4f-70ba36c490fb\" (UID: \"c8f6a16f-2528-484c-bd4f-70ba36c490fb\") "
Sep 8 23:48:19.535979 kubelet[2562]: I0908 23:48:19.535238 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-cilium-run\") pod \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") "
Sep 8 23:48:19.535979 kubelet[2562]: I0908 23:48:19.535254 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-host-proc-sys-kernel\") pod \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") "
Sep 8 23:48:19.536821 kubelet[2562]: I0908 23:48:19.535270 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-cilium-config-path\") pod \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") "
Sep 8 23:48:19.536821 kubelet[2562]: I0908 23:48:19.535282 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-cilium-cgroup\") pod \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") "
Sep 8 23:48:19.536821 kubelet[2562]: I0908 23:48:19.535295 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-lib-modules\") pod \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") "
Sep 8 23:48:19.536821 kubelet[2562]: I0908 23:48:19.535309 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-etc-cni-netd\") pod \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") "
Sep 8 23:48:19.536821 kubelet[2562]: I0908 23:48:19.535355 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtd68\" (UniqueName: \"kubernetes.io/projected/c8f6a16f-2528-484c-bd4f-70ba36c490fb-kube-api-access-mtd68\") pod \"c8f6a16f-2528-484c-bd4f-70ba36c490fb\" (UID: \"c8f6a16f-2528-484c-bd4f-70ba36c490fb\") "
Sep 8 23:48:19.536821 kubelet[2562]: I0908 23:48:19.535376 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-clustermesh-secrets\") pod \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") "
Sep 8 23:48:19.536950 kubelet[2562]: I0908 23:48:19.535391 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-bpf-maps\") pod \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") "
Sep 8 23:48:19.536950 kubelet[2562]: I0908 23:48:19.535407 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vgsr\" (UniqueName: \"kubernetes.io/projected/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-kube-api-access-5vgsr\") pod \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") "
Sep 8 23:48:19.536950 kubelet[2562]: I0908 23:48:19.535426 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-xtables-lock\") pod \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") "
Sep 8 23:48:19.536950 kubelet[2562]: I0908 23:48:19.535440 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-host-proc-sys-net\") pod \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\" (UID: \"60d368f5-dbf6-4095-b4b7-4bd41b0cf789\") "
Sep 8 23:48:19.537424 kubelet[2562]: I0908 23:48:19.537380 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "60d368f5-dbf6-4095-b4b7-4bd41b0cf789" (UID: "60d368f5-dbf6-4095-b4b7-4bd41b0cf789"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:48:19.537459 kubelet[2562]: I0908 23:48:19.537431 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "60d368f5-dbf6-4095-b4b7-4bd41b0cf789" (UID: "60d368f5-dbf6-4095-b4b7-4bd41b0cf789"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:48:19.537459 kubelet[2562]: I0908 23:48:19.537445 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "60d368f5-dbf6-4095-b4b7-4bd41b0cf789" (UID: "60d368f5-dbf6-4095-b4b7-4bd41b0cf789"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:48:19.537611 kubelet[2562]: I0908 23:48:19.537460 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "60d368f5-dbf6-4095-b4b7-4bd41b0cf789" (UID: "60d368f5-dbf6-4095-b4b7-4bd41b0cf789"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:48:19.538364 kubelet[2562]: I0908 23:48:19.538016 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "60d368f5-dbf6-4095-b4b7-4bd41b0cf789" (UID: "60d368f5-dbf6-4095-b4b7-4bd41b0cf789"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:48:19.538364 kubelet[2562]: I0908 23:48:19.538056 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-hostproc" (OuterVolumeSpecName: "hostproc") pod "60d368f5-dbf6-4095-b4b7-4bd41b0cf789" (UID: "60d368f5-dbf6-4095-b4b7-4bd41b0cf789"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:48:19.538364 kubelet[2562]: I0908 23:48:19.538063 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-cni-path" (OuterVolumeSpecName: "cni-path") pod "60d368f5-dbf6-4095-b4b7-4bd41b0cf789" (UID: "60d368f5-dbf6-4095-b4b7-4bd41b0cf789"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:48:19.538364 kubelet[2562]: I0908 23:48:19.538102 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "60d368f5-dbf6-4095-b4b7-4bd41b0cf789" (UID: "60d368f5-dbf6-4095-b4b7-4bd41b0cf789"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:48:19.540017 kubelet[2562]: I0908 23:48:19.539104 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "60d368f5-dbf6-4095-b4b7-4bd41b0cf789" (UID: "60d368f5-dbf6-4095-b4b7-4bd41b0cf789"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 8 23:48:19.540017 kubelet[2562]: I0908 23:48:19.539151 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "60d368f5-dbf6-4095-b4b7-4bd41b0cf789" (UID: "60d368f5-dbf6-4095-b4b7-4bd41b0cf789"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:48:19.540145 kubelet[2562]: I0908 23:48:19.540126 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "60d368f5-dbf6-4095-b4b7-4bd41b0cf789" (UID: "60d368f5-dbf6-4095-b4b7-4bd41b0cf789"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:48:19.540806 kubelet[2562]: I0908 23:48:19.540759 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "60d368f5-dbf6-4095-b4b7-4bd41b0cf789" (UID: "60d368f5-dbf6-4095-b4b7-4bd41b0cf789"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 8 23:48:19.540888 kubelet[2562]: I0908 23:48:19.540870 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8f6a16f-2528-484c-bd4f-70ba36c490fb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c8f6a16f-2528-484c-bd4f-70ba36c490fb" (UID: "c8f6a16f-2528-484c-bd4f-70ba36c490fb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 8 23:48:19.541541 kubelet[2562]: I0908 23:48:19.541517 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8f6a16f-2528-484c-bd4f-70ba36c490fb-kube-api-access-mtd68" (OuterVolumeSpecName: "kube-api-access-mtd68") pod "c8f6a16f-2528-484c-bd4f-70ba36c490fb" (UID: "c8f6a16f-2528-484c-bd4f-70ba36c490fb"). InnerVolumeSpecName "kube-api-access-mtd68". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 8 23:48:19.541676 kubelet[2562]: I0908 23:48:19.541534 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "60d368f5-dbf6-4095-b4b7-4bd41b0cf789" (UID: "60d368f5-dbf6-4095-b4b7-4bd41b0cf789"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 8 23:48:19.543093 kubelet[2562]: I0908 23:48:19.543040 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-kube-api-access-5vgsr" (OuterVolumeSpecName: "kube-api-access-5vgsr") pod "60d368f5-dbf6-4095-b4b7-4bd41b0cf789" (UID: "60d368f5-dbf6-4095-b4b7-4bd41b0cf789"). InnerVolumeSpecName "kube-api-access-5vgsr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 8 23:48:19.636328 kubelet[2562]: I0908 23:48:19.636272 2562 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 8 23:48:19.636328 kubelet[2562]: I0908 23:48:19.636305 2562 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 8 23:48:19.636328 kubelet[2562]: I0908 23:48:19.636317 2562 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 8 23:48:19.636328 kubelet[2562]: I0908 23:48:19.636325 2562 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 8 23:48:19.636328 kubelet[2562]: I0908 23:48:19.636333 2562 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 8 23:48:19.636328 kubelet[2562]: I0908 23:48:19.636341 2562 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Sep 8 23:48:19.636328 kubelet[2562]: I0908 23:48:19.636348 2562 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mtd68\" (UniqueName: \"kubernetes.io/projected/c8f6a16f-2528-484c-bd4f-70ba36c490fb-kube-api-access-mtd68\") on node \"localhost\" DevicePath \"\""
Sep 8 23:48:19.636653 kubelet[2562]:
I0908 23:48:19.636356 2562 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 8 23:48:19.636653 kubelet[2562]: I0908 23:48:19.636365 2562 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 8 23:48:19.636653 kubelet[2562]: I0908 23:48:19.636372 2562 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5vgsr\" (UniqueName: \"kubernetes.io/projected/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-kube-api-access-5vgsr\") on node \"localhost\" DevicePath \"\"" Sep 8 23:48:19.636653 kubelet[2562]: I0908 23:48:19.636381 2562 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 8 23:48:19.636653 kubelet[2562]: I0908 23:48:19.636389 2562 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 8 23:48:19.636653 kubelet[2562]: I0908 23:48:19.636397 2562 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 8 23:48:19.636653 kubelet[2562]: I0908 23:48:19.636404 2562 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 8 23:48:19.636653 kubelet[2562]: I0908 23:48:19.636411 2562 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/60d368f5-dbf6-4095-b4b7-4bd41b0cf789-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 8 23:48:19.636811 kubelet[2562]: I0908 23:48:19.636418 2562 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c8f6a16f-2528-484c-bd4f-70ba36c490fb-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 8 23:48:20.133091 kubelet[2562]: I0908 23:48:20.133042 2562 scope.go:117] "RemoveContainer" containerID="4b474b56f5ab3c4aabdf4d8db86f5badb0e2e6c3c6bfd31b3b5c9640f8d9b2b9" Sep 8 23:48:20.136384 containerd[1478]: time="2025-09-08T23:48:20.136074205Z" level=info msg="RemoveContainer for \"4b474b56f5ab3c4aabdf4d8db86f5badb0e2e6c3c6bfd31b3b5c9640f8d9b2b9\"" Sep 8 23:48:20.141497 containerd[1478]: time="2025-09-08T23:48:20.139931565Z" level=info msg="RemoveContainer for \"4b474b56f5ab3c4aabdf4d8db86f5badb0e2e6c3c6bfd31b3b5c9640f8d9b2b9\" returns successfully" Sep 8 23:48:20.141129 systemd[1]: Removed slice kubepods-besteffort-podc8f6a16f_2528_484c_bd4f_70ba36c490fb.slice - libcontainer container kubepods-besteffort-podc8f6a16f_2528_484c_bd4f_70ba36c490fb.slice. 
Sep 8 23:48:20.143013 kubelet[2562]: I0908 23:48:20.142976 2562 scope.go:117] "RemoveContainer" containerID="4b474b56f5ab3c4aabdf4d8db86f5badb0e2e6c3c6bfd31b3b5c9640f8d9b2b9" Sep 8 23:48:20.143568 containerd[1478]: time="2025-09-08T23:48:20.143368048Z" level=error msg="ContainerStatus for \"4b474b56f5ab3c4aabdf4d8db86f5badb0e2e6c3c6bfd31b3b5c9640f8d9b2b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4b474b56f5ab3c4aabdf4d8db86f5badb0e2e6c3c6bfd31b3b5c9640f8d9b2b9\": not found" Sep 8 23:48:20.143659 kubelet[2562]: E0908 23:48:20.143634 2562 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4b474b56f5ab3c4aabdf4d8db86f5badb0e2e6c3c6bfd31b3b5c9640f8d9b2b9\": not found" containerID="4b474b56f5ab3c4aabdf4d8db86f5badb0e2e6c3c6bfd31b3b5c9640f8d9b2b9" Sep 8 23:48:20.143956 kubelet[2562]: I0908 23:48:20.143793 2562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4b474b56f5ab3c4aabdf4d8db86f5badb0e2e6c3c6bfd31b3b5c9640f8d9b2b9"} err="failed to get container status \"4b474b56f5ab3c4aabdf4d8db86f5badb0e2e6c3c6bfd31b3b5c9640f8d9b2b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"4b474b56f5ab3c4aabdf4d8db86f5badb0e2e6c3c6bfd31b3b5c9640f8d9b2b9\": not found" Sep 8 23:48:20.143956 kubelet[2562]: I0908 23:48:20.143834 2562 scope.go:117] "RemoveContainer" containerID="0e9385b6cd42d2879c66bb5634f0f2a49f49d28604ec477dab7625c21a1a6c1c" Sep 8 23:48:20.145060 containerd[1478]: time="2025-09-08T23:48:20.144983472Z" level=info msg="RemoveContainer for \"0e9385b6cd42d2879c66bb5634f0f2a49f49d28604ec477dab7625c21a1a6c1c\"" Sep 8 23:48:20.147637 systemd[1]: Removed slice kubepods-burstable-pod60d368f5_dbf6_4095_b4b7_4bd41b0cf789.slice - libcontainer container kubepods-burstable-pod60d368f5_dbf6_4095_b4b7_4bd41b0cf789.slice. 
Sep 8 23:48:20.147913 systemd[1]: kubepods-burstable-pod60d368f5_dbf6_4095_b4b7_4bd41b0cf789.slice: Consumed 6.470s CPU time, 129M memory peak, 148K read from disk, 12.9M written to disk. Sep 8 23:48:20.150032 containerd[1478]: time="2025-09-08T23:48:20.149979899Z" level=info msg="RemoveContainer for \"0e9385b6cd42d2879c66bb5634f0f2a49f49d28604ec477dab7625c21a1a6c1c\" returns successfully" Sep 8 23:48:20.150237 kubelet[2562]: I0908 23:48:20.150215 2562 scope.go:117] "RemoveContainer" containerID="41e3c533765d1305170e3c2ead896a0dadcaf3314ce238abfdad1fcb63d624de" Sep 8 23:48:20.151764 containerd[1478]: time="2025-09-08T23:48:20.151728241Z" level=info msg="RemoveContainer for \"41e3c533765d1305170e3c2ead896a0dadcaf3314ce238abfdad1fcb63d624de\"" Sep 8 23:48:20.154678 containerd[1478]: time="2025-09-08T23:48:20.154582411Z" level=info msg="RemoveContainer for \"41e3c533765d1305170e3c2ead896a0dadcaf3314ce238abfdad1fcb63d624de\" returns successfully" Sep 8 23:48:20.155922 kubelet[2562]: I0908 23:48:20.155100 2562 scope.go:117] "RemoveContainer" containerID="fb0e37305782736152e3f47b37aa4e891350a072e5d6dcace6e71982c4710627" Sep 8 23:48:20.157305 containerd[1478]: time="2025-09-08T23:48:20.157258383Z" level=info msg="RemoveContainer for \"fb0e37305782736152e3f47b37aa4e891350a072e5d6dcace6e71982c4710627\"" Sep 8 23:48:20.159887 containerd[1478]: time="2025-09-08T23:48:20.159862675Z" level=info msg="RemoveContainer for \"fb0e37305782736152e3f47b37aa4e891350a072e5d6dcace6e71982c4710627\" returns successfully" Sep 8 23:48:20.160124 kubelet[2562]: I0908 23:48:20.160098 2562 scope.go:117] "RemoveContainer" containerID="e5a2c07deb3eedc70d062bf9d3baf99e52150ed9ff1b734404f9eacd5c80a796" Sep 8 23:48:20.161086 containerd[1478]: time="2025-09-08T23:48:20.161064903Z" level=info msg="RemoveContainer for \"e5a2c07deb3eedc70d062bf9d3baf99e52150ed9ff1b734404f9eacd5c80a796\"" Sep 8 23:48:20.163498 containerd[1478]: time="2025-09-08T23:48:20.163420998Z" level=info msg="RemoveContainer for 
\"e5a2c07deb3eedc70d062bf9d3baf99e52150ed9ff1b734404f9eacd5c80a796\" returns successfully" Sep 8 23:48:20.163764 kubelet[2562]: I0908 23:48:20.163722 2562 scope.go:117] "RemoveContainer" containerID="ad6eb42b19f68d6c2cdb6e94b07e73fbd8afec3fbc44cecb3cbbf79764224865" Sep 8 23:48:20.164719 containerd[1478]: time="2025-09-08T23:48:20.164697065Z" level=info msg="RemoveContainer for \"ad6eb42b19f68d6c2cdb6e94b07e73fbd8afec3fbc44cecb3cbbf79764224865\"" Sep 8 23:48:20.167221 containerd[1478]: time="2025-09-08T23:48:20.167183559Z" level=info msg="RemoveContainer for \"ad6eb42b19f68d6c2cdb6e94b07e73fbd8afec3fbc44cecb3cbbf79764224865\" returns successfully" Sep 8 23:48:20.167760 kubelet[2562]: I0908 23:48:20.167708 2562 scope.go:117] "RemoveContainer" containerID="0e9385b6cd42d2879c66bb5634f0f2a49f49d28604ec477dab7625c21a1a6c1c" Sep 8 23:48:20.167978 containerd[1478]: time="2025-09-08T23:48:20.167948591Z" level=error msg="ContainerStatus for \"0e9385b6cd42d2879c66bb5634f0f2a49f49d28604ec477dab7625c21a1a6c1c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0e9385b6cd42d2879c66bb5634f0f2a49f49d28604ec477dab7625c21a1a6c1c\": not found" Sep 8 23:48:20.168080 kubelet[2562]: E0908 23:48:20.168061 2562 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0e9385b6cd42d2879c66bb5634f0f2a49f49d28604ec477dab7625c21a1a6c1c\": not found" containerID="0e9385b6cd42d2879c66bb5634f0f2a49f49d28604ec477dab7625c21a1a6c1c" Sep 8 23:48:20.168175 kubelet[2562]: I0908 23:48:20.168088 2562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0e9385b6cd42d2879c66bb5634f0f2a49f49d28604ec477dab7625c21a1a6c1c"} err="failed to get container status \"0e9385b6cd42d2879c66bb5634f0f2a49f49d28604ec477dab7625c21a1a6c1c\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"0e9385b6cd42d2879c66bb5634f0f2a49f49d28604ec477dab7625c21a1a6c1c\": not found" Sep 8 23:48:20.168175 kubelet[2562]: I0908 23:48:20.168107 2562 scope.go:117] "RemoveContainer" containerID="41e3c533765d1305170e3c2ead896a0dadcaf3314ce238abfdad1fcb63d624de" Sep 8 23:48:20.168412 containerd[1478]: time="2025-09-08T23:48:20.168345586Z" level=error msg="ContainerStatus for \"41e3c533765d1305170e3c2ead896a0dadcaf3314ce238abfdad1fcb63d624de\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"41e3c533765d1305170e3c2ead896a0dadcaf3314ce238abfdad1fcb63d624de\": not found" Sep 8 23:48:20.168484 kubelet[2562]: E0908 23:48:20.168460 2562 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"41e3c533765d1305170e3c2ead896a0dadcaf3314ce238abfdad1fcb63d624de\": not found" containerID="41e3c533765d1305170e3c2ead896a0dadcaf3314ce238abfdad1fcb63d624de" Sep 8 23:48:20.168519 kubelet[2562]: I0908 23:48:20.168482 2562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"41e3c533765d1305170e3c2ead896a0dadcaf3314ce238abfdad1fcb63d624de"} err="failed to get container status \"41e3c533765d1305170e3c2ead896a0dadcaf3314ce238abfdad1fcb63d624de\": rpc error: code = NotFound desc = an error occurred when try to find container \"41e3c533765d1305170e3c2ead896a0dadcaf3314ce238abfdad1fcb63d624de\": not found" Sep 8 23:48:20.168519 kubelet[2562]: I0908 23:48:20.168494 2562 scope.go:117] "RemoveContainer" containerID="fb0e37305782736152e3f47b37aa4e891350a072e5d6dcace6e71982c4710627" Sep 8 23:48:20.168725 containerd[1478]: time="2025-09-08T23:48:20.168694183Z" level=error msg="ContainerStatus for \"fb0e37305782736152e3f47b37aa4e891350a072e5d6dcace6e71982c4710627\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"fb0e37305782736152e3f47b37aa4e891350a072e5d6dcace6e71982c4710627\": not found" Sep 8 23:48:20.168831 kubelet[2562]: E0908 23:48:20.168813 2562 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fb0e37305782736152e3f47b37aa4e891350a072e5d6dcace6e71982c4710627\": not found" containerID="fb0e37305782736152e3f47b37aa4e891350a072e5d6dcace6e71982c4710627" Sep 8 23:48:20.168868 kubelet[2562]: I0908 23:48:20.168839 2562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fb0e37305782736152e3f47b37aa4e891350a072e5d6dcace6e71982c4710627"} err="failed to get container status \"fb0e37305782736152e3f47b37aa4e891350a072e5d6dcace6e71982c4710627\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb0e37305782736152e3f47b37aa4e891350a072e5d6dcace6e71982c4710627\": not found" Sep 8 23:48:20.168868 kubelet[2562]: I0908 23:48:20.168856 2562 scope.go:117] "RemoveContainer" containerID="e5a2c07deb3eedc70d062bf9d3baf99e52150ed9ff1b734404f9eacd5c80a796" Sep 8 23:48:20.169054 containerd[1478]: time="2025-09-08T23:48:20.169028139Z" level=error msg="ContainerStatus for \"e5a2c07deb3eedc70d062bf9d3baf99e52150ed9ff1b734404f9eacd5c80a796\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e5a2c07deb3eedc70d062bf9d3baf99e52150ed9ff1b734404f9eacd5c80a796\": not found" Sep 8 23:48:20.169250 kubelet[2562]: E0908 23:48:20.169230 2562 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e5a2c07deb3eedc70d062bf9d3baf99e52150ed9ff1b734404f9eacd5c80a796\": not found" containerID="e5a2c07deb3eedc70d062bf9d3baf99e52150ed9ff1b734404f9eacd5c80a796" Sep 8 23:48:20.169290 kubelet[2562]: I0908 23:48:20.169255 2562 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"e5a2c07deb3eedc70d062bf9d3baf99e52150ed9ff1b734404f9eacd5c80a796"} err="failed to get container status \"e5a2c07deb3eedc70d062bf9d3baf99e52150ed9ff1b734404f9eacd5c80a796\": rpc error: code = NotFound desc = an error occurred when try to find container \"e5a2c07deb3eedc70d062bf9d3baf99e52150ed9ff1b734404f9eacd5c80a796\": not found" Sep 8 23:48:20.169290 kubelet[2562]: I0908 23:48:20.169286 2562 scope.go:117] "RemoveContainer" containerID="ad6eb42b19f68d6c2cdb6e94b07e73fbd8afec3fbc44cecb3cbbf79764224865" Sep 8 23:48:20.169458 containerd[1478]: time="2025-09-08T23:48:20.169432815Z" level=error msg="ContainerStatus for \"ad6eb42b19f68d6c2cdb6e94b07e73fbd8afec3fbc44cecb3cbbf79764224865\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad6eb42b19f68d6c2cdb6e94b07e73fbd8afec3fbc44cecb3cbbf79764224865\": not found" Sep 8 23:48:20.169558 kubelet[2562]: E0908 23:48:20.169536 2562 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad6eb42b19f68d6c2cdb6e94b07e73fbd8afec3fbc44cecb3cbbf79764224865\": not found" containerID="ad6eb42b19f68d6c2cdb6e94b07e73fbd8afec3fbc44cecb3cbbf79764224865" Sep 8 23:48:20.169614 kubelet[2562]: I0908 23:48:20.169561 2562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ad6eb42b19f68d6c2cdb6e94b07e73fbd8afec3fbc44cecb3cbbf79764224865"} err="failed to get container status \"ad6eb42b19f68d6c2cdb6e94b07e73fbd8afec3fbc44cecb3cbbf79764224865\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad6eb42b19f68d6c2cdb6e94b07e73fbd8afec3fbc44cecb3cbbf79764224865\": not found" Sep 8 23:48:20.275009 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f195669664eb51ce39795aff600eba1847180874661eb509a80df4698f60c03e-rootfs.mount: Deactivated successfully. 
Sep 8 23:48:20.275105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b094be63b4deebbb74af97968843189dc44e948bea0894673e1aec9ad172d59c-rootfs.mount: Deactivated successfully. Sep 8 23:48:20.275170 systemd[1]: var-lib-kubelet-pods-c8f6a16f\x2d2528\x2d484c\x2dbd4f\x2d70ba36c490fb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmtd68.mount: Deactivated successfully. Sep 8 23:48:20.275227 systemd[1]: var-lib-kubelet-pods-60d368f5\x2ddbf6\x2d4095\x2db4b7\x2d4bd41b0cf789-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5vgsr.mount: Deactivated successfully. Sep 8 23:48:20.275282 systemd[1]: var-lib-kubelet-pods-60d368f5\x2ddbf6\x2d4095\x2db4b7\x2d4bd41b0cf789-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 8 23:48:20.275332 systemd[1]: var-lib-kubelet-pods-60d368f5\x2ddbf6\x2d4095\x2db4b7\x2d4bd41b0cf789-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 8 23:48:20.904727 kubelet[2562]: I0908 23:48:20.904676 2562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60d368f5-dbf6-4095-b4b7-4bd41b0cf789" path="/var/lib/kubelet/pods/60d368f5-dbf6-4095-b4b7-4bd41b0cf789/volumes" Sep 8 23:48:20.905206 kubelet[2562]: I0908 23:48:20.905171 2562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8f6a16f-2528-484c-bd4f-70ba36c490fb" path="/var/lib/kubelet/pods/c8f6a16f-2528-484c-bd4f-70ba36c490fb/volumes" Sep 8 23:48:21.182561 sshd[4268]: Connection closed by 10.0.0.1 port 51972 Sep 8 23:48:21.183120 sshd-session[4265]: pam_unix(sshd:session): session closed for user core Sep 8 23:48:21.197793 systemd[1]: sshd@24-10.0.0.54:22-10.0.0.1:51972.service: Deactivated successfully. Sep 8 23:48:21.200257 systemd[1]: session-25.scope: Deactivated successfully. Sep 8 23:48:21.201506 systemd-logind[1456]: Session 25 logged out. Waiting for processes to exit. 
Sep 8 23:48:21.202786 systemd[1]: Started sshd@25-10.0.0.54:22-10.0.0.1:54550.service - OpenSSH per-connection server daemon (10.0.0.1:54550). Sep 8 23:48:21.203550 systemd-logind[1456]: Removed session 25. Sep 8 23:48:21.244241 sshd[4428]: Accepted publickey for core from 10.0.0.1 port 54550 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:48:21.245512 sshd-session[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:48:21.249653 systemd-logind[1456]: New session 26 of user core. Sep 8 23:48:21.258734 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 8 23:48:21.954199 kubelet[2562]: E0908 23:48:21.954163 2562 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 8 23:48:22.435479 sshd[4431]: Connection closed by 10.0.0.1 port 54550 Sep 8 23:48:22.435847 sshd-session[4428]: pam_unix(sshd:session): session closed for user core Sep 8 23:48:22.448949 systemd[1]: sshd@25-10.0.0.54:22-10.0.0.1:54550.service: Deactivated successfully. Sep 8 23:48:22.451144 systemd[1]: session-26.scope: Deactivated successfully. Sep 8 23:48:22.451397 systemd[1]: session-26.scope: Consumed 1.085s CPU time, 26.5M memory peak. Sep 8 23:48:22.452306 systemd-logind[1456]: Session 26 logged out. Waiting for processes to exit. Sep 8 23:48:22.462503 systemd[1]: Started sshd@26-10.0.0.54:22-10.0.0.1:54562.service - OpenSSH per-connection server daemon (10.0.0.1:54562). Sep 8 23:48:22.466107 systemd-logind[1456]: Removed session 26. Sep 8 23:48:22.477612 systemd[1]: Created slice kubepods-burstable-podf0ea2d69_cdb1_4bb0_8e10_ad65bde7f127.slice - libcontainer container kubepods-burstable-podf0ea2d69_cdb1_4bb0_8e10_ad65bde7f127.slice. 
Sep 8 23:48:22.510028 sshd[4444]: Accepted publickey for core from 10.0.0.1 port 54562 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:48:22.511222 sshd-session[4444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:48:22.515740 systemd-logind[1456]: New session 27 of user core. Sep 8 23:48:22.525765 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 8 23:48:22.552550 kubelet[2562]: I0908 23:48:22.552502 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127-cni-path\") pod \"cilium-srkrq\" (UID: \"f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127\") " pod="kube-system/cilium-srkrq" Sep 8 23:48:22.552550 kubelet[2562]: I0908 23:48:22.552543 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127-lib-modules\") pod \"cilium-srkrq\" (UID: \"f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127\") " pod="kube-system/cilium-srkrq" Sep 8 23:48:22.552686 kubelet[2562]: I0908 23:48:22.552562 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127-cilium-run\") pod \"cilium-srkrq\" (UID: \"f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127\") " pod="kube-system/cilium-srkrq" Sep 8 23:48:22.552686 kubelet[2562]: I0908 23:48:22.552578 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127-bpf-maps\") pod \"cilium-srkrq\" (UID: \"f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127\") " pod="kube-system/cilium-srkrq" Sep 8 23:48:22.552686 kubelet[2562]: I0908 23:48:22.552649 2562 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127-xtables-lock\") pod \"cilium-srkrq\" (UID: \"f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127\") " pod="kube-system/cilium-srkrq" Sep 8 23:48:22.552764 kubelet[2562]: I0908 23:48:22.552689 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127-clustermesh-secrets\") pod \"cilium-srkrq\" (UID: \"f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127\") " pod="kube-system/cilium-srkrq" Sep 8 23:48:22.552764 kubelet[2562]: I0908 23:48:22.552706 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx7jd\" (UniqueName: \"kubernetes.io/projected/f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127-kube-api-access-xx7jd\") pod \"cilium-srkrq\" (UID: \"f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127\") " pod="kube-system/cilium-srkrq" Sep 8 23:48:22.552764 kubelet[2562]: I0908 23:48:22.552724 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127-hostproc\") pod \"cilium-srkrq\" (UID: \"f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127\") " pod="kube-system/cilium-srkrq" Sep 8 23:48:22.552764 kubelet[2562]: I0908 23:48:22.552740 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127-etc-cni-netd\") pod \"cilium-srkrq\" (UID: \"f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127\") " pod="kube-system/cilium-srkrq" Sep 8 23:48:22.552841 kubelet[2562]: I0908 23:48:22.552771 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127-hubble-tls\") pod \"cilium-srkrq\" (UID: \"f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127\") " pod="kube-system/cilium-srkrq" Sep 8 23:48:22.552841 kubelet[2562]: I0908 23:48:22.552808 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127-cilium-config-path\") pod \"cilium-srkrq\" (UID: \"f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127\") " pod="kube-system/cilium-srkrq" Sep 8 23:48:22.552841 kubelet[2562]: I0908 23:48:22.552827 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127-host-proc-sys-net\") pod \"cilium-srkrq\" (UID: \"f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127\") " pod="kube-system/cilium-srkrq" Sep 8 23:48:22.552895 kubelet[2562]: I0908 23:48:22.552852 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127-host-proc-sys-kernel\") pod \"cilium-srkrq\" (UID: \"f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127\") " pod="kube-system/cilium-srkrq" Sep 8 23:48:22.552895 kubelet[2562]: I0908 23:48:22.552876 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127-cilium-cgroup\") pod \"cilium-srkrq\" (UID: \"f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127\") " pod="kube-system/cilium-srkrq" Sep 8 23:48:22.552934 kubelet[2562]: I0908 23:48:22.552897 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127-cilium-ipsec-secrets\") pod 
\"cilium-srkrq\" (UID: \"f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127\") " pod="kube-system/cilium-srkrq" Sep 8 23:48:22.575422 sshd[4447]: Connection closed by 10.0.0.1 port 54562 Sep 8 23:48:22.575737 sshd-session[4444]: pam_unix(sshd:session): session closed for user core Sep 8 23:48:22.585747 systemd[1]: sshd@26-10.0.0.54:22-10.0.0.1:54562.service: Deactivated successfully. Sep 8 23:48:22.588106 systemd[1]: session-27.scope: Deactivated successfully. Sep 8 23:48:22.589499 systemd-logind[1456]: Session 27 logged out. Waiting for processes to exit. Sep 8 23:48:22.590579 systemd[1]: Started sshd@27-10.0.0.54:22-10.0.0.1:54578.service - OpenSSH per-connection server daemon (10.0.0.1:54578). Sep 8 23:48:22.591351 systemd-logind[1456]: Removed session 27. Sep 8 23:48:22.632215 sshd[4453]: Accepted publickey for core from 10.0.0.1 port 54578 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:48:22.633302 sshd-session[4453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:48:22.637096 systemd-logind[1456]: New session 28 of user core. Sep 8 23:48:22.646795 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 8 23:48:22.784244 containerd[1478]: time="2025-09-08T23:48:22.784113745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-srkrq,Uid:f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127,Namespace:kube-system,Attempt:0,}" Sep 8 23:48:22.805433 containerd[1478]: time="2025-09-08T23:48:22.805191173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:48:22.805433 containerd[1478]: time="2025-09-08T23:48:22.805247892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:48:22.805433 containerd[1478]: time="2025-09-08T23:48:22.805259612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:48:22.805433 containerd[1478]: time="2025-09-08T23:48:22.805348291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:48:22.826843 systemd[1]: Started cri-containerd-9adc7047428a86d602b42cf2125867b09fae1635d90ff4931f43054305d63296.scope - libcontainer container 9adc7047428a86d602b42cf2125867b09fae1635d90ff4931f43054305d63296. Sep 8 23:48:22.847131 containerd[1478]: time="2025-09-08T23:48:22.847077230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-srkrq,Uid:f0ea2d69-cdb1-4bb0-8e10-ad65bde7f127,Namespace:kube-system,Attempt:0,} returns sandbox id \"9adc7047428a86d602b42cf2125867b09fae1635d90ff4931f43054305d63296\"" Sep 8 23:48:22.853248 containerd[1478]: time="2025-09-08T23:48:22.853164369Z" level=info msg="CreateContainer within sandbox \"9adc7047428a86d602b42cf2125867b09fae1635d90ff4931f43054305d63296\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 8 23:48:22.861755 containerd[1478]: time="2025-09-08T23:48:22.861698723Z" level=info msg="CreateContainer within sandbox \"9adc7047428a86d602b42cf2125867b09fae1635d90ff4931f43054305d63296\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6eceb56a7172a1b2620f1083eee6096364e0e913d089dbc1f5101dc7c87dcf98\"" Sep 8 23:48:22.862190 containerd[1478]: time="2025-09-08T23:48:22.862167718Z" level=info msg="StartContainer for \"6eceb56a7172a1b2620f1083eee6096364e0e913d089dbc1f5101dc7c87dcf98\"" Sep 8 23:48:22.884757 systemd[1]: Started cri-containerd-6eceb56a7172a1b2620f1083eee6096364e0e913d089dbc1f5101dc7c87dcf98.scope - libcontainer container 6eceb56a7172a1b2620f1083eee6096364e0e913d089dbc1f5101dc7c87dcf98. 
Sep 8 23:48:22.910972 containerd[1478]: time="2025-09-08T23:48:22.910911227Z" level=info msg="StartContainer for \"6eceb56a7172a1b2620f1083eee6096364e0e913d089dbc1f5101dc7c87dcf98\" returns successfully"
Sep 8 23:48:22.919521 systemd[1]: cri-containerd-6eceb56a7172a1b2620f1083eee6096364e0e913d089dbc1f5101dc7c87dcf98.scope: Deactivated successfully.
Sep 8 23:48:22.948927 containerd[1478]: time="2025-09-08T23:48:22.948865964Z" level=info msg="shim disconnected" id=6eceb56a7172a1b2620f1083eee6096364e0e913d089dbc1f5101dc7c87dcf98 namespace=k8s.io
Sep 8 23:48:22.948927 containerd[1478]: time="2025-09-08T23:48:22.948923643Z" level=warning msg="cleaning up after shim disconnected" id=6eceb56a7172a1b2620f1083eee6096364e0e913d089dbc1f5101dc7c87dcf98 namespace=k8s.io
Sep 8 23:48:22.948927 containerd[1478]: time="2025-09-08T23:48:22.948931963Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 8 23:48:23.153560 containerd[1478]: time="2025-09-08T23:48:23.153479450Z" level=info msg="CreateContainer within sandbox \"9adc7047428a86d602b42cf2125867b09fae1635d90ff4931f43054305d63296\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 8 23:48:23.178470 containerd[1478]: time="2025-09-08T23:48:23.178425963Z" level=info msg="CreateContainer within sandbox \"9adc7047428a86d602b42cf2125867b09fae1635d90ff4931f43054305d63296\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9973f14b60b0d4f9ee323181d5c8b53934233e711391eb7bf52c19d9d99128e6\""
Sep 8 23:48:23.180408 containerd[1478]: time="2025-09-08T23:48:23.180370264Z" level=info msg="StartContainer for \"9973f14b60b0d4f9ee323181d5c8b53934233e711391eb7bf52c19d9d99128e6\""
Sep 8 23:48:23.205811 systemd[1]: Started cri-containerd-9973f14b60b0d4f9ee323181d5c8b53934233e711391eb7bf52c19d9d99128e6.scope - libcontainer container 9973f14b60b0d4f9ee323181d5c8b53934233e711391eb7bf52c19d9d99128e6.
Sep 8 23:48:23.229382 containerd[1478]: time="2025-09-08T23:48:23.228953303Z" level=info msg="StartContainer for \"9973f14b60b0d4f9ee323181d5c8b53934233e711391eb7bf52c19d9d99128e6\" returns successfully"
Sep 8 23:48:23.235302 systemd[1]: cri-containerd-9973f14b60b0d4f9ee323181d5c8b53934233e711391eb7bf52c19d9d99128e6.scope: Deactivated successfully.
Sep 8 23:48:23.256319 containerd[1478]: time="2025-09-08T23:48:23.256246153Z" level=info msg="shim disconnected" id=9973f14b60b0d4f9ee323181d5c8b53934233e711391eb7bf52c19d9d99128e6 namespace=k8s.io
Sep 8 23:48:23.256319 containerd[1478]: time="2025-09-08T23:48:23.256306313Z" level=warning msg="cleaning up after shim disconnected" id=9973f14b60b0d4f9ee323181d5c8b53934233e711391eb7bf52c19d9d99128e6 namespace=k8s.io
Sep 8 23:48:23.256319 containerd[1478]: time="2025-09-08T23:48:23.256315913Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 8 23:48:24.161178 containerd[1478]: time="2025-09-08T23:48:24.161116316Z" level=info msg="CreateContainer within sandbox \"9adc7047428a86d602b42cf2125867b09fae1635d90ff4931f43054305d63296\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 8 23:48:24.195513 containerd[1478]: time="2025-09-08T23:48:24.195457183Z" level=info msg="CreateContainer within sandbox \"9adc7047428a86d602b42cf2125867b09fae1635d90ff4931f43054305d63296\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a8e49d4bc6f44382f72b0031dfe9fada2de15d7a2814f095b8d703db8d6d5cfe\""
Sep 8 23:48:24.196496 containerd[1478]: time="2025-09-08T23:48:24.196459253Z" level=info msg="StartContainer for \"a8e49d4bc6f44382f72b0031dfe9fada2de15d7a2814f095b8d703db8d6d5cfe\""
Sep 8 23:48:24.232752 systemd[1]: Started cri-containerd-a8e49d4bc6f44382f72b0031dfe9fada2de15d7a2814f095b8d703db8d6d5cfe.scope - libcontainer container a8e49d4bc6f44382f72b0031dfe9fada2de15d7a2814f095b8d703db8d6d5cfe.
Sep 8 23:48:24.260417 systemd[1]: cri-containerd-a8e49d4bc6f44382f72b0031dfe9fada2de15d7a2814f095b8d703db8d6d5cfe.scope: Deactivated successfully.
Sep 8 23:48:24.262402 containerd[1478]: time="2025-09-08T23:48:24.262292095Z" level=info msg="StartContainer for \"a8e49d4bc6f44382f72b0031dfe9fada2de15d7a2814f095b8d703db8d6d5cfe\" returns successfully"
Sep 8 23:48:24.300329 containerd[1478]: time="2025-09-08T23:48:24.300079648Z" level=info msg="shim disconnected" id=a8e49d4bc6f44382f72b0031dfe9fada2de15d7a2814f095b8d703db8d6d5cfe namespace=k8s.io
Sep 8 23:48:24.300329 containerd[1478]: time="2025-09-08T23:48:24.300146008Z" level=warning msg="cleaning up after shim disconnected" id=a8e49d4bc6f44382f72b0031dfe9fada2de15d7a2814f095b8d703db8d6d5cfe namespace=k8s.io
Sep 8 23:48:24.300329 containerd[1478]: time="2025-09-08T23:48:24.300153968Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 8 23:48:24.657457 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8e49d4bc6f44382f72b0031dfe9fada2de15d7a2814f095b8d703db8d6d5cfe-rootfs.mount: Deactivated successfully.
Sep 8 23:48:25.163849 containerd[1478]: time="2025-09-08T23:48:25.163799981Z" level=info msg="CreateContainer within sandbox \"9adc7047428a86d602b42cf2125867b09fae1635d90ff4931f43054305d63296\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 8 23:48:25.192923 containerd[1478]: time="2025-09-08T23:48:25.192866585Z" level=info msg="CreateContainer within sandbox \"9adc7047428a86d602b42cf2125867b09fae1635d90ff4931f43054305d63296\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"696861786dcafb83bd6ffb4a1a975e0418b3b3e2da5a07411321f8b994c567b1\""
Sep 8 23:48:25.193645 containerd[1478]: time="2025-09-08T23:48:25.193609018Z" level=info msg="StartContainer for \"696861786dcafb83bd6ffb4a1a975e0418b3b3e2da5a07411321f8b994c567b1\""
Sep 8 23:48:25.238761 systemd[1]: Started cri-containerd-696861786dcafb83bd6ffb4a1a975e0418b3b3e2da5a07411321f8b994c567b1.scope - libcontainer container 696861786dcafb83bd6ffb4a1a975e0418b3b3e2da5a07411321f8b994c567b1.
Sep 8 23:48:25.259859 systemd[1]: cri-containerd-696861786dcafb83bd6ffb4a1a975e0418b3b3e2da5a07411321f8b994c567b1.scope: Deactivated successfully.
Sep 8 23:48:25.263979 containerd[1478]: time="2025-09-08T23:48:25.263941428Z" level=info msg="StartContainer for \"696861786dcafb83bd6ffb4a1a975e0418b3b3e2da5a07411321f8b994c567b1\" returns successfully"
Sep 8 23:48:25.283149 containerd[1478]: time="2025-09-08T23:48:25.283078646Z" level=info msg="shim disconnected" id=696861786dcafb83bd6ffb4a1a975e0418b3b3e2da5a07411321f8b994c567b1 namespace=k8s.io
Sep 8 23:48:25.283149 containerd[1478]: time="2025-09-08T23:48:25.283145646Z" level=warning msg="cleaning up after shim disconnected" id=696861786dcafb83bd6ffb4a1a975e0418b3b3e2da5a07411321f8b994c567b1 namespace=k8s.io
Sep 8 23:48:25.283149 containerd[1478]: time="2025-09-08T23:48:25.283155326Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 8 23:48:25.657515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-696861786dcafb83bd6ffb4a1a975e0418b3b3e2da5a07411321f8b994c567b1-rootfs.mount: Deactivated successfully.
Sep 8 23:48:26.174284 containerd[1478]: time="2025-09-08T23:48:26.174145558Z" level=info msg="CreateContainer within sandbox \"9adc7047428a86d602b42cf2125867b09fae1635d90ff4931f43054305d63296\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 8 23:48:26.197839 containerd[1478]: time="2025-09-08T23:48:26.197791218Z" level=info msg="CreateContainer within sandbox \"9adc7047428a86d602b42cf2125867b09fae1635d90ff4931f43054305d63296\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a2da4940cde4f26be7623f1907ecdc486192a7b507b50b9207f7fb55ce311968\""
Sep 8 23:48:26.199464 containerd[1478]: time="2025-09-08T23:48:26.198512811Z" level=info msg="StartContainer for \"a2da4940cde4f26be7623f1907ecdc486192a7b507b50b9207f7fb55ce311968\""
Sep 8 23:48:26.232772 systemd[1]: Started cri-containerd-a2da4940cde4f26be7623f1907ecdc486192a7b507b50b9207f7fb55ce311968.scope - libcontainer container a2da4940cde4f26be7623f1907ecdc486192a7b507b50b9207f7fb55ce311968.
Sep 8 23:48:26.267115 containerd[1478]: time="2025-09-08T23:48:26.267059291Z" level=info msg="StartContainer for \"a2da4940cde4f26be7623f1907ecdc486192a7b507b50b9207f7fb55ce311968\" returns successfully"
Sep 8 23:48:26.546992 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 8 23:48:29.416476 systemd-networkd[1381]: lxc_health: Link UP
Sep 8 23:48:29.418459 systemd-networkd[1381]: lxc_health: Gained carrier
Sep 8 23:48:30.805827 kubelet[2562]: I0908 23:48:30.805173 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-srkrq" podStartSLOduration=8.805155978 podStartE2EDuration="8.805155978s" podCreationTimestamp="2025-09-08 23:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:48:27.191362653 +0000 UTC m=+90.378246486" watchObservedRunningTime="2025-09-08 23:48:30.805155978 +0000 UTC m=+93.992039731"
Sep 8 23:48:30.842769 systemd-networkd[1381]: lxc_health: Gained IPv6LL
Sep 8 23:48:32.962098 kernel: hrtimer: interrupt took 8540489 ns
Sep 8 23:48:35.391021 sshd[4456]: Connection closed by 10.0.0.1 port 54578
Sep 8 23:48:35.391831 sshd-session[4453]: pam_unix(sshd:session): session closed for user core
Sep 8 23:48:35.397371 systemd[1]: sshd@27-10.0.0.54:22-10.0.0.1:54578.service: Deactivated successfully.
Sep 8 23:48:35.399462 systemd[1]: session-28.scope: Deactivated successfully.
Sep 8 23:48:35.400234 systemd-logind[1456]: Session 28 logged out. Waiting for processes to exit.
Sep 8 23:48:35.401303 systemd-logind[1456]: Removed session 28.