Sep 9 00:40:33.858043 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 9 00:40:33.858064 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Sep 8 22:48:00 -00 2025
Sep 9 00:40:33.858074 kernel: KASLR enabled
Sep 9 00:40:33.858079 kernel: efi: EFI v2.7 by EDK II
Sep 9 00:40:33.858085 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Sep 9 00:40:33.858091 kernel: random: crng init done
Sep 9 00:40:33.858098 kernel: ACPI: Early table checksum verification disabled
Sep 9 00:40:33.858104 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Sep 9 00:40:33.858110 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 00:40:33.858117 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:40:33.858123 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:40:33.858129 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:40:33.858135 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:40:33.858141 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:40:33.858148 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:40:33.858156 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:40:33.858162 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:40:33.858168 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:40:33.858175 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 9 00:40:33.858181 kernel: NUMA: Failed to initialise from firmware
Sep 9 00:40:33.858188 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 00:40:33.858194 kernel: NUMA: NODE_DATA [mem 0xdc959800-0xdc95efff]
Sep 9 00:40:33.858200 kernel: Zone ranges:
Sep 9 00:40:33.858206 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 00:40:33.858212 kernel: DMA32 empty
Sep 9 00:40:33.858220 kernel: Normal empty
Sep 9 00:40:33.858226 kernel: Movable zone start for each node
Sep 9 00:40:33.858232 kernel: Early memory node ranges
Sep 9 00:40:33.858238 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Sep 9 00:40:33.858245 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 9 00:40:33.858251 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 9 00:40:33.858257 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 9 00:40:33.858263 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 9 00:40:33.858270 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 9 00:40:33.858276 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 9 00:40:33.858282 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 00:40:33.858289 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 9 00:40:33.858296 kernel: psci: probing for conduit method from ACPI.
Sep 9 00:40:33.858302 kernel: psci: PSCIv1.1 detected in firmware.
Sep 9 00:40:33.858309 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 00:40:33.858317 kernel: psci: Trusted OS migration not required
Sep 9 00:40:33.858324 kernel: psci: SMC Calling Convention v1.1
Sep 9 00:40:33.858331 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 9 00:40:33.858339 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 9 00:40:33.858346 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 9 00:40:33.858352 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 9 00:40:33.858359 kernel: Detected PIPT I-cache on CPU0
Sep 9 00:40:33.858366 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 00:40:33.858373 kernel: CPU features: detected: Hardware dirty bit management
Sep 9 00:40:33.858380 kernel: CPU features: detected: Spectre-v4
Sep 9 00:40:33.858386 kernel: CPU features: detected: Spectre-BHB
Sep 9 00:40:33.858393 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 9 00:40:33.858399 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 9 00:40:33.858407 kernel: CPU features: detected: ARM erratum 1418040
Sep 9 00:40:33.858414 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 9 00:40:33.858420 kernel: alternatives: applying boot alternatives
Sep 9 00:40:33.858428 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7395fe4f9fb368b2829f9349e2a89e9a9e96b552675d3b261a5a30cf3c6cb15c
Sep 9 00:40:33.858435 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 00:40:33.858442 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 00:40:33.858448 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 00:40:33.858455 kernel: Fallback order for Node 0: 0
Sep 9 00:40:33.858461 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 9 00:40:33.858468 kernel: Policy zone: DMA
Sep 9 00:40:33.858475 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 00:40:33.858482 kernel: software IO TLB: area num 4.
Sep 9 00:40:33.858489 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 9 00:40:33.858496 kernel: Memory: 2386408K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185880K reserved, 0K cma-reserved)
Sep 9 00:40:33.858503 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 00:40:33.858510 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 00:40:33.858517 kernel: rcu: RCU event tracing is enabled.
Sep 9 00:40:33.858524 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 00:40:33.858531 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 00:40:33.858537 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 00:40:33.858544 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 00:40:33.858551 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 00:40:33.858559 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 00:40:33.858566 kernel: GICv3: 256 SPIs implemented
Sep 9 00:40:33.858572 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 00:40:33.858579 kernel: Root IRQ handler: gic_handle_irq
Sep 9 00:40:33.858585 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 9 00:40:33.858592 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 9 00:40:33.858599 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 9 00:40:33.858605 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 9 00:40:33.858612 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Sep 9 00:40:33.858619 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 9 00:40:33.858625 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 9 00:40:33.858632 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 00:40:33.858640 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:40:33.858647 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 9 00:40:33.858654 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 9 00:40:33.858661 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 9 00:40:33.858667 kernel: arm-pv: using stolen time PV
Sep 9 00:40:33.858682 kernel: Console: colour dummy device 80x25
Sep 9 00:40:33.858689 kernel: ACPI: Core revision 20230628
Sep 9 00:40:33.858697 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 9 00:40:33.858703 kernel: pid_max: default: 32768 minimum: 301
Sep 9 00:40:33.858710 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 9 00:40:33.858719 kernel: landlock: Up and running.
Sep 9 00:40:33.858725 kernel: SELinux: Initializing.
Sep 9 00:40:33.858732 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:40:33.858739 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:40:33.858746 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:40:33.858753 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:40:33.858760 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 00:40:33.858767 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 00:40:33.858774 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 9 00:40:33.858782 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 9 00:40:33.858789 kernel: Remapping and enabling EFI services.
Sep 9 00:40:33.858795 kernel: smp: Bringing up secondary CPUs ...
Sep 9 00:40:33.858802 kernel: Detected PIPT I-cache on CPU1
Sep 9 00:40:33.858809 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 9 00:40:33.858816 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 9 00:40:33.858823 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:40:33.858830 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 9 00:40:33.858837 kernel: Detected PIPT I-cache on CPU2
Sep 9 00:40:33.858844 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 9 00:40:33.858852 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 9 00:40:33.858859 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:40:33.858877 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 9 00:40:33.858886 kernel: Detected PIPT I-cache on CPU3
Sep 9 00:40:33.858894 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 9 00:40:33.858901 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 9 00:40:33.858908 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:40:33.858915 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 9 00:40:33.858922 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 00:40:33.858931 kernel: SMP: Total of 4 processors activated.
Sep 9 00:40:33.858938 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 00:40:33.858945 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 9 00:40:33.858953 kernel: CPU features: detected: Common not Private translations
Sep 9 00:40:33.858960 kernel: CPU features: detected: CRC32 instructions
Sep 9 00:40:33.858967 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 9 00:40:33.858975 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 9 00:40:33.858982 kernel: CPU features: detected: LSE atomic instructions
Sep 9 00:40:33.858990 kernel: CPU features: detected: Privileged Access Never
Sep 9 00:40:33.858998 kernel: CPU features: detected: RAS Extension Support
Sep 9 00:40:33.859005 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 9 00:40:33.859012 kernel: CPU: All CPU(s) started at EL1
Sep 9 00:40:33.859019 kernel: alternatives: applying system-wide alternatives
Sep 9 00:40:33.859027 kernel: devtmpfs: initialized
Sep 9 00:40:33.859034 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 00:40:33.859041 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 00:40:33.859049 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 00:40:33.859057 kernel: SMBIOS 3.0.0 present.
Sep 9 00:40:33.859064 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Sep 9 00:40:33.859071 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 00:40:33.859078 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 00:40:33.859086 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 00:40:33.859093 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 00:40:33.859100 kernel: audit: initializing netlink subsys (disabled)
Sep 9 00:40:33.859108 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Sep 9 00:40:33.859116 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 00:40:33.859123 kernel: cpuidle: using governor menu
Sep 9 00:40:33.859130 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 00:40:33.859137 kernel: ASID allocator initialised with 32768 entries
Sep 9 00:40:33.859145 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 00:40:33.859152 kernel: Serial: AMBA PL011 UART driver
Sep 9 00:40:33.859159 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 9 00:40:33.859166 kernel: Modules: 0 pages in range for non-PLT usage
Sep 9 00:40:33.859174 kernel: Modules: 509008 pages in range for PLT usage
Sep 9 00:40:33.859181 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 00:40:33.859189 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 00:40:33.859197 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 00:40:33.859204 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 9 00:40:33.859211 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 00:40:33.859219 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 00:40:33.859226 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 00:40:33.859233 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 9 00:40:33.859241 kernel: ACPI: Added _OSI(Module Device)
Sep 9 00:40:33.859248 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 00:40:33.859256 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 00:40:33.859264 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 00:40:33.859271 kernel: ACPI: Interpreter enabled
Sep 9 00:40:33.859278 kernel: ACPI: Using GIC for interrupt routing
Sep 9 00:40:33.859285 kernel: ACPI: MCFG table detected, 1 entries
Sep 9 00:40:33.859293 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 9 00:40:33.859300 kernel: printk: console [ttyAMA0] enabled
Sep 9 00:40:33.859307 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 00:40:33.859426 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 00:40:33.859500 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 00:40:33.859565 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 00:40:33.859631 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 9 00:40:33.859712 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 9 00:40:33.859722 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 9 00:40:33.859730 kernel: PCI host bridge to bus 0000:00
Sep 9 00:40:33.859798 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 9 00:40:33.859858 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 9 00:40:33.859927 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 9 00:40:33.859983 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 00:40:33.860061 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 9 00:40:33.860136 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 9 00:40:33.860200 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 9 00:40:33.860269 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 9 00:40:33.860350 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 00:40:33.860415 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 00:40:33.860489 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 9 00:40:33.860552 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 9 00:40:33.860609 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 9 00:40:33.860666 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 9 00:40:33.860737 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 9 00:40:33.860747 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 9 00:40:33.860754 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 9 00:40:33.860762 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 9 00:40:33.860769 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 9 00:40:33.860776 kernel: iommu: Default domain type: Translated
Sep 9 00:40:33.860783 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 00:40:33.860790 kernel: efivars: Registered efivars operations
Sep 9 00:40:33.860799 kernel: vgaarb: loaded
Sep 9 00:40:33.860806 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 00:40:33.860814 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 00:40:33.860821 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 00:40:33.860828 kernel: pnp: PnP ACPI init
Sep 9 00:40:33.861005 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 9 00:40:33.861020 kernel: pnp: PnP ACPI: found 1 devices
Sep 9 00:40:33.861028 kernel: NET: Registered PF_INET protocol family
Sep 9 00:40:33.861035 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 00:40:33.861047 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 00:40:33.861054 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 00:40:33.861067 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 00:40:33.861074 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 00:40:33.861081 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 00:40:33.861088 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:40:33.861096 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:40:33.861103 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 00:40:33.861112 kernel: PCI: CLS 0 bytes, default 64
Sep 9 00:40:33.861119 kernel: kvm [1]: HYP mode not available
Sep 9 00:40:33.861127 kernel: Initialise system trusted keyrings
Sep 9 00:40:33.861135 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 00:40:33.861142 kernel: Key type asymmetric registered
Sep 9 00:40:33.861149 kernel: Asymmetric key parser 'x509' registered
Sep 9 00:40:33.861156 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 9 00:40:33.861164 kernel: io scheduler mq-deadline registered
Sep 9 00:40:33.861171 kernel: io scheduler kyber registered
Sep 9 00:40:33.861178 kernel: io scheduler bfq registered
Sep 9 00:40:33.861187 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 9 00:40:33.861194 kernel: ACPI: button: Power Button [PWRB]
Sep 9 00:40:33.861202 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 9 00:40:33.861273 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 9 00:40:33.861284 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 00:40:33.861291 kernel: thunder_xcv, ver 1.0
Sep 9 00:40:33.861298 kernel: thunder_bgx, ver 1.0
Sep 9 00:40:33.861306 kernel: nicpf, ver 1.0
Sep 9 00:40:33.861313 kernel: nicvf, ver 1.0
Sep 9 00:40:33.861387 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 00:40:33.861448 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T00:40:33 UTC (1757378433)
Sep 9 00:40:33.861457 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 00:40:33.861465 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 9 00:40:33.861472 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 9 00:40:33.861480 kernel: watchdog: Hard watchdog permanently disabled
Sep 9 00:40:33.861487 kernel: NET: Registered PF_INET6 protocol family
Sep 9 00:40:33.861494 kernel: Segment Routing with IPv6
Sep 9 00:40:33.861503 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 00:40:33.861511 kernel: NET: Registered PF_PACKET protocol family
Sep 9 00:40:33.861518 kernel: Key type dns_resolver registered
Sep 9 00:40:33.861525 kernel: registered taskstats version 1
Sep 9 00:40:33.861532 kernel: Loading compiled-in X.509 certificates
Sep 9 00:40:33.861540 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: f5b097e6797722e0cc665195a3c415b6be267631'
Sep 9 00:40:33.861547 kernel: Key type .fscrypt registered
Sep 9 00:40:33.861554 kernel: Key type fscrypt-provisioning registered
Sep 9 00:40:33.861561 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 00:40:33.861570 kernel: ima: Allocated hash algorithm: sha1
Sep 9 00:40:33.861577 kernel: ima: No architecture policies found
Sep 9 00:40:33.861585 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 00:40:33.861592 kernel: clk: Disabling unused clocks
Sep 9 00:40:33.861599 kernel: Freeing unused kernel memory: 39424K
Sep 9 00:40:33.861606 kernel: Run /init as init process
Sep 9 00:40:33.861613 kernel: with arguments:
Sep 9 00:40:33.861620 kernel: /init
Sep 9 00:40:33.861627 kernel: with environment:
Sep 9 00:40:33.861636 kernel: HOME=/
Sep 9 00:40:33.861643 kernel: TERM=linux
Sep 9 00:40:33.861650 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 00:40:33.861659 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 9 00:40:33.861669 systemd[1]: Detected virtualization kvm.
Sep 9 00:40:33.861687 systemd[1]: Detected architecture arm64.
Sep 9 00:40:33.861699 systemd[1]: Running in initrd.
Sep 9 00:40:33.861709 systemd[1]: No hostname configured, using default hostname.
Sep 9 00:40:33.861717 systemd[1]: Hostname set to <localhost>.
Sep 9 00:40:33.861725 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 00:40:33.861733 systemd[1]: Queued start job for default target initrd.target.
Sep 9 00:40:33.861741 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 00:40:33.861748 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 00:40:33.861757 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 00:40:33.861765 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 00:40:33.861774 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 00:40:33.861782 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 00:40:33.861791 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 00:40:33.861858 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 00:40:33.861873 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 00:40:33.861883 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 00:40:33.861891 systemd[1]: Reached target paths.target - Path Units.
Sep 9 00:40:33.861901 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 00:40:33.861909 systemd[1]: Reached target swap.target - Swaps.
Sep 9 00:40:33.861917 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 00:40:33.861924 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 00:40:33.861932 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 00:40:33.861940 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 00:40:33.861948 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 9 00:40:33.861956 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:40:33.861964 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 00:40:33.861973 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 00:40:33.861981 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 00:40:33.861988 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 00:40:33.861996 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 00:40:33.862004 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 00:40:33.862012 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 00:40:33.862019 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 00:40:33.862027 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 00:40:33.862036 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:40:33.862044 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 00:40:33.862052 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 00:40:33.862060 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 00:40:33.862089 systemd-journald[237]: Collecting audit messages is disabled.
Sep 9 00:40:33.862109 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 00:40:33.862117 kernel: Bridge firewalling registered
Sep 9 00:40:33.862125 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 00:40:33.862133 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 00:40:33.862143 systemd-journald[237]: Journal started
Sep 9 00:40:33.862161 systemd-journald[237]: Runtime Journal (/run/log/journal/fde75829efd34a388571434b6d3fec4d) is 5.9M, max 47.3M, 41.4M free.
Sep 9 00:40:33.843794 systemd-modules-load[238]: Inserted module 'overlay'
Sep 9 00:40:33.857320 systemd-modules-load[238]: Inserted module 'br_netfilter'
Sep 9 00:40:33.865403 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 00:40:33.864816 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:40:33.866571 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 00:40:33.869939 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 00:40:33.871322 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 00:40:33.874951 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 00:40:33.876304 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 00:40:33.885359 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:40:33.886628 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 00:40:33.888753 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 00:40:33.901813 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 00:40:33.903044 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:40:33.905741 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 00:40:33.918094 dracut-cmdline[281]: dracut-dracut-053
Sep 9 00:40:33.920414 dracut-cmdline[281]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7395fe4f9fb368b2829f9349e2a89e9a9e96b552675d3b261a5a30cf3c6cb15c
Sep 9 00:40:33.928703 systemd-resolved[276]: Positive Trust Anchors:
Sep 9 00:40:33.928720 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 00:40:33.928751 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 00:40:33.933774 systemd-resolved[276]: Defaulting to hostname 'linux'.
Sep 9 00:40:33.934959 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 00:40:33.938729 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 00:40:33.986706 kernel: SCSI subsystem initialized
Sep 9 00:40:33.991689 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 00:40:33.998734 kernel: iscsi: registered transport (tcp)
Sep 9 00:40:34.012118 kernel: iscsi: registered transport (qla4xxx)
Sep 9 00:40:34.012164 kernel: QLogic iSCSI HBA Driver
Sep 9 00:40:34.049685 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 00:40:34.060815 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 00:40:34.076119 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 00:40:34.076156 kernel: device-mapper: uevent: version 1.0.3
Sep 9 00:40:34.076176 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 9 00:40:34.120737 kernel: raid6: neonx8 gen() 15514 MB/s
Sep 9 00:40:34.137698 kernel: raid6: neonx4 gen() 15568 MB/s
Sep 9 00:40:34.154700 kernel: raid6: neonx2 gen() 13252 MB/s
Sep 9 00:40:34.171698 kernel: raid6: neonx1 gen() 10529 MB/s
Sep 9 00:40:34.188704 kernel: raid6: int64x8 gen() 6956 MB/s
Sep 9 00:40:34.205694 kernel: raid6: int64x4 gen() 7346 MB/s
Sep 9 00:40:34.222701 kernel: raid6: int64x2 gen() 6017 MB/s
Sep 9 00:40:34.239696 kernel: raid6: int64x1 gen() 5053 MB/s
Sep 9 00:40:34.239720 kernel: raid6: using algorithm neonx4 gen() 15568 MB/s
Sep 9 00:40:34.256711 kernel: raid6: .... xor() 12090 MB/s, rmw enabled
Sep 9 00:40:34.256757 kernel: raid6: using neon recovery algorithm
Sep 9 00:40:34.261731 kernel: xor: measuring software checksum speed
Sep 9 00:40:34.261767 kernel: 8regs : 19764 MB/sec
Sep 9 00:40:34.262777 kernel: 32regs : 19245 MB/sec
Sep 9 00:40:34.262795 kernel: arm64_neon : 27043 MB/sec
Sep 9 00:40:34.262805 kernel: xor: using function: arm64_neon (27043 MB/sec)
Sep 9 00:40:34.310707 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 00:40:34.321045 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 00:40:34.332814 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 00:40:34.344605 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Sep 9 00:40:34.347736 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 00:40:34.359809 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 00:40:34.370800 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Sep 9 00:40:34.394983 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 00:40:34.405882 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 00:40:34.444854 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 00:40:34.453098 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 00:40:34.464720 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 00:40:34.466064 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 00:40:34.468981 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 00:40:34.471932 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 00:40:34.482019 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 00:40:34.493195 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 00:40:34.498381 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 9 00:40:34.498533 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 9 00:40:34.504906 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 00:40:34.507713 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 00:40:34.507737 kernel: GPT:9289727 != 19775487
Sep 9 00:40:34.507747 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 00:40:34.507756 kernel: GPT:9289727 != 19775487
Sep 9 00:40:34.507765 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 00:40:34.507774 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:40:34.505016 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:40:34.509007 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 00:40:34.510153 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 00:40:34.510275 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:40:34.513017 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:40:34.527698 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:40:34.534723 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (523)
Sep 9 00:40:34.537721 kernel: BTRFS: device fsid 7c1eef97-905d-47ac-bb4a-010204f95541 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (506)
Sep 9 00:40:34.540726 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:40:34.545560 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 9 00:40:34.550291 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 9 00:40:34.557540 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 00:40:34.561484 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 9 00:40:34.562794 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 9 00:40:34.579847 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 00:40:34.582731 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 00:40:34.585791 disk-uuid[549]: Primary Header is updated.
Sep 9 00:40:34.585791 disk-uuid[549]: Secondary Entries is updated.
Sep 9 00:40:34.585791 disk-uuid[549]: Secondary Header is updated.
Sep 9 00:40:34.590728 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:40:34.603188 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:40:35.596688 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:40:35.597639 disk-uuid[551]: The operation has completed successfully.
Sep 9 00:40:35.622393 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 00:40:35.622487 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 00:40:35.638812 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 00:40:35.641563 sh[573]: Success
Sep 9 00:40:35.650708 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 9 00:40:35.693081 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 00:40:35.694958 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 00:40:35.695921 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 00:40:35.706432 kernel: BTRFS info (device dm-0): first mount of filesystem 7c1eef97-905d-47ac-bb4a-010204f95541
Sep 9 00:40:35.706463 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 9 00:40:35.706473 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 9 00:40:35.707972 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 00:40:35.707991 kernel: BTRFS info (device dm-0): using free space tree
Sep 9 00:40:35.711937 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 00:40:35.713245 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 00:40:35.713935 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 00:40:35.716748 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 00:40:35.726207 kernel: BTRFS info (device vda6): first mount of filesystem 995cc93a-6fc6-4281-a722-821717f17817
Sep 9 00:40:35.726239 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 00:40:35.726250 kernel: BTRFS info (device vda6): using free space tree
Sep 9 00:40:35.728695 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 9 00:40:35.735272 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 9 00:40:35.736682 kernel: BTRFS info (device vda6): last unmount of filesystem 995cc93a-6fc6-4281-a722-821717f17817
Sep 9 00:40:35.743346 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 00:40:35.752855 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 00:40:35.814752 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 00:40:35.815594 ignition[668]: Ignition 2.19.0
Sep 9 00:40:35.815600 ignition[668]: Stage: fetch-offline
Sep 9 00:40:35.815631 ignition[668]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:40:35.815638 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:40:35.815802 ignition[668]: parsed url from cmdline: ""
Sep 9 00:40:35.815805 ignition[668]: no config URL provided
Sep 9 00:40:35.815809 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 00:40:35.815816 ignition[668]: no config at "/usr/lib/ignition/user.ign"
Sep 9 00:40:35.815837 ignition[668]: op(1): [started] loading QEMU firmware config module
Sep 9 00:40:35.815843 ignition[668]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 9 00:40:35.821732 ignition[668]: op(1): [finished] loading QEMU firmware config module
Sep 9 00:40:35.821750 ignition[668]: QEMU firmware config was not found. Ignoring...
Sep 9 00:40:35.830823 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 00:40:35.849603 systemd-networkd[765]: lo: Link UP
Sep 9 00:40:35.849614 systemd-networkd[765]: lo: Gained carrier
Sep 9 00:40:35.850320 systemd-networkd[765]: Enumeration completed
Sep 9 00:40:35.850594 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 00:40:35.850920 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:40:35.850923 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 00:40:35.851589 systemd-networkd[765]: eth0: Link UP
Sep 9 00:40:35.851592 systemd-networkd[765]: eth0: Gained carrier
Sep 9 00:40:35.851598 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:40:35.853267 systemd[1]: Reached target network.target - Network.
Sep 9 00:40:35.869718 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.154/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 00:40:35.883714 ignition[668]: parsing config with SHA512: 8dc3790875b51c5328758ab3f136f11bdc457555904828e2c5d6513846fb54b7ccf6b9d664c4bfd8ebd71388a28c6e43c6b4e749562f665e095afab0b71b26ab
Sep 9 00:40:35.887940 unknown[668]: fetched base config from "system"
Sep 9 00:40:35.887949 unknown[668]: fetched user config from "qemu"
Sep 9 00:40:35.888639 ignition[668]: fetch-offline: fetch-offline passed
Sep 9 00:40:35.888736 ignition[668]: Ignition finished successfully
Sep 9 00:40:35.892710 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 00:40:35.893836 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 9 00:40:35.907885 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 00:40:35.918572 ignition[769]: Ignition 2.19.0
Sep 9 00:40:35.918582 ignition[769]: Stage: kargs
Sep 9 00:40:35.918766 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:40:35.918776 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:40:35.919628 ignition[769]: kargs: kargs passed
Sep 9 00:40:35.922350 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 00:40:35.919672 ignition[769]: Ignition finished successfully
Sep 9 00:40:35.937849 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 00:40:35.947241 ignition[778]: Ignition 2.19.0
Sep 9 00:40:35.947248 ignition[778]: Stage: disks
Sep 9 00:40:35.947403 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:40:35.947412 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:40:35.948280 ignition[778]: disks: disks passed
Sep 9 00:40:35.950733 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 00:40:35.948320 ignition[778]: Ignition finished successfully
Sep 9 00:40:35.952704 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 00:40:35.954172 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 00:40:35.956115 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 00:40:35.957603 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 00:40:35.959601 systemd[1]: Reached target basic.target - Basic System.
Sep 9 00:40:35.965850 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 00:40:35.976854 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 9 00:40:35.983454 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 00:40:35.986060 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 00:40:36.038701 kernel: EXT4-fs (vda9): mounted filesystem d987a4c8-1278-4a59-9d40-0c91e08e9423 r/w with ordered data mode. Quota mode: none.
Sep 9 00:40:36.038706 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 00:40:36.039973 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 00:40:36.059789 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 00:40:36.061668 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 00:40:36.063288 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 00:40:36.063331 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 00:40:36.063353 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 00:40:36.074953 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (798)
Sep 9 00:40:36.074975 kernel: BTRFS info (device vda6): first mount of filesystem 995cc93a-6fc6-4281-a722-821717f17817
Sep 9 00:40:36.074991 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 00:40:36.075001 kernel: BTRFS info (device vda6): using free space tree
Sep 9 00:40:36.070350 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 00:40:36.072844 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 00:40:36.081770 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 9 00:40:36.082814 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 00:40:36.111686 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 00:40:36.115829 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Sep 9 00:40:36.122699 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 00:40:36.126540 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 00:40:36.194379 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 00:40:36.204811 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 00:40:36.206330 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 00:40:36.211686 kernel: BTRFS info (device vda6): last unmount of filesystem 995cc93a-6fc6-4281-a722-821717f17817
Sep 9 00:40:36.229070 ignition[911]: INFO : Ignition 2.19.0
Sep 9 00:40:36.229070 ignition[911]: INFO : Stage: mount
Sep 9 00:40:36.230503 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:40:36.230503 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:40:36.230087 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 00:40:36.237462 ignition[911]: INFO : mount: mount passed
Sep 9 00:40:36.237462 ignition[911]: INFO : Ignition finished successfully
Sep 9 00:40:36.232984 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 00:40:36.240870 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 00:40:36.705575 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 00:40:36.718863 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 00:40:36.725026 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (925)
Sep 9 00:40:36.725073 kernel: BTRFS info (device vda6): first mount of filesystem 995cc93a-6fc6-4281-a722-821717f17817
Sep 9 00:40:36.725084 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 00:40:36.726707 kernel: BTRFS info (device vda6): using free space tree
Sep 9 00:40:36.728697 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 9 00:40:36.729233 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 00:40:36.745718 ignition[942]: INFO : Ignition 2.19.0
Sep 9 00:40:36.745718 ignition[942]: INFO : Stage: files
Sep 9 00:40:36.747420 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:40:36.747420 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:40:36.747420 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 00:40:36.750974 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 00:40:36.750974 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 00:40:36.753994 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 00:40:36.755408 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 00:40:36.755408 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 00:40:36.754576 unknown[942]: wrote ssh authorized keys file for user: core
Sep 9 00:40:36.759225 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 9 00:40:36.759225 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 9 00:40:36.821409 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 00:40:36.890041 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 9 00:40:36.890041 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 00:40:36.893834 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 9 00:40:37.052035 systemd-networkd[765]: eth0: Gained IPv6LL
Sep 9 00:40:37.097829 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 9 00:40:37.257138 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 00:40:37.258696 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 00:40:37.258696 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 00:40:37.258696 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 00:40:37.258696 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 00:40:37.258696 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 00:40:37.258696 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 00:40:37.258696 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 00:40:37.258696 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 00:40:37.272401 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 00:40:37.272401 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 00:40:37.272401 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 00:40:37.272401 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 00:40:37.272401 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 00:40:37.272401 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 9 00:40:37.656664 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 9 00:40:38.082310 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 00:40:38.082310 ignition[942]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 9 00:40:38.086808 ignition[942]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 00:40:38.086808 ignition[942]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 00:40:38.086808 ignition[942]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 9 00:40:38.086808 ignition[942]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 9 00:40:38.086808 ignition[942]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 00:40:38.086808 ignition[942]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 00:40:38.086808 ignition[942]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 9 00:40:38.086808 ignition[942]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 9 00:40:38.111829 ignition[942]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 00:40:38.111829 ignition[942]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 00:40:38.111829 ignition[942]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 9 00:40:38.111829 ignition[942]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 00:40:38.111829 ignition[942]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 00:40:38.111829 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 00:40:38.111829 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 00:40:38.111829 ignition[942]: INFO : files: files passed
Sep 9 00:40:38.111829 ignition[942]: INFO : Ignition finished successfully
Sep 9 00:40:38.115624 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 00:40:38.129097 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 00:40:38.132452 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 00:40:38.135835 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 00:40:38.135974 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 00:40:38.142625 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 9 00:40:38.144699 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:40:38.144699 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:40:38.148381 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:40:38.150451 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 00:40:38.151859 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 00:40:38.161869 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 00:40:38.181861 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 00:40:38.181976 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 00:40:38.185057 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 00:40:38.186841 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 00:40:38.188634 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 00:40:38.189482 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 00:40:38.207618 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:40:38.223886 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 00:40:38.231905 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:40:38.233219 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:40:38.235102 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 00:40:38.236717 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 00:40:38.236840 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:40:38.239170 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 00:40:38.240947 systemd[1]: Stopped target basic.target - Basic System. Sep 9 00:40:38.242617 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 00:40:38.244343 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:40:38.246275 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 00:40:38.248044 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 00:40:38.249620 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 00:40:38.251320 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 00:40:38.253132 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 00:40:38.254619 systemd[1]: Stopped target swap.target - Swaps. Sep 9 00:40:38.256088 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 00:40:38.256215 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:40:38.258482 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:40:38.260268 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:40:38.262216 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 00:40:38.263727 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:40:38.265046 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 00:40:38.265155 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 00:40:38.267800 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 00:40:38.267922 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:40:38.269945 systemd[1]: Stopped target paths.target - Path Units. Sep 9 00:40:38.271501 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 00:40:38.271596 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:40:38.273611 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 00:40:38.274975 systemd[1]: Stopped target sockets.target - Socket Units. Sep 9 00:40:38.276545 systemd[1]: iscsid.socket: Deactivated successfully. 
Sep 9 00:40:38.276633 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 00:40:38.278443 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 00:40:38.278520 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:40:38.280008 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 00:40:38.280109 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:40:38.281900 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 00:40:38.282009 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 00:40:38.296890 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 00:40:38.298237 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 00:40:38.299113 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 00:40:38.299228 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:40:38.300999 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 00:40:38.301094 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:40:38.307061 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 00:40:38.307148 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 00:40:38.310494 ignition[998]: INFO : Ignition 2.19.0 Sep 9 00:40:38.310494 ignition[998]: INFO : Stage: umount Sep 9 00:40:38.313119 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:40:38.313119 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:40:38.313119 ignition[998]: INFO : umount: umount passed Sep 9 00:40:38.313119 ignition[998]: INFO : Ignition finished successfully Sep 9 00:40:38.313446 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 00:40:38.313533 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 00:40:38.315007 systemd[1]: Stopped target network.target - Network. Sep 9 00:40:38.316814 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 00:40:38.316878 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 00:40:38.318329 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 00:40:38.318368 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 00:40:38.320024 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 00:40:38.320072 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 00:40:38.321568 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 00:40:38.321611 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 00:40:38.323478 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 00:40:38.325102 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 00:40:38.327564 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 00:40:38.328078 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 00:40:38.328153 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 00:40:38.329903 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 00:40:38.329979 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 00:40:38.332926 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Sep 9 00:40:38.333729 systemd-networkd[765]: eth0: DHCPv6 lease lost Sep 9 00:40:38.333759 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 00:40:38.336378 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 00:40:38.336492 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 00:40:38.339199 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 00:40:38.339254 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:40:38.358802 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 00:40:38.359630 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 00:40:38.359712 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:40:38.361733 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:40:38.361780 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:40:38.363784 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 00:40:38.363838 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 00:40:38.365827 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 00:40:38.365886 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:40:38.368041 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:40:38.378334 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 00:40:38.378467 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 00:40:38.385343 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 00:40:38.385495 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:40:38.387886 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 00:40:38.387925 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 00:40:38.389791 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 00:40:38.389820 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:40:38.391526 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 00:40:38.391569 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:40:38.394265 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 00:40:38.394306 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 00:40:38.397194 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 00:40:38.397243 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:40:38.420857 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 00:40:38.421945 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 00:40:38.422005 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:40:38.424091 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 9 00:40:38.424137 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 00:40:38.426088 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 00:40:38.426136 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Sep 9 00:40:38.428149 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:40:38.428191 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:40:38.430509 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 00:40:38.431738 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 00:40:38.434213 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 00:40:38.436380 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 00:40:38.447269 systemd[1]: Switching root. Sep 9 00:40:38.464498 systemd-journald[237]: Journal stopped Sep 9 00:40:39.142925 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Sep 9 00:40:39.142988 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 00:40:39.143000 kernel: SELinux: policy capability open_perms=1 Sep 9 00:40:39.143011 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 00:40:39.143023 kernel: SELinux: policy capability always_check_network=0 Sep 9 00:40:39.143033 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 00:40:39.143043 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 00:40:39.143052 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 00:40:39.143062 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 00:40:39.143071 kernel: audit: type=1403 audit(1757378438.621:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 00:40:39.143082 systemd[1]: Successfully loaded SELinux policy in 31.586ms. Sep 9 00:40:39.143099 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.790ms. Sep 9 00:40:39.143110 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 9 00:40:39.143123 systemd[1]: Detected virtualization kvm. Sep 9 00:40:39.143133 systemd[1]: Detected architecture arm64. Sep 9 00:40:39.143143 systemd[1]: Detected first boot. Sep 9 00:40:39.143153 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:40:39.143164 zram_generator::config[1044]: No configuration found. Sep 9 00:40:39.143175 systemd[1]: Populated /etc with preset unit settings. Sep 9 00:40:39.143185 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 00:40:39.143197 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 00:40:39.143209 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 00:40:39.143220 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 00:40:39.143230 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 00:40:39.143241 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 00:40:39.143255 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 00:40:39.143265 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 00:40:39.143276 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 00:40:39.143287 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 00:40:39.143297 systemd[1]: Created slice user.slice - User and Session Slice. 
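Every record in this transcript has the same shape: a "Sep 9 HH:MM:SS.ffffff" timestamp, a source ("kernel", "systemd[1]", "systemd-journald[237]", ...), a colon, then the message. A hypothetical helper that splits a run of this text back into records, assuming exactly that layout (it ignores rarer source forms such as the parenthesised exec names that appear later):

```python
import re

# Split journal text like the transcript above into records. Assumes the
# layout "Sep  9 00:40:39.142925 systemd-journald[237]: message ...";
# the [pid] is optional (kernel lines carry none).
RECORD = re.compile(
    r"(?P<ts>[A-Z][a-z]{2} +\d{1,2} \d{2}:\d{2}:\d{2}\.\d{6}) "
    r"(?P<src>[\w.@-]+)(?:\[(?P<pid>\d+)\])?: "
)

def split_records(text):
    """Yield (timestamp, source, pid-or-None, message) tuples."""
    matches = list(RECORD.finditer(text))
    for cur, nxt in zip(matches, matches[1:] + [None]):
        end = nxt.start() if nxt else len(text)
        yield cur["ts"], cur["src"], cur["pid"], text[cur.end():end].strip()

demo = ("Sep 9 00:40:39.142925 systemd-journald[237]: Received SIGTERM "
        "from PID 1 (systemd). Sep 9 00:40:39.142988 kernel: SELinux: "
        "policy capability network_peer_controls=1")
for rec in split_records(demo):
    print(rec)
```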
Sep 9 00:40:39.143309 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:40:39.143320 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:40:39.143330 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 00:40:39.143341 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 00:40:39.143351 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 00:40:39.143362 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 00:40:39.143372 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 9 00:40:39.143382 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:40:39.143395 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 00:40:39.143407 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 00:40:39.143418 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 00:40:39.143430 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 00:40:39.143440 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:40:39.143450 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 00:40:39.143460 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:40:39.143471 systemd[1]: Reached target swap.target - Swaps. Sep 9 00:40:39.143483 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 00:40:39.143493 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 00:40:39.143503 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:40:39.143514 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 00:40:39.143525 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:40:39.143535 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 00:40:39.143546 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 00:40:39.143556 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 00:40:39.143567 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 00:40:39.143579 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 00:40:39.143589 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 00:40:39.143599 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 00:40:39.143610 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 00:40:39.143621 systemd[1]: Reached target machines.target - Containers. Sep 9 00:40:39.143632 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 00:40:39.143642 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:40:39.143653 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
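Unit names above such as system-serial\x2dgetty.slice and dev-disk-by\x2dlabel-OEM.device use systemd's name escaping: "/" separators in a path become "-", and bytes that would be ambiguous in a unit name (a literal "-", for instance) become \xXX. The real tool for this is systemd-escape(1); here is a small sketch of just the path rule:

```python
def systemd_escape_path(path: str) -> str:
    """Sketch of systemd path escaping as seen in the unit names above:
    strip outer '/', turn '/' into '-', keep [A-Za-z0-9:_.], and render
    everything else (including a literal '-') as \\xXX."""
    stripped = path.strip("/")
    out = []
    for i, byte in enumerate(stripped.encode()):
        c = chr(byte)
        if c == "/":
            out.append("-")
        elif (c.isascii() and c.isalnum()) or c in ":_" or (c == "." and i > 0):
            out.append(c)
        else:
            out.append(f"\\x{byte:02x}")
    return "".join(out) or "-"

# Reproduces the device unit name logged above:
print(systemd_escape_path("/dev/disk/by-label/OEM") + ".device")
# -> dev-disk-by\x2dlabel-OEM.device
```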
Sep 9 00:40:39.143665 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 00:40:39.143685 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:40:39.143696 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:40:39.143707 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:40:39.143718 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 00:40:39.143728 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:40:39.143739 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 00:40:39.143749 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 00:40:39.143760 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 00:40:39.143772 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 00:40:39.143783 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 00:40:39.143793 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:40:39.143803 kernel: fuse: init (API version 7.39) Sep 9 00:40:39.143813 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:40:39.143823 kernel: ACPI: bus type drm_connector registered Sep 9 00:40:39.143832 kernel: loop: module loaded Sep 9 00:40:39.143842 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 00:40:39.143859 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 00:40:39.143872 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:40:39.143883 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 00:40:39.143895 systemd[1]: Stopped verity-setup.service. Sep 9 00:40:39.143905 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 00:40:39.143915 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 00:40:39.143944 systemd-journald[1110]: Collecting audit messages is disabled. Sep 9 00:40:39.143972 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 00:40:39.143985 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 00:40:39.143997 systemd-journald[1110]: Journal started Sep 9 00:40:39.144018 systemd-journald[1110]: Runtime Journal (/run/log/journal/fde75829efd34a388571434b6d3fec4d) is 5.9M, max 47.3M, 41.4M free. Sep 9 00:40:38.965310 systemd[1]: Queued start job for default target multi-user.target. Sep 9 00:40:38.979507 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 00:40:38.979863 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 00:40:39.147280 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:40:39.147950 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 00:40:39.149214 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 00:40:39.150448 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 00:40:39.151930 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:40:39.153405 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 00:40:39.153537 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Sep 9 00:40:39.155020 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:40:39.155154 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:40:39.156536 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:40:39.156823 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:40:39.158126 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:40:39.158264 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:40:39.159757 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 00:40:39.159892 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 00:40:39.161378 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:40:39.161509 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:40:39.162938 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:40:39.164295 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:40:39.166017 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 00:40:39.178171 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 00:40:39.184799 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 00:40:39.186823 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 00:40:39.187924 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 00:40:39.187955 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:40:39.189876 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 9 00:40:39.192016 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 00:40:39.194132 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 00:40:39.195234 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:40:39.196584 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 00:40:39.198788 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 00:40:39.199915 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:40:39.202824 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 00:40:39.203985 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:40:39.206443 systemd-journald[1110]: Time spent on flushing to /var/log/journal/fde75829efd34a388571434b6d3fec4d is 15.488ms for 857 entries. Sep 9 00:40:39.206443 systemd-journald[1110]: System Journal (/var/log/journal/fde75829efd34a388571434b6d3fec4d) is 8.0M, max 195.6M, 187.6M free. Sep 9 00:40:39.235513 systemd-journald[1110]: Received client request to flush runtime journal. Sep 9 00:40:39.235563 kernel: loop0: detected capacity change from 0 to 114432 Sep 9 00:40:39.235580 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 00:40:39.207323 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 9 00:40:39.209904 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 00:40:39.219284 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 00:40:39.222038 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:40:39.223638 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 00:40:39.225477 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 00:40:39.228296 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 00:40:39.233740 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 00:40:39.241833 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 00:40:39.246102 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:40:39.248972 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 00:40:39.255707 kernel: loop1: detected capacity change from 0 to 114328 Sep 9 00:40:39.260864 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 9 00:40:39.262028 systemd-tmpfiles[1156]: ACLs are not supported, ignoring. Sep 9 00:40:39.262409 systemd-tmpfiles[1156]: ACLs are not supported, ignoring. Sep 9 00:40:39.266221 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 9 00:40:39.271531 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 00:40:39.274805 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 00:40:39.279983 udevadm[1171]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 9 00:40:39.286785 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 00:40:39.288882 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 9 00:40:39.294835 kernel: loop2: detected capacity change from 0 to 203944 Sep 9 00:40:39.309718 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 00:40:39.315907 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:40:39.329698 kernel: loop3: detected capacity change from 0 to 114432 Sep 9 00:40:39.331043 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Sep 9 00:40:39.331059 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Sep 9 00:40:39.334987 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:40:39.340099 kernel: loop4: detected capacity change from 0 to 114328 Sep 9 00:40:39.344697 kernel: loop5: detected capacity change from 0 to 203944 Sep 9 00:40:39.348961 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 00:40:39.349359 (sd-merge)[1182]: Merged extensions into '/usr'. Sep 9 00:40:39.354646 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 00:40:39.354664 systemd[1]: Reloading... Sep 9 00:40:39.412885 zram_generator::config[1212]: No configuration found. Sep 9 00:40:39.471191 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Sep 9 00:40:39.491230 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:40:39.528179 systemd[1]: Reloading finished in 173 ms. Sep 9 00:40:39.555366 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 00:40:39.559513 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 00:40:39.571866 systemd[1]: Starting ensure-sysext.service... Sep 9 00:40:39.573982 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:40:39.579186 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)... Sep 9 00:40:39.579269 systemd[1]: Reloading... Sep 9 00:40:39.590409 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 00:40:39.590670 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 00:40:39.591305 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 00:40:39.591505 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Sep 9 00:40:39.591561 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Sep 9 00:40:39.593796 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:40:39.593810 systemd-tmpfiles[1245]: Skipping /boot Sep 9 00:40:39.600330 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:40:39.600347 systemd-tmpfiles[1245]: Skipping /boot Sep 9 00:40:39.626784 zram_generator::config[1271]: No configuration found. Sep 9 00:40:39.713790 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:40:39.750229 systemd[1]: Reloading finished in 170 ms. Sep 9 00:40:39.772708 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 00:40:39.780139 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:40:39.787522 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 9 00:40:39.789918 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 00:40:39.792540 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 00:40:39.796231 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 00:40:39.799563 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:40:39.804396 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 00:40:39.808796 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:40:39.810994 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:40:39.815551 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:40:39.818501 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Sep 9 00:40:39.819703 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:40:39.823348 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 00:40:39.825150 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 00:40:39.829225 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:40:39.830691 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:40:39.831874 systemd-udevd[1319]: Using default interface naming scheme 'v255'. Sep 9 00:40:39.832923 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:40:39.834977 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:40:39.836890 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:40:39.837023 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:40:39.847222 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:40:39.852935 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:40:39.855974 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:40:39.858255 augenrules[1339]: No rules Sep 9 00:40:39.860042 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:40:39.861740 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:40:39.863517 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 00:40:39.868897 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 00:40:39.871669 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:40:39.873713 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 9 00:40:39.875331 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 00:40:39.877108 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 00:40:39.878859 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:40:39.880711 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:40:39.882832 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:40:39.882987 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:40:39.886291 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:40:39.886421 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:40:39.887976 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 00:40:39.899738 systemd[1]: Finished ensure-sysext.service. Sep 9 00:40:39.907256 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:40:39.915994 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:40:39.920652 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:40:39.923524 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:40:39.931908 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Sep 9 00:40:39.933353 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:40:39.936907 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:40:39.937723 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1355) Sep 9 00:40:39.940023 systemd-resolved[1313]: Positive Trust Anchors: Sep 9 00:40:39.940042 systemd-resolved[1313]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:40:39.940074 systemd-resolved[1313]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:40:39.951360 systemd-resolved[1313]: Defaulting to hostname 'linux'. Sep 9 00:40:39.958028 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 00:40:39.959546 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:40:39.959987 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 00:40:39.961634 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:40:39.963819 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:40:39.965261 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:40:39.965422 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:40:39.967227 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:40:39.967363 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:40:39.968751 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:40:39.968909 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:40:39.970264 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 9 00:40:39.995067 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 00:40:39.997131 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:40:40.002898 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 00:40:40.004123 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:40:40.004192 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:40:40.024482 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
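The "Positive Trust Anchors" entry above is the DNSSEC root trust anchor in DS-record form, which systemd-resolved ships as its built-in default. The four fields after the owner name are key tag, algorithm, digest type, and digest (RFC 4034); decoding the one in the log:

```python
# Decode the DS-format trust anchor logged by systemd-resolved above.
ds = (". IN DS 20326 8 2 "
      "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

owner, _cls, _rtype, key_tag, alg, digest_type, digest = ds.split()
print("owner       :", owner)        # "." = the DNS root zone
print("key tag     :", key_tag)      # 20326 = the 2017 root KSK
print("algorithm   :", alg, "(RSA/SHA-256)")
print("digest type :", digest_type, "(SHA-256)")
print("digest      :", digest)       # SHA-256 over the root DNSKEY record
```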
Sep 9 00:40:40.044174 systemd-networkd[1385]: lo: Link UP Sep 9 00:40:40.044187 systemd-networkd[1385]: lo: Gained carrier Sep 9 00:40:40.044927 systemd-networkd[1385]: Enumeration completed Sep 9 00:40:40.047324 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:40:40.047689 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:40:40.047694 systemd-networkd[1385]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:40:40.048329 systemd-networkd[1385]: eth0: Link UP Sep 9 00:40:40.048333 systemd-networkd[1385]: eth0: Gained carrier Sep 9 00:40:40.048346 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:40:40.048980 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 00:40:40.050486 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:40:40.052338 systemd[1]: Reached target network.target - Network. Sep 9 00:40:40.053423 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 00:40:40.055836 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 00:40:40.060111 systemd-networkd[1385]: eth0: DHCPv4 address 10.0.0.154/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:40:40.061434 systemd-timesyncd[1389]: Network configuration changed, trying to establish connection. Sep 9 00:40:40.062502 systemd-timesyncd[1389]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 9 00:40:40.062757 systemd-timesyncd[1389]: Initial clock synchronization to Tue 2025-09-09 00:40:40.153934 UTC. Sep 9 00:40:40.065008 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 9 00:40:40.075956 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 9 00:40:40.085340 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:40:40.086656 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 00:40:40.116415 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 9 00:40:40.119097 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:40:40.120282 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:40:40.121538 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 00:40:40.122862 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 00:40:40.124256 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 00:40:40.125476 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 00:40:40.126804 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 00:40:40.128080 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 00:40:40.128123 systemd[1]: Reached target paths.target - Path Units. Sep 9 00:40:40.129094 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:40:40.130956 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
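The timesyncd lines above let you read off the size of the initial clock adjustment: the reply from 10.0.0.1:123 was handled at 00:40:40.062757 on the not-yet-synchronized clock, and the clock was then set to 00:40:40.153934 UTC, a forward step of roughly 91 ms (roughly, because the first timestamp also includes some processing time):

```python
from datetime import datetime

# Approximate initial clock step from the two systemd-timesyncd lines above.
before = datetime.strptime("00:40:40.062757", "%H:%M:%S.%f")
after  = datetime.strptime("00:40:40.153934", "%H:%M:%S.%f")
print(f"clock stepped forward ~{(after - before).total_seconds() * 1000:.1f} ms")
# -> clock stepped forward ~91.2 ms
```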
Sep 9 00:40:40.133481 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 00:40:40.143732 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 00:40:40.146039 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 9 00:40:40.147656 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 00:40:40.148858 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:40:40.149811 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:40:40.150779 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:40:40.150814 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:40:40.151742 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 00:40:40.153665 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 00:40:40.156826 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 00:40:40.155835 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 00:40:40.159862 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 00:40:40.161851 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 00:40:40.162962 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 00:40:40.165508 jq[1416]: false Sep 9 00:40:40.165757 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 00:40:40.167846 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 00:40:40.171630 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 00:40:40.177706 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 00:40:40.179654 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 00:40:40.180088 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 00:40:40.181890 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 00:40:40.182983 extend-filesystems[1417]: Found loop3 Sep 9 00:40:40.182983 extend-filesystems[1417]: Found loop4 Sep 9 00:40:40.182983 extend-filesystems[1417]: Found loop5 Sep 9 00:40:40.182983 extend-filesystems[1417]: Found vda Sep 9 00:40:40.182983 extend-filesystems[1417]: Found vda1 Sep 9 00:40:40.182983 extend-filesystems[1417]: Found vda2 Sep 9 00:40:40.182983 extend-filesystems[1417]: Found vda3 Sep 9 00:40:40.182983 extend-filesystems[1417]: Found usr Sep 9 00:40:40.194400 extend-filesystems[1417]: Found vda4 Sep 9 00:40:40.194400 extend-filesystems[1417]: Found vda6 Sep 9 00:40:40.194400 extend-filesystems[1417]: Found vda7 Sep 9 00:40:40.194400 extend-filesystems[1417]: Found vda9 Sep 9 00:40:40.194400 extend-filesystems[1417]: Checking size of /dev/vda9 Sep 9 00:40:40.184370 dbus-daemon[1415]: [system] SELinux support is enabled Sep 9 00:40:40.183699 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Sep 9 00:40:40.201076 extend-filesystems[1417]: Resized partition /dev/vda9 Sep 9 00:40:40.187227 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 00:40:40.203868 extend-filesystems[1437]: resize2fs 1.47.1 (20-May-2024) Sep 9 00:40:40.194705 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 9 00:40:40.209102 jq[1428]: true Sep 9 00:40:40.202097 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 00:40:40.202253 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 00:40:40.202498 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 00:40:40.202624 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 00:40:40.205928 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 00:40:40.206061 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 00:40:40.213700 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 00:40:40.217814 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1353) Sep 9 00:40:40.222366 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 00:40:40.222403 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 00:40:40.224233 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 00:40:40.224252 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 00:40:40.224550 (ntainerd)[1447]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 00:40:40.232712 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 00:40:40.246515 update_engine[1425]: I20250909 00:40:40.237975 1425 main.cc:92] Flatcar Update Engine starting Sep 9 00:40:40.246515 update_engine[1425]: I20250909 00:40:40.240603 1425 update_check_scheduler.cc:74] Next update check in 4m44s Sep 9 00:40:40.246945 tar[1440]: linux-arm64/helm Sep 9 00:40:40.240526 systemd[1]: Started update-engine.service - Update Engine. Sep 9 00:40:40.247152 jq[1441]: true Sep 9 00:40:40.246559 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 00:40:40.247503 systemd-logind[1423]: Watching system buttons on /dev/input/event0 (Power Button) Sep 9 00:40:40.247692 systemd-logind[1423]: New seat seat0. Sep 9 00:40:40.247990 extend-filesystems[1437]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 00:40:40.247990 extend-filesystems[1437]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 00:40:40.247990 extend-filesystems[1437]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 9 00:40:40.258175 extend-filesystems[1417]: Resized filesystem in /dev/vda9 Sep 9 00:40:40.250798 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 00:40:40.250991 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 00:40:40.256865 systemd[1]: Started systemd-logind.service - User Login Management. 
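The extend-filesystems output above records the online resize of /dev/vda9 from 553472 to 1864699 blocks; at the reported 4k block size that is growth from about 2.1 GiB to about 7.1 GiB:

```python
BLOCK = 4096  # "The filesystem on /dev/vda9 is now 1864699 (4k) blocks long."

for label, blocks in (("before", 553_472), ("after", 1_864_699)):
    print(f"{label}: {blocks} blocks = {blocks * BLOCK / 2**30:.2f} GiB")
# before: 553472 blocks = 2.11 GiB
# after: 1864699 blocks = 7.11 GiB
```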
Sep 9 00:40:40.311054 bash[1476]: Updated "/home/core/.ssh/authorized_keys" Sep 9 00:40:40.317718 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 00:40:40.319553 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 9 00:40:40.325833 locksmithd[1453]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 00:40:40.370307 containerd[1447]: time="2025-09-09T00:40:40.370228400Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 9 00:40:40.396598 containerd[1447]: time="2025-09-09T00:40:40.396407120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:40:40.402827 containerd[1447]: time="2025-09-09T00:40:40.402793200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:40:40.404001 containerd[1447]: time="2025-09-09T00:40:40.402894640Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 9 00:40:40.404001 containerd[1447]: time="2025-09-09T00:40:40.402916440Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 9 00:40:40.404001 containerd[1447]: time="2025-09-09T00:40:40.403058920Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 9 00:40:40.404001 containerd[1447]: time="2025-09-09T00:40:40.403075720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 9 00:40:40.404001 containerd[1447]: time="2025-09-09T00:40:40.403128320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:40:40.404001 containerd[1447]: time="2025-09-09T00:40:40.403141400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:40:40.404001 containerd[1447]: time="2025-09-09T00:40:40.403297160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:40:40.404001 containerd[1447]: time="2025-09-09T00:40:40.403312440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 9 00:40:40.404001 containerd[1447]: time="2025-09-09T00:40:40.403324680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:40:40.404001 containerd[1447]: time="2025-09-09T00:40:40.403335640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 9 00:40:40.404001 containerd[1447]: time="2025-09-09T00:40:40.403407360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Sep 9 00:40:40.404001 containerd[1447]: time="2025-09-09T00:40:40.403591400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:40:40.404263 containerd[1447]: time="2025-09-09T00:40:40.403715200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:40:40.404263 containerd[1447]: time="2025-09-09T00:40:40.403740520Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 9 00:40:40.404263 containerd[1447]: time="2025-09-09T00:40:40.403828040Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 9 00:40:40.404263 containerd[1447]: time="2025-09-09T00:40:40.403879120Z" level=info msg="metadata content store policy set" policy=shared Sep 9 00:40:40.407227 containerd[1447]: time="2025-09-09T00:40:40.407203520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 9 00:40:40.407357 containerd[1447]: time="2025-09-09T00:40:40.407341400Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 9 00:40:40.407419 containerd[1447]: time="2025-09-09T00:40:40.407407160Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 9 00:40:40.407470 containerd[1447]: time="2025-09-09T00:40:40.407459680Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 9 00:40:40.407520 containerd[1447]: time="2025-09-09T00:40:40.407507960Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 9 00:40:40.407723 containerd[1447]: time="2025-09-09T00:40:40.407703800Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 9 00:40:40.408020 containerd[1447]: time="2025-09-09T00:40:40.407997360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 9 00:40:40.408197 containerd[1447]: time="2025-09-09T00:40:40.408178440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 9 00:40:40.408260 containerd[1447]: time="2025-09-09T00:40:40.408247760Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 9 00:40:40.408309 containerd[1447]: time="2025-09-09T00:40:40.408297840Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 9 00:40:40.408361 containerd[1447]: time="2025-09-09T00:40:40.408348240Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 9 00:40:40.408426 containerd[1447]: time="2025-09-09T00:40:40.408413760Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 9 00:40:40.408481 containerd[1447]: time="2025-09-09T00:40:40.408468120Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Sep 9 00:40:40.408547 containerd[1447]: time="2025-09-09T00:40:40.408534480Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 9 00:40:40.408600 containerd[1447]: time="2025-09-09T00:40:40.408588800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 9 00:40:40.408656 containerd[1447]: time="2025-09-09T00:40:40.408644360Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 9 00:40:40.408750 containerd[1447]: time="2025-09-09T00:40:40.408735680Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 9 00:40:40.408803 containerd[1447]: time="2025-09-09T00:40:40.408791360Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 9 00:40:40.408873 containerd[1447]: time="2025-09-09T00:40:40.408859640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 9 00:40:40.408944 containerd[1447]: time="2025-09-09T00:40:40.408930800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 9 00:40:40.408995 containerd[1447]: time="2025-09-09T00:40:40.408984120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 9 00:40:40.409055 containerd[1447]: time="2025-09-09T00:40:40.409043520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 9 00:40:40.409107 containerd[1447]: time="2025-09-09T00:40:40.409096040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 9 00:40:40.409177 containerd[1447]: time="2025-09-09T00:40:40.409163880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 9 00:40:40.409225 containerd[1447]: time="2025-09-09T00:40:40.409214560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 9 00:40:40.409276 containerd[1447]: time="2025-09-09T00:40:40.409263120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 9 00:40:40.409344 containerd[1447]: time="2025-09-09T00:40:40.409331080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 9 00:40:40.409403 containerd[1447]: time="2025-09-09T00:40:40.409390280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 9 00:40:40.409455 containerd[1447]: time="2025-09-09T00:40:40.409443720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 9 00:40:40.409507 containerd[1447]: time="2025-09-09T00:40:40.409495360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 9 00:40:40.409573 containerd[1447]: time="2025-09-09T00:40:40.409560280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 9 00:40:40.409642 containerd[1447]: time="2025-09-09T00:40:40.409629200Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Sep 9 00:40:40.409719 containerd[1447]: time="2025-09-09T00:40:40.409706320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 9 00:40:40.409784 containerd[1447]: time="2025-09-09T00:40:40.409771400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 9 00:40:40.409833 containerd[1447]: time="2025-09-09T00:40:40.409822560Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 9 00:40:40.410008 containerd[1447]: time="2025-09-09T00:40:40.409992320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 9 00:40:40.410084 containerd[1447]: time="2025-09-09T00:40:40.410062520Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 9 00:40:40.410134 containerd[1447]: time="2025-09-09T00:40:40.410123320Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 9 00:40:40.410185 containerd[1447]: time="2025-09-09T00:40:40.410172040Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 9 00:40:40.410231 containerd[1447]: time="2025-09-09T00:40:40.410219920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 9 00:40:40.410281 containerd[1447]: time="2025-09-09T00:40:40.410269160Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 9 00:40:40.410329 containerd[1447]: time="2025-09-09T00:40:40.410318320Z" level=info msg="NRI interface is disabled by configuration." Sep 9 00:40:40.410387 containerd[1447]: time="2025-09-09T00:40:40.410375320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 9 00:40:40.411602 containerd[1447]: time="2025-09-09T00:40:40.411500680Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 9 00:40:40.411602 containerd[1447]: time="2025-09-09T00:40:40.411589120Z" level=info msg="Connect containerd service" Sep 9 00:40:40.411761 containerd[1447]: time="2025-09-09T00:40:40.411632960Z" level=info msg="using legacy CRI server" Sep 9 00:40:40.411761 containerd[1447]: time="2025-09-09T00:40:40.411642560Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 00:40:40.411761 containerd[1447]: time="2025-09-09T00:40:40.411746160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 9 00:40:40.412474 containerd[1447]: time="2025-09-09T00:40:40.412447720Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:40:40.413027 
containerd[1447]: time="2025-09-09T00:40:40.412689320Z" level=info msg="Start subscribing containerd event" Sep 9 00:40:40.413027 containerd[1447]: time="2025-09-09T00:40:40.412738440Z" level=info msg="Start recovering state" Sep 9 00:40:40.413027 containerd[1447]: time="2025-09-09T00:40:40.412811880Z" level=info msg="Start event monitor" Sep 9 00:40:40.413027 containerd[1447]: time="2025-09-09T00:40:40.412823440Z" level=info msg="Start snapshots syncer" Sep 9 00:40:40.413027 containerd[1447]: time="2025-09-09T00:40:40.412845720Z" level=info msg="Start cni network conf syncer for default" Sep 9 00:40:40.413027 containerd[1447]: time="2025-09-09T00:40:40.412860960Z" level=info msg="Start streaming server" Sep 9 00:40:40.413800 containerd[1447]: time="2025-09-09T00:40:40.413776240Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 00:40:40.416696 containerd[1447]: time="2025-09-09T00:40:40.413860000Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 00:40:40.416696 containerd[1447]: time="2025-09-09T00:40:40.415323400Z" level=info msg="containerd successfully booted in 0.046526s" Sep 9 00:40:40.413988 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 00:40:40.583980 tar[1440]: linux-arm64/LICENSE Sep 9 00:40:40.584173 tar[1440]: linux-arm64/README.md Sep 9 00:40:40.596826 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 00:40:40.817542 sshd_keygen[1439]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 00:40:40.836067 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 00:40:40.843962 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 00:40:40.849102 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 00:40:40.850723 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 00:40:40.853187 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 00:40:40.863904 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 00:40:40.866992 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 00:40:40.868805 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 9 00:40:40.870079 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 00:40:41.852407 systemd-networkd[1385]: eth0: Gained IPv6LL Sep 9 00:40:41.859400 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 00:40:41.861289 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 00:40:41.871966 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 9 00:40:41.874492 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:40:41.876734 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 00:40:41.892886 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 00:40:41.893100 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 00:40:41.894982 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 00:40:41.899601 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 00:40:42.435387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:40:42.437124 systemd[1]: Reached target multi-user.target - Multi-User System. 
Sep 9 00:40:42.439770 (kubelet)[1528]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:40:42.441743 systemd[1]: Startup finished in 541ms (kernel) + 4.933s (initrd) + 3.851s (userspace) = 9.326s. Sep 9 00:40:42.797400 kubelet[1528]: E0909 00:40:42.797281 1528 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:40:42.799602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:40:42.799787 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:40:46.641454 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 00:40:46.642546 systemd[1]: Started sshd@0-10.0.0.154:22-10.0.0.1:55348.service - OpenSSH per-connection server daemon (10.0.0.1:55348). Sep 9 00:40:46.687623 sshd[1541]: Accepted publickey for core from 10.0.0.1 port 55348 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:40:46.689380 sshd[1541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:40:46.697620 systemd-logind[1423]: New session 1 of user core. Sep 9 00:40:46.698607 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 00:40:46.710988 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 00:40:46.719714 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 00:40:46.721871 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 00:40:46.727990 (systemd)[1545]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:40:46.799944 systemd[1545]: Queued start job for default target default.target. Sep 9 00:40:46.815577 systemd[1545]: Created slice app.slice - User Application Slice. Sep 9 00:40:46.815607 systemd[1545]: Reached target paths.target - Paths. Sep 9 00:40:46.815618 systemd[1545]: Reached target timers.target - Timers. Sep 9 00:40:46.817462 systemd[1545]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 00:40:46.834040 systemd[1545]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 00:40:46.834142 systemd[1545]: Reached target sockets.target - Sockets. Sep 9 00:40:46.834155 systemd[1545]: Reached target basic.target - Basic System. Sep 9 00:40:46.834188 systemd[1545]: Reached target default.target - Main User Target. Sep 9 00:40:46.834213 systemd[1545]: Startup finished in 101ms. Sep 9 00:40:46.834419 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 00:40:46.836013 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 00:40:46.902941 systemd[1]: Started sshd@1-10.0.0.154:22-10.0.0.1:55362.service - OpenSSH per-connection server daemon (10.0.0.1:55362). Sep 9 00:40:46.952531 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 55362 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:40:46.954155 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:40:46.961853 systemd-logind[1423]: New session 2 of user core. Sep 9 00:40:46.972082 systemd[1]: Started session-2.scope - Session 2 of User core. 
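The kubelet exit above (status=1/FAILURE) is the normal pre-bootstrap state: /var/lib/kubelet/config.yaml is generated by kubeadm init or kubeadm join and simply does not exist yet, so kubelet fails fast and systemd re-launches it later. For orientation only, a hand-written KubeletConfiguration of the shape kubeadm would produce might start like this; the values are assumptions chosen to match settings this kubelet logs further down, not the contents of the real generated file:

# Hypothetical minimal /var/lib/kubelet/config.yaml (kubeadm writes the real one)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# systemd cgroup driver, matching the SystemdCgroup:true runc option in the containerd config above
cgroupDriver: systemd
# where kubelet looks for control-plane static pod manifests
staticPodPath: /etc/kubernetes/manifests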
Sep 9 00:40:47.025852 sshd[1556]: pam_unix(sshd:session): session closed for user core Sep 9 00:40:47.036093 systemd[1]: sshd@1-10.0.0.154:22-10.0.0.1:55362.service: Deactivated successfully. Sep 9 00:40:47.037588 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 00:40:47.039748 systemd-logind[1423]: Session 2 logged out. Waiting for processes to exit. Sep 9 00:40:47.040128 systemd[1]: Started sshd@2-10.0.0.154:22-10.0.0.1:55376.service - OpenSSH per-connection server daemon (10.0.0.1:55376). Sep 9 00:40:47.041534 systemd-logind[1423]: Removed session 2. Sep 9 00:40:47.077211 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 55376 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:40:47.078412 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:40:47.081976 systemd-logind[1423]: New session 3 of user core. Sep 9 00:40:47.098556 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 00:40:47.147523 sshd[1563]: pam_unix(sshd:session): session closed for user core Sep 9 00:40:47.158128 systemd[1]: sshd@2-10.0.0.154:22-10.0.0.1:55376.service: Deactivated successfully. Sep 9 00:40:47.160256 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 00:40:47.162079 systemd-logind[1423]: Session 3 logged out. Waiting for processes to exit. Sep 9 00:40:47.163284 systemd[1]: Started sshd@3-10.0.0.154:22-10.0.0.1:55380.service - OpenSSH per-connection server daemon (10.0.0.1:55380). Sep 9 00:40:47.167491 systemd-logind[1423]: Removed session 3. Sep 9 00:40:47.205034 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 55380 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:40:47.206286 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:40:47.210433 systemd-logind[1423]: New session 4 of user core. Sep 9 00:40:47.228869 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 00:40:47.281018 sshd[1570]: pam_unix(sshd:session): session closed for user core Sep 9 00:40:47.301049 systemd[1]: sshd@3-10.0.0.154:22-10.0.0.1:55380.service: Deactivated successfully. Sep 9 00:40:47.302346 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 00:40:47.304446 systemd-logind[1423]: Session 4 logged out. Waiting for processes to exit. Sep 9 00:40:47.305504 systemd[1]: Started sshd@4-10.0.0.154:22-10.0.0.1:55386.service - OpenSSH per-connection server daemon (10.0.0.1:55386). Sep 9 00:40:47.306493 systemd-logind[1423]: Removed session 4. Sep 9 00:40:47.346094 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 55386 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:40:47.347285 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:40:47.352154 systemd-logind[1423]: New session 5 of user core. Sep 9 00:40:47.358822 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 00:40:47.415449 sudo[1580]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 00:40:47.415749 sudo[1580]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:40:47.428717 sudo[1580]: pam_unix(sudo:session): session closed for user root Sep 9 00:40:47.430541 sshd[1577]: pam_unix(sshd:session): session closed for user core Sep 9 00:40:47.449223 systemd[1]: sshd@4-10.0.0.154:22-10.0.0.1:55386.service: Deactivated successfully. Sep 9 00:40:47.450665 systemd[1]: session-5.scope: Deactivated successfully. 
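Every accepted login in this run reports the same client key, RSA SHA256:h2hdqj5up/..., so this is a single automation identity opening and closing short sessions rather than several users. To confirm which local key a logged fingerprint belongs to, a check along these lines works (the key path is an assumption):

ssh-keygen -lf ~/.ssh/id_rsa.pub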
Sep 9 00:40:47.454472 systemd-logind[1423]: Session 5 logged out. Waiting for processes to exit. Sep 9 00:40:47.463366 systemd[1]: Started sshd@5-10.0.0.154:22-10.0.0.1:55396.service - OpenSSH per-connection server daemon (10.0.0.1:55396). Sep 9 00:40:47.464861 systemd-logind[1423]: Removed session 5. Sep 9 00:40:47.503419 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 55396 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:40:47.505656 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:40:47.511508 systemd-logind[1423]: New session 6 of user core. Sep 9 00:40:47.521899 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 00:40:47.577272 sudo[1589]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 00:40:47.577542 sudo[1589]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:40:47.584136 sudo[1589]: pam_unix(sudo:session): session closed for user root Sep 9 00:40:47.589195 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 9 00:40:47.589798 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:40:47.606980 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 9 00:40:47.608513 auditctl[1592]: No rules Sep 9 00:40:47.608977 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 00:40:47.609239 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 9 00:40:47.615749 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 9 00:40:47.641165 augenrules[1610]: No rules Sep 9 00:40:47.642417 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 9 00:40:47.645359 sudo[1588]: pam_unix(sudo:session): session closed for user root Sep 9 00:40:47.647786 sshd[1585]: pam_unix(sshd:session): session closed for user core Sep 9 00:40:47.661579 systemd[1]: sshd@5-10.0.0.154:22-10.0.0.1:55396.service: Deactivated successfully. Sep 9 00:40:47.662935 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 00:40:47.664832 systemd-logind[1423]: Session 6 logged out. Waiting for processes to exit. Sep 9 00:40:47.667849 systemd[1]: Started sshd@6-10.0.0.154:22-10.0.0.1:55412.service - OpenSSH per-connection server daemon (10.0.0.1:55412). Sep 9 00:40:47.669256 systemd-logind[1423]: Removed session 6. Sep 9 00:40:47.719772 sshd[1618]: Accepted publickey for core from 10.0.0.1 port 55412 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:40:47.721987 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:40:47.728965 systemd-logind[1423]: New session 7 of user core. Sep 9 00:40:47.735900 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 00:40:47.788112 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 00:40:47.788388 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:40:48.074588 systemd[1]: Starting docker.service - Docker Application Container Engine... 
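The audit exchange above is the standard augenrules flow: the two shipped rule files under /etc/audit/rules.d/ were removed via sudo, audit-rules was restarted, and both auditctl and augenrules now correctly report "No rules" because the drop-in directory is empty. Restoring auditing is just adding a file back and reloading; a hypothetical example (file name, watched paths, and keys are all illustrative):

# /etc/audit/rules.d/10-example.rules (hypothetical)
# watch writes and attribute changes under /etc/kubernetes
-w /etc/kubernetes/ -p wa -k kube-config
# watch the kubelet config file
-w /var/lib/kubelet/config.yaml -p wa -k kubelet-config

followed by augenrules --load to compile and load the merged rule set.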
Sep 9 00:40:48.074975 (dockerd)[1639]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 00:40:48.306919 dockerd[1639]: time="2025-09-09T00:40:48.306852433Z" level=info msg="Starting up" Sep 9 00:40:48.785582 dockerd[1639]: time="2025-09-09T00:40:48.785519060Z" level=info msg="Loading containers: start." Sep 9 00:40:48.884926 kernel: Initializing XFRM netlink socket Sep 9 00:40:48.948303 systemd-networkd[1385]: docker0: Link UP Sep 9 00:40:48.969079 dockerd[1639]: time="2025-09-09T00:40:48.969020455Z" level=info msg="Loading containers: done." Sep 9 00:40:48.982153 dockerd[1639]: time="2025-09-09T00:40:48.982099073Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 00:40:48.982291 dockerd[1639]: time="2025-09-09T00:40:48.982203613Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 9 00:40:48.982317 dockerd[1639]: time="2025-09-09T00:40:48.982305382Z" level=info msg="Daemon has completed initialization" Sep 9 00:40:49.017327 dockerd[1639]: time="2025-09-09T00:40:49.017152617Z" level=info msg="API listen on /run/docker.sock" Sep 9 00:40:49.017418 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 00:40:49.596366 containerd[1447]: time="2025-09-09T00:40:49.596318984Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 9 00:40:49.726046 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3954119008-merged.mount: Deactivated successfully. Sep 9 00:40:50.164600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1514952084.mount: Deactivated successfully. 
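dockerd goes from "Starting up" to "Daemon has completed initialization" in well under a second, and the overlay2 warning is informational only: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in the kernel, docker disables overlayfs's native diff and falls back to a slower generic diff path, which affects image build/commit performance but not running containers. A quick liveness check against the advertised socket, assuming curl exists on the host:

curl --unix-socket /run/docker.sock http://localhost/version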
Sep 9 00:40:51.034500 containerd[1447]: time="2025-09-09T00:40:51.034336542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:40:51.034965 containerd[1447]: time="2025-09-09T00:40:51.034926949Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652443" Sep 9 00:40:51.035981 containerd[1447]: time="2025-09-09T00:40:51.035954102Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:40:51.042488 containerd[1447]: time="2025-09-09T00:40:51.041497126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:40:51.042611 containerd[1447]: time="2025-09-09T00:40:51.042585703Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 1.446219321s" Sep 9 00:40:51.042699 containerd[1447]: time="2025-09-09T00:40:51.042664378Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\"" Sep 9 00:40:51.044012 containerd[1447]: time="2025-09-09T00:40:51.043986734Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 9 00:40:52.105073 containerd[1447]: time="2025-09-09T00:40:52.105030395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:40:52.105920 containerd[1447]: time="2025-09-09T00:40:52.105560682Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460311" Sep 9 00:40:52.107703 containerd[1447]: time="2025-09-09T00:40:52.106756036Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:40:52.111171 containerd[1447]: time="2025-09-09T00:40:52.111138370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:40:52.112761 containerd[1447]: time="2025-09-09T00:40:52.112721572Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 1.068699935s" Sep 9 00:40:52.112761 containerd[1447]: time="2025-09-09T00:40:52.112759478Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\"" Sep 9 00:40:52.113246 
containerd[1447]: time="2025-09-09T00:40:52.113202400Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 9 00:40:53.002301 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 00:40:53.008884 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:40:53.123921 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:40:53.127797 (kubelet)[1856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:40:53.280000 containerd[1447]: time="2025-09-09T00:40:53.279271028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:40:53.281308 containerd[1447]: time="2025-09-09T00:40:53.281278148Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125905" Sep 9 00:40:53.282375 containerd[1447]: time="2025-09-09T00:40:53.282347681Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:40:53.286102 containerd[1447]: time="2025-09-09T00:40:53.286074886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:40:53.287719 containerd[1447]: time="2025-09-09T00:40:53.287685883Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 1.174440045s" Sep 9 00:40:53.287789 containerd[1447]: time="2025-09-09T00:40:53.287724505Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\"" Sep 9 00:40:53.288799 containerd[1447]: time="2025-09-09T00:40:53.288773986Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 9 00:40:53.299873 kubelet[1856]: E0909 00:40:53.299834 1856 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:40:53.302735 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:40:53.302972 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:40:54.246364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount142769256.mount: Deactivated successfully. 
Sep 9 00:40:54.639939 containerd[1447]: time="2025-09-09T00:40:54.639889879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:40:54.641062 containerd[1447]: time="2025-09-09T00:40:54.640816084Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916097" Sep 9 00:40:54.641992 containerd[1447]: time="2025-09-09T00:40:54.641723642Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:40:54.643767 containerd[1447]: time="2025-09-09T00:40:54.643720327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:40:54.644473 containerd[1447]: time="2025-09-09T00:40:54.644440504Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 1.355631427s" Sep 9 00:40:54.644527 containerd[1447]: time="2025-09-09T00:40:54.644474949Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\"" Sep 9 00:40:54.644937 containerd[1447]: time="2025-09-09T00:40:54.644881471Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 00:40:55.161435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount999770747.mount: Deactivated successfully. 
Sep 9 00:40:55.759045 containerd[1447]: time="2025-09-09T00:40:55.759004463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:40:55.761012 containerd[1447]: time="2025-09-09T00:40:55.760988892Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Sep 9 00:40:55.762066 containerd[1447]: time="2025-09-09T00:40:55.762030942Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:40:55.765222 containerd[1447]: time="2025-09-09T00:40:55.765196904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:40:55.767007 containerd[1447]: time="2025-09-09T00:40:55.766977422Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.122069889s" Sep 9 00:40:55.767073 containerd[1447]: time="2025-09-09T00:40:55.767009296Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 9 00:40:55.767825 containerd[1447]: time="2025-09-09T00:40:55.767802170Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 00:40:56.208413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3273472157.mount: Deactivated successfully. 
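The images fetched in this sequence (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, plus the pause and etcd pulls that complete just below) are exactly the control-plane set kubeadm expects. On a slow or air-gapped link they can be pre-fetched in one step before init; the version flag here is an assumption matching the tags above:

kubeadm config images pull --kubernetes-version v1.31.12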
Sep 9 00:40:56.213285 containerd[1447]: time="2025-09-09T00:40:56.213247672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:40:56.213664 containerd[1447]: time="2025-09-09T00:40:56.213623327Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 9 00:40:56.214488 containerd[1447]: time="2025-09-09T00:40:56.214463469Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:40:56.216591 containerd[1447]: time="2025-09-09T00:40:56.216546706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:40:56.217434 containerd[1447]: time="2025-09-09T00:40:56.217409938Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 449.578182ms" Sep 9 00:40:56.217498 containerd[1447]: time="2025-09-09T00:40:56.217440284Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 9 00:40:56.217897 containerd[1447]: time="2025-09-09T00:40:56.217877673Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 9 00:40:56.724132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3227993002.mount: Deactivated successfully. Sep 9 00:40:58.179807 containerd[1447]: time="2025-09-09T00:40:58.179757286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:40:58.181401 containerd[1447]: time="2025-09-09T00:40:58.181367997Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163" Sep 9 00:40:58.182748 containerd[1447]: time="2025-09-09T00:40:58.182533619Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:40:58.185485 containerd[1447]: time="2025-09-09T00:40:58.185440641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:40:58.186883 containerd[1447]: time="2025-09-09T00:40:58.186830410Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.968923797s" Sep 9 00:40:58.186926 containerd[1447]: time="2025-09-09T00:40:58.186888521Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 9 00:41:03.502335 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
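"Scheduled restart job, restart counter is at 2" means systemd is respawning the still-unconfigured kubelet according to the unit's Restart=/RestartSec= settings, not that anything has healed; the loop will continue until config.yaml appears. The effective restart policy can be read back without guessing at the unit file:

systemctl show kubelet.service -p Restart -p RestartUSec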
Sep 9 00:41:03.513879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:41:03.611806 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:41:03.615295 (kubelet)[2014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:41:03.653433 kubelet[2014]: E0909 00:41:03.653167 2014 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:41:03.656914 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:41:03.657189 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:41:05.388484 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:41:05.399932 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:41:05.420109 systemd[1]: Reloading requested from client PID 2030 ('systemctl') (unit session-7.scope)... Sep 9 00:41:05.420126 systemd[1]: Reloading... Sep 9 00:41:05.481704 zram_generator::config[2069]: No configuration found. Sep 9 00:41:05.681376 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:41:05.736017 systemd[1]: Reloading finished in 315 ms. Sep 9 00:41:05.780555 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 00:41:05.780620 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 00:41:05.780834 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:41:05.783237 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:41:05.882996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:41:05.887437 (kubelet)[2115]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:41:05.920735 kubelet[2115]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:41:05.920735 kubelet[2115]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 00:41:05.920735 kubelet[2115]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
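All three deprecation warnings above give the same advice: move the value into the file passed via --config. As a sketch, the KubeletConfiguration equivalents would be along these lines, where the endpoint matches the ContainerdEndpoint from the CRI dump earlier and the plugin directory matches the Flexvolume path kubelet reports just below:

# KubeletConfiguration fields replacing two of the deprecated flags (sketch)
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/

--pod-infra-container-image has no config-file counterpart; per its own warning, the sandbox image will instead come from the CRI (containerd's SandboxImage, registry.k8s.io/pause:3.8 in the dump above).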
Sep 9 00:41:05.921078 kubelet[2115]: I0909 00:41:05.920787 2115 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:41:06.856299 kubelet[2115]: I0909 00:41:06.856016 2115 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 00:41:06.856299 kubelet[2115]: I0909 00:41:06.856052 2115 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:41:06.856299 kubelet[2115]: I0909 00:41:06.856294 2115 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 00:41:06.877514 kubelet[2115]: E0909 00:41:06.877462 2115 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.154:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.154:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:41:06.879151 kubelet[2115]: I0909 00:41:06.879054 2115 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:41:06.889858 kubelet[2115]: E0909 00:41:06.889816 2115 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 00:41:06.890111 kubelet[2115]: I0909 00:41:06.890028 2115 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 9 00:41:06.893618 kubelet[2115]: I0909 00:41:06.893592 2115 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:41:06.894420 kubelet[2115]: I0909 00:41:06.894382 2115 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 00:41:06.894558 kubelet[2115]: I0909 00:41:06.894529 2115 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:41:06.894794 kubelet[2115]: I0909 00:41:06.894560 2115 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:41:06.894882 kubelet[2115]: I0909 00:41:06.894852 2115 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:41:06.894882 kubelet[2115]: I0909 00:41:06.894862 2115 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 00:41:06.895104 kubelet[2115]: I0909 00:41:06.895089 2115 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:41:06.898707 kubelet[2115]: I0909 00:41:06.898207 2115 kubelet.go:408] "Attempting to sync node with API server" Sep 9 00:41:06.898707 kubelet[2115]: I0909 00:41:06.898243 2115 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:41:06.898707 kubelet[2115]: I0909 00:41:06.898266 2115 kubelet.go:314] "Adding apiserver pod source" Sep 9 00:41:06.898707 kubelet[2115]: I0909 00:41:06.898342 2115 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:41:06.902197 kubelet[2115]: I0909 00:41:06.902172 2115 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 9 00:41:06.903555 kubelet[2115]: I0909 00:41:06.902901 2115 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 00:41:06.903555 kubelet[2115]: W0909 00:41:06.903074 2115 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 9 00:41:06.903555 kubelet[2115]: W0909 00:41:06.903449 2115 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused Sep 9 00:41:06.903555 kubelet[2115]: W0909 00:41:06.903465 2115 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused Sep 9 00:41:06.903555 kubelet[2115]: E0909 00:41:06.903513 2115 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.154:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:41:06.903555 kubelet[2115]: E0909 00:41:06.903521 2115 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.154:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:41:06.904143 kubelet[2115]: I0909 00:41:06.904121 2115 server.go:1274] "Started kubelet" Sep 9 00:41:06.904612 kubelet[2115]: I0909 00:41:06.904572 2115 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:41:06.907210 kubelet[2115]: I0909 00:41:06.907115 2115 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:41:06.907549 kubelet[2115]: I0909 00:41:06.907402 2115 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:41:06.908859 kubelet[2115]: E0909 00:41:06.907550 2115 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.154:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.154:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863766797bd5138 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:41:06.904101176 +0000 UTC m=+1.013004620,LastTimestamp:2025-09-09 00:41:06.904101176 +0000 UTC m=+1.013004620,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:41:06.908859 kubelet[2115]: I0909 00:41:06.908644 2115 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:41:06.908859 kubelet[2115]: I0909 00:41:06.908854 2115 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:41:06.909236 kubelet[2115]: I0909 00:41:06.909119 2115 server.go:449] "Adding debug handlers to kubelet server" Sep 9 00:41:06.909834 kubelet[2115]: I0909 00:41:06.909751 2115 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 00:41:06.909908 kubelet[2115]: I0909 00:41:06.909857 2115 
desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 00:41:06.909908 kubelet[2115]: E0909 00:41:06.909879 2115 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:41:06.910001 kubelet[2115]: I0909 00:41:06.909921 2115 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:41:06.910694 kubelet[2115]: W0909 00:41:06.910325 2115 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused Sep 9 00:41:06.910694 kubelet[2115]: E0909 00:41:06.910373 2115 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.154:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:41:06.911640 kubelet[2115]: I0909 00:41:06.911606 2115 factory.go:221] Registration of the systemd container factory successfully Sep 9 00:41:06.912123 kubelet[2115]: E0909 00:41:06.912083 2115 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:41:06.912326 kubelet[2115]: I0909 00:41:06.912298 2115 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:41:06.912922 kubelet[2115]: E0909 00:41:06.912843 2115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.154:6443: connect: connection refused" interval="200ms" Sep 9 00:41:06.916211 kubelet[2115]: I0909 00:41:06.916176 2115 factory.go:221] Registration of the containerd container factory successfully Sep 9 00:41:06.929483 kubelet[2115]: I0909 00:41:06.929421 2115 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 00:41:06.929998 kubelet[2115]: I0909 00:41:06.929941 2115 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 00:41:06.929998 kubelet[2115]: I0909 00:41:06.929955 2115 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 00:41:06.929998 kubelet[2115]: I0909 00:41:06.929972 2115 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:41:06.930622 kubelet[2115]: I0909 00:41:06.930572 2115 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 00:41:06.930622 kubelet[2115]: I0909 00:41:06.930598 2115 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 00:41:06.930622 kubelet[2115]: I0909 00:41:06.930615 2115 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 00:41:06.930742 kubelet[2115]: E0909 00:41:06.930665 2115 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:41:07.010692 kubelet[2115]: E0909 00:41:07.010614 2115 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:41:07.031090 kubelet[2115]: E0909 00:41:07.031047 2115 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 00:41:07.033774 kubelet[2115]: I0909 00:41:07.033745 2115 policy_none.go:49] "None policy: Start" Sep 9 00:41:07.034240 kubelet[2115]: W0909 00:41:07.034167 2115 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused Sep 9 00:41:07.034314 kubelet[2115]: E0909 00:41:07.034250 2115 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.154:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:41:07.034656 kubelet[2115]: I0909 00:41:07.034636 2115 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 00:41:07.034714 kubelet[2115]: I0909 00:41:07.034664 2115 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:41:07.042303 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 00:41:07.059658 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 00:41:07.072910 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 9 00:41:07.074019 kubelet[2115]: I0909 00:41:07.073981 2115 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 00:41:07.074729 kubelet[2115]: I0909 00:41:07.074182 2115 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:41:07.074729 kubelet[2115]: I0909 00:41:07.074198 2115 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:41:07.074729 kubelet[2115]: I0909 00:41:07.074464 2115 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:41:07.077918 kubelet[2115]: E0909 00:41:07.077802 2115 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 00:41:07.105120 kubelet[2115]: E0909 00:41:07.105004 2115 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.154:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.154:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863766797bd5138 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:41:06.904101176 +0000 UTC m=+1.013004620,LastTimestamp:2025-09-09 00:41:06.904101176 +0000 UTC m=+1.013004620,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:41:07.114054 kubelet[2115]: E0909 00:41:07.113919 2115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.154:6443: connect: connection refused" interval="400ms" Sep 9 00:41:07.176498 kubelet[2115]: I0909 00:41:07.176450 2115 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:41:07.176930 kubelet[2115]: E0909 00:41:07.176901 2115 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.154:6443/api/v1/nodes\": dial tcp 10.0.0.154:6443: connect: connection refused" node="localhost" Sep 9 00:41:07.243520 systemd[1]: Created slice kubepods-burstable-podfb2f3415a0c68c9bfa7911319ec8b57b.slice - libcontainer container kubepods-burstable-podfb2f3415a0c68c9bfa7911319ec8b57b.slice. Sep 9 00:41:07.273563 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Sep 9 00:41:07.295839 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. 
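The three kubepods-burstable-pod<UID>.slice units created above correspond one-to-one to static pod manifests kubelet just read from /etc/kubernetes/manifests; UID fb2f3415... reappears below as the kube-apiserver pod's volumes are attached. For orientation, an abridged and hypothetical manifest of the shape kubeadm generates: only the image tag and the three volume names are corroborated by this log, while the paths and everything else are assumptions:

# Hypothetical, abridged /etc/kubernetes/manifests/kube-apiserver.yaml
# (kubeadm generates the real file; the command/args block is omitted here)
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver   # kubelet appends the node name, giving kube-apiserver-localhost
  namespace: kube-system
spec:
  hostNetwork: true
  priorityClassName: system-node-critical
  containers:
  - name: kube-apiserver
    image: registry.k8s.io/kube-apiserver:v1.31.12
    volumeMounts:
    - { name: ca-certs, mountPath: /etc/ssl/certs, readOnly: true }
    - { name: k8s-certs, mountPath: /etc/kubernetes/pki, readOnly: true }
    - { name: usr-share-ca-certificates, mountPath: /usr/share/ca-certificates, readOnly: true }
  volumes:
  - { name: ca-certs, hostPath: { path: /etc/ssl/certs, type: DirectoryOrCreate } }
  - { name: k8s-certs, hostPath: { path: /etc/kubernetes/pki, type: DirectoryOrCreate } }
  - { name: usr-share-ca-certificates, hostPath: { path: /usr/share/ca-certificates, type: DirectoryOrCreate } }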
Sep 9 00:41:07.313239 kubelet[2115]: I0909 00:41:07.313179 2115 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:41:07.313239 kubelet[2115]: I0909 00:41:07.313222 2115 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:41:07.313239 kubelet[2115]: I0909 00:41:07.313244 2115 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fb2f3415a0c68c9bfa7911319ec8b57b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fb2f3415a0c68c9bfa7911319ec8b57b\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:41:07.313428 kubelet[2115]: I0909 00:41:07.313259 2115 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:41:07.313428 kubelet[2115]: I0909 00:41:07.313279 2115 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:41:07.313428 kubelet[2115]: I0909 00:41:07.313293 2115 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:41:07.313428 kubelet[2115]: I0909 00:41:07.313306 2115 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fb2f3415a0c68c9bfa7911319ec8b57b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fb2f3415a0c68c9bfa7911319ec8b57b\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:41:07.313428 kubelet[2115]: I0909 00:41:07.313322 2115 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fb2f3415a0c68c9bfa7911319ec8b57b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fb2f3415a0c68c9bfa7911319ec8b57b\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:41:07.313523 kubelet[2115]: I0909 00:41:07.313336 2115 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 9 00:41:07.378774 kubelet[2115]: I0909 00:41:07.378487 2115 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:41:07.378873 kubelet[2115]: E0909 00:41:07.378814 2115 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.154:6443/api/v1/nodes\": dial tcp 10.0.0.154:6443: connect: connection refused" node="localhost" Sep 9 00:41:07.514651 kubelet[2115]: E0909 00:41:07.514595 2115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.154:6443: connect: connection refused" interval="800ms" Sep 9 00:41:07.571281 kubelet[2115]: E0909 00:41:07.570925 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:41:07.573994 containerd[1447]: time="2025-09-09T00:41:07.573695188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fb2f3415a0c68c9bfa7911319ec8b57b,Namespace:kube-system,Attempt:0,}" Sep 9 00:41:07.593034 kubelet[2115]: E0909 00:41:07.593005 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:41:07.593481 containerd[1447]: time="2025-09-09T00:41:07.593431654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 9 00:41:07.598858 kubelet[2115]: E0909 00:41:07.598839 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:41:07.599600 containerd[1447]: time="2025-09-09T00:41:07.599268484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 9 00:41:07.780739 kubelet[2115]: I0909 00:41:07.780706 2115 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:41:07.781049 kubelet[2115]: E0909 00:41:07.781024 2115 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.154:6443/api/v1/nodes\": dial tcp 10.0.0.154:6443: connect: connection refused" node="localhost" Sep 9 00:41:07.860632 kubelet[2115]: W0909 00:41:07.860549 2115 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused Sep 9 00:41:07.860632 kubelet[2115]: E0909 00:41:07.860619 2115 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.154:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:41:08.032188 kubelet[2115]: W0909 00:41:08.031924 2115 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 
10.0.0.154:6443: connect: connection refused Sep 9 00:41:08.032188 kubelet[2115]: E0909 00:41:08.032033 2115 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.154:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:41:08.140370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount356834719.mount: Deactivated successfully. Sep 9 00:41:08.148841 containerd[1447]: time="2025-09-09T00:41:08.148184561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:41:08.149812 containerd[1447]: time="2025-09-09T00:41:08.149783441Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 9 00:41:08.150960 containerd[1447]: time="2025-09-09T00:41:08.150923742Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:41:08.151883 containerd[1447]: time="2025-09-09T00:41:08.151842462Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:41:08.152573 containerd[1447]: time="2025-09-09T00:41:08.152448628Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 9 00:41:08.153354 containerd[1447]: time="2025-09-09T00:41:08.153320861Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:41:08.155636 containerd[1447]: time="2025-09-09T00:41:08.155594937Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 9 00:41:08.156832 containerd[1447]: time="2025-09-09T00:41:08.156791534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:41:08.162359 containerd[1447]: time="2025-09-09T00:41:08.162310017Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 568.798158ms" Sep 9 00:41:08.165253 containerd[1447]: time="2025-09-09T00:41:08.165105014Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 565.446233ms" Sep 9 00:41:08.165775 containerd[1447]: time="2025-09-09T00:41:08.165749699Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 591.974267ms" Sep 9 00:41:08.242064 kubelet[2115]: W0909 00:41:08.241971 2115 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused Sep 9 00:41:08.242064 kubelet[2115]: E0909 00:41:08.242038 2115 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.154:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:41:08.257150 containerd[1447]: time="2025-09-09T00:41:08.257007585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:41:08.257150 containerd[1447]: time="2025-09-09T00:41:08.257059757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:41:08.257150 containerd[1447]: time="2025-09-09T00:41:08.257083661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:41:08.257951 containerd[1447]: time="2025-09-09T00:41:08.257526304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:41:08.257951 containerd[1447]: time="2025-09-09T00:41:08.257560418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:41:08.257951 containerd[1447]: time="2025-09-09T00:41:08.257570749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:41:08.257951 containerd[1447]: time="2025-09-09T00:41:08.257630889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:41:08.258096 containerd[1447]: time="2025-09-09T00:41:08.257157535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:41:08.259796 containerd[1447]: time="2025-09-09T00:41:08.259724784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:41:08.259796 containerd[1447]: time="2025-09-09T00:41:08.259785525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:41:08.259886 containerd[1447]: time="2025-09-09T00:41:08.259803303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:41:08.259920 containerd[1447]: time="2025-09-09T00:41:08.259878378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:41:08.261943 kubelet[2115]: W0909 00:41:08.261905 2115 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused Sep 9 00:41:08.262151 kubelet[2115]: E0909 00:41:08.262132 2115 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.154:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:41:08.277008 systemd[1]: Started cri-containerd-3bb9dbccbace3302383af9f54dea60e886008d200a34675c909619eeb3a3e36f.scope - libcontainer container 3bb9dbccbace3302383af9f54dea60e886008d200a34675c909619eeb3a3e36f. Sep 9 00:41:08.282497 systemd[1]: Started cri-containerd-09443493ba18234ce6d259a8ba925164626e82f4a7cde9bbaddff11aea7556dd.scope - libcontainer container 09443493ba18234ce6d259a8ba925164626e82f4a7cde9bbaddff11aea7556dd. Sep 9 00:41:08.284391 systemd[1]: Started cri-containerd-7aaa795ff48abf0b28c1a5c30b9875071bd5dc8dbe646e6a0ddcaeea8ee9caef.scope - libcontainer container 7aaa795ff48abf0b28c1a5c30b9875071bd5dc8dbe646e6a0ddcaeea8ee9caef. Sep 9 00:41:08.316354 kubelet[2115]: E0909 00:41:08.316037 2115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.154:6443: connect: connection refused" interval="1.6s" Sep 9 00:41:08.325846 containerd[1447]: time="2025-09-09T00:41:08.325805554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"09443493ba18234ce6d259a8ba925164626e82f4a7cde9bbaddff11aea7556dd\"" Sep 9 00:41:08.328859 kubelet[2115]: E0909 00:41:08.327732 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:41:08.332174 containerd[1447]: time="2025-09-09T00:41:08.332140454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fb2f3415a0c68c9bfa7911319ec8b57b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7aaa795ff48abf0b28c1a5c30b9875071bd5dc8dbe646e6a0ddcaeea8ee9caef\"" Sep 9 00:41:08.334202 containerd[1447]: time="2025-09-09T00:41:08.333933688Z" level=info msg="CreateContainer within sandbox \"09443493ba18234ce6d259a8ba925164626e82f4a7cde9bbaddff11aea7556dd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 00:41:08.336145 kubelet[2115]: E0909 00:41:08.336118 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:41:08.338011 containerd[1447]: time="2025-09-09T00:41:08.337980778Z" level=info msg="CreateContainer within sandbox \"7aaa795ff48abf0b28c1a5c30b9875071bd5dc8dbe646e6a0ddcaeea8ee9caef\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 00:41:08.343363 containerd[1447]: time="2025-09-09T00:41:08.343262064Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bb9dbccbace3302383af9f54dea60e886008d200a34675c909619eeb3a3e36f\"" Sep 9 00:41:08.343866 kubelet[2115]: E0909 00:41:08.343840 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:41:08.347925 containerd[1447]: time="2025-09-09T00:41:08.347894259Z" level=info msg="CreateContainer within sandbox \"3bb9dbccbace3302383af9f54dea60e886008d200a34675c909619eeb3a3e36f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 00:41:08.361885 containerd[1447]: time="2025-09-09T00:41:08.361839054Z" level=info msg="CreateContainer within sandbox \"09443493ba18234ce6d259a8ba925164626e82f4a7cde9bbaddff11aea7556dd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5db48260b7cf9967f118bd5ee2c62d4c5a5ebaf6ef9361b026f58673a6f92422\"" Sep 9 00:41:08.362375 containerd[1447]: time="2025-09-09T00:41:08.362348925Z" level=info msg="StartContainer for \"5db48260b7cf9967f118bd5ee2c62d4c5a5ebaf6ef9361b026f58673a6f92422\"" Sep 9 00:41:08.366458 containerd[1447]: time="2025-09-09T00:41:08.366412952Z" level=info msg="CreateContainer within sandbox \"7aaa795ff48abf0b28c1a5c30b9875071bd5dc8dbe646e6a0ddcaeea8ee9caef\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a653bd47924cb1223badab21166bebca1eb378596dbefb7940f9f8d157443673\"" Sep 9 00:41:08.366876 containerd[1447]: time="2025-09-09T00:41:08.366846986Z" level=info msg="StartContainer for \"a653bd47924cb1223badab21166bebca1eb378596dbefb7940f9f8d157443673\"" Sep 9 00:41:08.374878 containerd[1447]: time="2025-09-09T00:41:08.374833859Z" level=info msg="CreateContainer within sandbox \"3bb9dbccbace3302383af9f54dea60e886008d200a34675c909619eeb3a3e36f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2dc47db9c710cf0ca74043128bf63e3436bedc320bc9fb3ddca819188098f5e7\"" Sep 9 00:41:08.376411 containerd[1447]: time="2025-09-09T00:41:08.375313659Z" level=info msg="StartContainer for \"2dc47db9c710cf0ca74043128bf63e3436bedc320bc9fb3ddca819188098f5e7\"" Sep 9 00:41:08.403882 systemd[1]: Started cri-containerd-5db48260b7cf9967f118bd5ee2c62d4c5a5ebaf6ef9361b026f58673a6f92422.scope - libcontainer container 5db48260b7cf9967f118bd5ee2c62d4c5a5ebaf6ef9361b026f58673a6f92422. Sep 9 00:41:08.404996 systemd[1]: Started cri-containerd-a653bd47924cb1223badab21166bebca1eb378596dbefb7940f9f8d157443673.scope - libcontainer container a653bd47924cb1223badab21166bebca1eb378596dbefb7940f9f8d157443673. Sep 9 00:41:08.409208 systemd[1]: Started cri-containerd-2dc47db9c710cf0ca74043128bf63e3436bedc320bc9fb3ddca819188098f5e7.scope - libcontainer container 2dc47db9c710cf0ca74043128bf63e3436bedc320bc9fb3ddca819188098f5e7. 
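Note the lease controller's retry interval growing across the failures above: interval="400ms" at 00:41:07.113, "800ms" at 00:41:07.514, "1.6s" at 00:41:08.316. That is a capped doubling backoff; a small sketch follows, where the start value and the doubling are read off the log and the cap is an assumed illustrative value, not taken from kubelet source:

```go
// backoff.go: reproduce the doubling retry interval reported by
// controller.go ("Failed to ensure lease exists, will retry").
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 400 * time.Millisecond // first interval in the log
	maxInterval := 7 * time.Second     // cap chosen for illustration
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d failed, next retry in %v\n", attempt, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```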
Sep 9 00:41:08.488031 containerd[1447]: time="2025-09-09T00:41:08.487983132Z" level=info msg="StartContainer for \"2dc47db9c710cf0ca74043128bf63e3436bedc320bc9fb3ddca819188098f5e7\" returns successfully" Sep 9 00:41:08.488326 containerd[1447]: time="2025-09-09T00:41:08.488294724Z" level=info msg="StartContainer for \"a653bd47924cb1223badab21166bebca1eb378596dbefb7940f9f8d157443673\" returns successfully" Sep 9 00:41:08.488410 containerd[1447]: time="2025-09-09T00:41:08.488222612Z" level=info msg="StartContainer for \"5db48260b7cf9967f118bd5ee2c62d4c5a5ebaf6ef9361b026f58673a6f92422\" returns successfully" Sep 9 00:41:08.582804 kubelet[2115]: I0909 00:41:08.582702 2115 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:41:08.942906 kubelet[2115]: E0909 00:41:08.941109 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:41:08.944065 kubelet[2115]: E0909 00:41:08.944041 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:41:08.945838 kubelet[2115]: E0909 00:41:08.945814 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:41:09.947494 kubelet[2115]: E0909 00:41:09.947460 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:41:10.082350 kubelet[2115]: E0909 00:41:10.082307 2115 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 00:41:10.156754 kubelet[2115]: I0909 00:41:10.156718 2115 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 9 00:41:10.156754 kubelet[2115]: E0909 00:41:10.156755 2115 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 00:41:10.901425 kubelet[2115]: I0909 00:41:10.901386 2115 apiserver.go:52] "Watching apiserver" Sep 9 00:41:10.910356 kubelet[2115]: I0909 00:41:10.910324 2115 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 00:41:12.334882 systemd[1]: Reloading requested from client PID 2389 ('systemctl') (unit session-7.scope)... Sep 9 00:41:12.334898 systemd[1]: Reloading... Sep 9 00:41:12.393720 zram_generator::config[2428]: No configuration found. Sep 9 00:41:12.480268 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:41:12.546499 systemd[1]: Reloading finished in 211 ms. Sep 9 00:41:12.579204 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:41:12.595555 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 00:41:12.595829 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:41:12.595888 systemd[1]: kubelet.service: Consumed 1.354s CPU time, 130.1M memory peak, 0B memory swap peak. Sep 9 00:41:12.606065 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
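The dns.go:153 warning that repeats through this boot is the glibc resolver limit: only the first three nameserver entries in resolv.conf are honoured (MAXNS is 3), so kubelet keeps the first three ("1.1.1.1 1.0.0.1 8.8.8.8") and notes the omission every time it assembles pod DNS config. A standalone sketch of the same check; an approximation for illustration, not kubelet's actual dns.go:

```go
// resolvcheck.go: flag resolv.conf files with more nameservers than
// glibc will use (the first three win, the rest are ignored).
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded: applying %v, omitting %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	}
}
```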
Sep 9 00:41:12.712265 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:41:12.716005 (kubelet)[2470]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:41:12.751134 kubelet[2470]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:41:12.751134 kubelet[2470]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 00:41:12.751134 kubelet[2470]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:41:12.751475 kubelet[2470]: I0909 00:41:12.751172 2470 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:41:12.759998 kubelet[2470]: I0909 00:41:12.759956 2470 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 00:41:12.759998 kubelet[2470]: I0909 00:41:12.759990 2470 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:41:12.760251 kubelet[2470]: I0909 00:41:12.760232 2470 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 00:41:12.761530 kubelet[2470]: I0909 00:41:12.761508 2470 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 9 00:41:12.763436 kubelet[2470]: I0909 00:41:12.763408 2470 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:41:12.766116 kubelet[2470]: E0909 00:41:12.766092 2470 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 00:41:12.766116 kubelet[2470]: I0909 00:41:12.766116 2470 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 9 00:41:12.769273 kubelet[2470]: I0909 00:41:12.768582 2470 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:41:12.769273 kubelet[2470]: I0909 00:41:12.768733 2470 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 00:41:12.769273 kubelet[2470]: I0909 00:41:12.768944 2470 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:41:12.769273 kubelet[2470]: I0909 00:41:12.768964 2470 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:41:12.769451 kubelet[2470]: I0909 00:41:12.769223 2470 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:41:12.769451 kubelet[2470]: I0909 00:41:12.769240 2470 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 00:41:12.769451 kubelet[2470]: I0909 00:41:12.769307 2470 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:41:12.769451 kubelet[2470]: I0909 00:41:12.769410 2470 kubelet.go:408] "Attempting to sync node with API server" Sep 9 00:41:12.769537 kubelet[2470]: I0909 00:41:12.769483 2470 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:41:12.769537 kubelet[2470]: I0909 00:41:12.769521 2470 kubelet.go:314] "Adding apiserver pod source" Sep 9 00:41:12.769578 kubelet[2470]: I0909 00:41:12.769540 2470 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:41:12.770618 kubelet[2470]: I0909 00:41:12.770596 2470 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 9 00:41:12.771352 kubelet[2470]: I0909 00:41:12.771335 2470 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 00:41:12.772042 kubelet[2470]: I0909 00:41:12.772026 2470 server.go:1274] "Started kubelet" Sep 9 00:41:12.772690 kubelet[2470]: I0909 00:41:12.772651 2470 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:41:12.773193 kubelet[2470]: I0909 00:41:12.772922 
2470 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:41:12.773533 kubelet[2470]: I0909 00:41:12.773517 2470 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:41:12.774562 kubelet[2470]: I0909 00:41:12.774540 2470 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:41:12.776386 kubelet[2470]: I0909 00:41:12.775811 2470 server.go:449] "Adding debug handlers to kubelet server" Sep 9 00:41:12.778339 kubelet[2470]: I0909 00:41:12.778312 2470 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:41:12.780691 kubelet[2470]: E0909 00:41:12.780648 2470 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:41:12.782962 kubelet[2470]: I0909 00:41:12.782214 2470 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 00:41:12.783139 kubelet[2470]: I0909 00:41:12.782228 2470 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 00:41:12.783326 kubelet[2470]: I0909 00:41:12.783311 2470 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:41:12.784163 kubelet[2470]: I0909 00:41:12.784137 2470 factory.go:221] Registration of the systemd container factory successfully Sep 9 00:41:12.784504 kubelet[2470]: I0909 00:41:12.784483 2470 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:41:12.785503 kubelet[2470]: I0909 00:41:12.785360 2470 factory.go:221] Registration of the containerd container factory successfully Sep 9 00:41:12.790773 kubelet[2470]: I0909 00:41:12.790743 2470 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 00:41:12.793575 kubelet[2470]: I0909 00:41:12.793551 2470 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 00:41:12.793873 kubelet[2470]: I0909 00:41:12.793855 2470 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 00:41:12.793958 kubelet[2470]: I0909 00:41:12.793948 2470 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 00:41:12.794057 kubelet[2470]: E0909 00:41:12.794038 2470 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:41:12.835855 kubelet[2470]: I0909 00:41:12.835818 2470 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 00:41:12.835855 kubelet[2470]: I0909 00:41:12.835850 2470 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 00:41:12.836000 kubelet[2470]: I0909 00:41:12.835870 2470 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:41:12.836117 kubelet[2470]: I0909 00:41:12.836058 2470 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 00:41:12.836117 kubelet[2470]: I0909 00:41:12.836090 2470 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 00:41:12.836117 kubelet[2470]: I0909 00:41:12.836110 2470 policy_none.go:49] "None policy: Start" Sep 9 00:41:12.836810 kubelet[2470]: I0909 00:41:12.836796 2470 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 00:41:12.836914 kubelet[2470]: I0909 00:41:12.836904 2470 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:41:12.837639 kubelet[2470]: I0909 00:41:12.837109 2470 state_mem.go:75] "Updated machine memory state" Sep 9 00:41:12.841244 kubelet[2470]: I0909 00:41:12.841202 2470 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 00:41:12.841388 kubelet[2470]: I0909 00:41:12.841363 2470 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:41:12.841428 kubelet[2470]: I0909 00:41:12.841383 2470 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:41:12.842069 kubelet[2470]: I0909 00:41:12.841871 2470 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:41:12.945419 kubelet[2470]: I0909 00:41:12.945325 2470 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:41:12.954434 kubelet[2470]: I0909 00:41:12.954405 2470 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 9 00:41:12.954541 kubelet[2470]: I0909 00:41:12.954495 2470 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 9 00:41:13.084961 kubelet[2470]: I0909 00:41:13.084852 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fb2f3415a0c68c9bfa7911319ec8b57b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fb2f3415a0c68c9bfa7911319ec8b57b\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:41:13.084961 kubelet[2470]: I0909 00:41:13.084892 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fb2f3415a0c68c9bfa7911319ec8b57b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fb2f3415a0c68c9bfa7911319ec8b57b\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:41:13.084961 kubelet[2470]: I0909 00:41:13.084915 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:41:13.084961 kubelet[2470]: I0909 00:41:13.084939 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:41:13.084961 kubelet[2470]: I0909 00:41:13.084956 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fb2f3415a0c68c9bfa7911319ec8b57b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fb2f3415a0c68c9bfa7911319ec8b57b\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:41:13.085188 kubelet[2470]: I0909 00:41:13.084971 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:41:13.085188 kubelet[2470]: I0909 00:41:13.084986 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:41:13.085188 kubelet[2470]: I0909 00:41:13.085000 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:41:13.085188 kubelet[2470]: I0909 00:41:13.085019 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:41:13.204479 kubelet[2470]: E0909 00:41:13.204054 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:41:13.204479 kubelet[2470]: E0909 00:41:13.204235 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:41:13.204479 kubelet[2470]: E0909 00:41:13.204352 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:41:13.324814 sudo[2509]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 00:41:13.325086 sudo[2509]: pam_unix(sudo:session): session opened for user root(uid=0) 
by core(uid=0) Sep 9 00:41:13.753188 sudo[2509]: pam_unix(sudo:session): session closed for user root Sep 9 00:41:13.773593 kubelet[2470]: I0909 00:41:13.773489 2470 apiserver.go:52] "Watching apiserver" Sep 9 00:41:13.783770 kubelet[2470]: I0909 00:41:13.783734 2470 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 00:41:13.817610 kubelet[2470]: E0909 00:41:13.817567 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:41:13.827390 kubelet[2470]: E0909 00:41:13.827205 2470 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:41:13.827527 kubelet[2470]: E0909 00:41:13.827504 2470 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:41:13.827690 kubelet[2470]: E0909 00:41:13.827656 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:41:13.828104 kubelet[2470]: E0909 00:41:13.828074 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:41:13.841667 kubelet[2470]: I0909 00:41:13.838966 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.838924731 podStartE2EDuration="1.838924731s" podCreationTimestamp="2025-09-09 00:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:41:13.837797274 +0000 UTC m=+1.118911147" watchObservedRunningTime="2025-09-09 00:41:13.838924731 +0000 UTC m=+1.120038604" Sep 9 00:41:13.844433 kubelet[2470]: I0909 00:41:13.844390 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.844377524 podStartE2EDuration="1.844377524s" podCreationTimestamp="2025-09-09 00:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:41:13.843593516 +0000 UTC m=+1.124707389" watchObservedRunningTime="2025-09-09 00:41:13.844377524 +0000 UTC m=+1.125491397" Sep 9 00:41:14.819154 kubelet[2470]: E0909 00:41:14.818959 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:41:14.819154 kubelet[2470]: E0909 00:41:14.819003 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:41:14.819154 kubelet[2470]: E0909 00:41:14.819095 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:41:15.483027 sudo[1621]: pam_unix(sudo:session): session closed for user root Sep 9 00:41:15.484864 sshd[1618]: pam_unix(sshd:session): session closed for user core Sep 9 00:41:15.487624 systemd[1]: 
sshd@6-10.0.0.154:22-10.0.0.1:55412.service: Deactivated successfully. Sep 9 00:41:15.489113 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 00:41:15.489271 systemd[1]: session-7.scope: Consumed 9.294s CPU time, 149.9M memory peak, 0B memory swap peak. Sep 9 00:41:15.491772 systemd-logind[1423]: Session 7 logged out. Waiting for processes to exit. Sep 9 00:41:15.492860 systemd-logind[1423]: Removed session 7. Sep 9 00:41:19.644574 kubelet[2470]: I0909 00:41:19.643507 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=7.643472923 podStartE2EDuration="7.643472923s" podCreationTimestamp="2025-09-09 00:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:41:13.851245383 +0000 UTC m=+1.132359256" watchObservedRunningTime="2025-09-09 00:41:19.643472923 +0000 UTC m=+6.924586756" Sep 9 00:41:19.645770 kubelet[2470]: I0909 00:41:19.645750 2470 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 00:41:19.646233 containerd[1447]: time="2025-09-09T00:41:19.646195064Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 00:41:19.647224 kubelet[2470]: I0909 00:41:19.647197 2470 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 00:41:19.654292 systemd[1]: Created slice kubepods-burstable-podbb044a22_e81d_4d65_aca4_4f81a978615e.slice - libcontainer container kubepods-burstable-podbb044a22_e81d_4d65_aca4_4f81a978615e.slice. Sep 9 00:41:19.660653 systemd[1]: Created slice kubepods-besteffort-pod07360a91_0a6a_4b9c_a16d_16ac2e48a997.slice - libcontainer container kubepods-besteffort-pod07360a91_0a6a_4b9c_a16d_16ac2e48a997.slice. 
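The kuberuntime_manager line at 00:41:19.645 pushes the pod CIDR 192.168.0.0/24 down to the runtime, and kubelet_network records the node moving from an empty CIDR to that range; every pod IP allocated on this node must now fall inside it. A small net/netip sketch of that containment rule; the sequential allocation is purely illustrative:

```go
// podcidr.go: show the range implied by "Updating Pod CIDR"
// originalPodCIDR="" newPodCIDR="192.168.0.0/24".
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	prefix := netip.MustParsePrefix("192.168.0.0/24") // CIDR from the log
	ip := prefix.Addr().Next()                        // skip the network address
	for i := 0; i < 3; i++ {
		fmt.Printf("pod IP %v, inside %v: %v\n", ip, prefix, prefix.Contains(ip))
		ip = ip.Next()
	}
}
```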
Sep 9 00:41:19.733028 kubelet[2470]: I0909 00:41:19.732955 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-hostproc\") pod \"cilium-58pzf\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " pod="kube-system/cilium-58pzf" Sep 9 00:41:19.733028 kubelet[2470]: I0909 00:41:19.733007 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwdg6\" (UniqueName: \"kubernetes.io/projected/07360a91-0a6a-4b9c-a16d-16ac2e48a997-kube-api-access-fwdg6\") pod \"kube-proxy-kcjc6\" (UID: \"07360a91-0a6a-4b9c-a16d-16ac2e48a997\") " pod="kube-system/kube-proxy-kcjc6" Sep 9 00:41:19.733028 kubelet[2470]: I0909 00:41:19.733028 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-cilium-cgroup\") pod \"cilium-58pzf\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " pod="kube-system/cilium-58pzf" Sep 9 00:41:19.733028 kubelet[2470]: I0909 00:41:19.733045 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-etc-cni-netd\") pod \"cilium-58pzf\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " pod="kube-system/cilium-58pzf" Sep 9 00:41:19.733249 kubelet[2470]: I0909 00:41:19.733061 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-host-proc-sys-kernel\") pod \"cilium-58pzf\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " pod="kube-system/cilium-58pzf" Sep 9 00:41:19.733249 kubelet[2470]: I0909 00:41:19.733076 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/07360a91-0a6a-4b9c-a16d-16ac2e48a997-xtables-lock\") pod \"kube-proxy-kcjc6\" (UID: \"07360a91-0a6a-4b9c-a16d-16ac2e48a997\") " pod="kube-system/kube-proxy-kcjc6" Sep 9 00:41:19.733249 kubelet[2470]: I0909 00:41:19.733091 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-bpf-maps\") pod \"cilium-58pzf\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " pod="kube-system/cilium-58pzf" Sep 9 00:41:19.733249 kubelet[2470]: I0909 00:41:19.733105 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mhrs\" (UniqueName: \"kubernetes.io/projected/bb044a22-e81d-4d65-aca4-4f81a978615e-kube-api-access-6mhrs\") pod \"cilium-58pzf\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " pod="kube-system/cilium-58pzf" Sep 9 00:41:19.733249 kubelet[2470]: I0909 00:41:19.733120 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-cni-path\") pod \"cilium-58pzf\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " pod="kube-system/cilium-58pzf" Sep 9 00:41:19.733249 kubelet[2470]: I0909 00:41:19.733134 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-cilium-run\") pod \"cilium-58pzf\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " pod="kube-system/cilium-58pzf" Sep 9 00:41:19.733371 kubelet[2470]: I0909 00:41:19.733149 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bb044a22-e81d-4d65-aca4-4f81a978615e-clustermesh-secrets\") pod \"cilium-58pzf\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " pod="kube-system/cilium-58pzf" Sep 9 00:41:19.733371 kubelet[2470]: I0909 00:41:19.733164 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-lib-modules\") pod \"cilium-58pzf\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " pod="kube-system/cilium-58pzf" Sep 9 00:41:19.733371 kubelet[2470]: I0909 00:41:19.733185 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bb044a22-e81d-4d65-aca4-4f81a978615e-hubble-tls\") pod \"cilium-58pzf\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " pod="kube-system/cilium-58pzf" Sep 9 00:41:19.733371 kubelet[2470]: I0909 00:41:19.733201 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07360a91-0a6a-4b9c-a16d-16ac2e48a997-lib-modules\") pod \"kube-proxy-kcjc6\" (UID: \"07360a91-0a6a-4b9c-a16d-16ac2e48a997\") " pod="kube-system/kube-proxy-kcjc6" Sep 9 00:41:19.733371 kubelet[2470]: I0909 00:41:19.733215 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/07360a91-0a6a-4b9c-a16d-16ac2e48a997-kube-proxy\") pod \"kube-proxy-kcjc6\" (UID: \"07360a91-0a6a-4b9c-a16d-16ac2e48a997\") " pod="kube-system/kube-proxy-kcjc6" Sep 9 00:41:19.733371 kubelet[2470]: I0909 00:41:19.733230 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb044a22-e81d-4d65-aca4-4f81a978615e-cilium-config-path\") pod \"cilium-58pzf\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " pod="kube-system/cilium-58pzf" Sep 9 00:41:19.733507 kubelet[2470]: I0909 00:41:19.733245 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-host-proc-sys-net\") pod \"cilium-58pzf\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " pod="kube-system/cilium-58pzf" Sep 9 00:41:19.733507 kubelet[2470]: I0909 00:41:19.733261 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-xtables-lock\") pod \"cilium-58pzf\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " pod="kube-system/cilium-58pzf" Sep 9 00:41:19.842461 kubelet[2470]: E0909 00:41:19.842424 2470 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 9 00:41:19.842461 kubelet[2470]: E0909 00:41:19.842463 2470 projected.go:194] Error preparing data for projected volume kube-api-access-fwdg6 for pod kube-system/kube-proxy-kcjc6: 
configmap "kube-root-ca.crt" not found Sep 9 00:41:19.842591 kubelet[2470]: E0909 00:41:19.842518 2470 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/07360a91-0a6a-4b9c-a16d-16ac2e48a997-kube-api-access-fwdg6 podName:07360a91-0a6a-4b9c-a16d-16ac2e48a997 nodeName:}" failed. No retries permitted until 2025-09-09 00:41:20.342500344 +0000 UTC m=+7.623614217 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fwdg6" (UniqueName: "kubernetes.io/projected/07360a91-0a6a-4b9c-a16d-16ac2e48a997-kube-api-access-fwdg6") pod "kube-proxy-kcjc6" (UID: "07360a91-0a6a-4b9c-a16d-16ac2e48a997") : configmap "kube-root-ca.crt" not found Sep 9 00:41:19.845988 kubelet[2470]: E0909 00:41:19.845969 2470 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 9 00:41:19.846050 kubelet[2470]: E0909 00:41:19.845992 2470 projected.go:194] Error preparing data for projected volume kube-api-access-6mhrs for pod kube-system/cilium-58pzf: configmap "kube-root-ca.crt" not found Sep 9 00:41:19.846050 kubelet[2470]: E0909 00:41:19.846029 2470 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bb044a22-e81d-4d65-aca4-4f81a978615e-kube-api-access-6mhrs podName:bb044a22-e81d-4d65-aca4-4f81a978615e nodeName:}" failed. No retries permitted until 2025-09-09 00:41:20.346017276 +0000 UTC m=+7.627131149 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6mhrs" (UniqueName: "kubernetes.io/projected/bb044a22-e81d-4d65-aca4-4f81a978615e-kube-api-access-6mhrs") pod "cilium-58pzf" (UID: "bb044a22-e81d-4d65-aca4-4f81a978615e") : configmap "kube-root-ca.crt" not found Sep 9 00:41:20.560125 kubelet[2470]: E0909 00:41:20.560055 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:41:20.560746 containerd[1447]: time="2025-09-09T00:41:20.560712228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-58pzf,Uid:bb044a22-e81d-4d65-aca4-4f81a978615e,Namespace:kube-system,Attempt:0,}" Sep 9 00:41:20.569711 kubelet[2470]: E0909 00:41:20.569633 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:41:20.571639 containerd[1447]: time="2025-09-09T00:41:20.570235503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kcjc6,Uid:07360a91-0a6a-4b9c-a16d-16ac2e48a997,Namespace:kube-system,Attempt:0,}" Sep 9 00:41:20.597607 containerd[1447]: time="2025-09-09T00:41:20.597371708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:41:20.597607 containerd[1447]: time="2025-09-09T00:41:20.597425573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:41:20.597607 containerd[1447]: time="2025-09-09T00:41:20.597440780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:41:20.597607 containerd[1447]: time="2025-09-09T00:41:20.597509491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:41:20.603722 containerd[1447]: time="2025-09-09T00:41:20.602409793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:41:20.603722 containerd[1447]: time="2025-09-09T00:41:20.602454374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:41:20.603722 containerd[1447]: time="2025-09-09T00:41:20.602465739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:41:20.603722 containerd[1447]: time="2025-09-09T00:41:20.602540613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:41:20.617842 systemd[1]: Started cri-containerd-60829bfe0a68a86068a24befdad5d18b409bde488e7f79fe7355149523847c2b.scope - libcontainer container 60829bfe0a68a86068a24befdad5d18b409bde488e7f79fe7355149523847c2b.
Sep 9 00:41:20.620528 systemd[1]: Started cri-containerd-feceb0d022f9652e9331fea8d9341eeb674b01239f0ba88bfbfe853e7eb6b18f.scope - libcontainer container feceb0d022f9652e9331fea8d9341eeb674b01239f0ba88bfbfe853e7eb6b18f.
Sep 9 00:41:20.640541 containerd[1447]: time="2025-09-09T00:41:20.640502294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-58pzf,Uid:bb044a22-e81d-4d65-aca4-4f81a978615e,Namespace:kube-system,Attempt:0,} returns sandbox id \"60829bfe0a68a86068a24befdad5d18b409bde488e7f79fe7355149523847c2b\""
Sep 9 00:41:20.641200 kubelet[2470]: E0909 00:41:20.641165 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:20.645993 containerd[1447]: time="2025-09-09T00:41:20.645708017Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 9 00:41:20.648394 containerd[1447]: time="2025-09-09T00:41:20.648351877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kcjc6,Uid:07360a91-0a6a-4b9c-a16d-16ac2e48a997,Namespace:kube-system,Attempt:0,} returns sandbox id \"feceb0d022f9652e9331fea8d9341eeb674b01239f0ba88bfbfe853e7eb6b18f\""
Sep 9 00:41:20.649160 kubelet[2470]: E0909 00:41:20.649143 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:20.651852 containerd[1447]: time="2025-09-09T00:41:20.651736479Z" level=info msg="CreateContainer within sandbox \"feceb0d022f9652e9331fea8d9341eeb674b01239f0ba88bfbfe853e7eb6b18f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 9 00:41:20.664628 containerd[1447]: time="2025-09-09T00:41:20.664589492Z" level=info msg="CreateContainer within sandbox \"feceb0d022f9652e9331fea8d9341eeb674b01239f0ba88bfbfe853e7eb6b18f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d1487b8141d15c7e9852facaf5277b21191913246bec5e4e8339903477d2b601\""
Sep 9 00:41:20.667025 containerd[1447]: time="2025-09-09T00:41:20.666914205Z" level=info msg="StartContainer for \"d1487b8141d15c7e9852facaf5277b21191913246bec5e4e8339903477d2b601\""
Sep 9 00:41:20.694832 systemd[1]: Started cri-containerd-d1487b8141d15c7e9852facaf5277b21191913246bec5e4e8339903477d2b601.scope - libcontainer container d1487b8141d15c7e9852facaf5277b21191913246bec5e4e8339903477d2b601.
Sep 9 00:41:20.718020 containerd[1447]: time="2025-09-09T00:41:20.717904939Z" level=info msg="StartContainer for \"d1487b8141d15c7e9852facaf5277b21191913246bec5e4e8339903477d2b601\" returns successfully"
Sep 9 00:41:20.749486 systemd[1]: Created slice kubepods-besteffort-pod989f0d3f_74d1_49f8_bc4d_6b11648f041d.slice - libcontainer container kubepods-besteffort-pod989f0d3f_74d1_49f8_bc4d_6b11648f041d.slice.
Sep 9 00:41:20.831770 kubelet[2470]: E0909 00:41:20.831554 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:20.842080 kubelet[2470]: I0909 00:41:20.841770 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kcjc6" podStartSLOduration=1.8416569360000001 podStartE2EDuration="1.841656936s" podCreationTimestamp="2025-09-09 00:41:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:41:20.841436114 +0000 UTC m=+8.122549987" watchObservedRunningTime="2025-09-09 00:41:20.841656936 +0000 UTC m=+8.122770929"
Sep 9 00:41:20.842080 kubelet[2470]: I0909 00:41:20.841754 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/989f0d3f-74d1-49f8-bc4d-6b11648f041d-cilium-config-path\") pod \"cilium-operator-5d85765b45-h4vrr\" (UID: \"989f0d3f-74d1-49f8-bc4d-6b11648f041d\") " pod="kube-system/cilium-operator-5d85765b45-h4vrr"
Sep 9 00:41:20.842080 kubelet[2470]: I0909 00:41:20.841872 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khnlr\" (UniqueName: \"kubernetes.io/projected/989f0d3f-74d1-49f8-bc4d-6b11648f041d-kube-api-access-khnlr\") pod \"cilium-operator-5d85765b45-h4vrr\" (UID: \"989f0d3f-74d1-49f8-bc4d-6b11648f041d\") " pod="kube-system/cilium-operator-5d85765b45-h4vrr"
Sep 9 00:41:21.053760 kubelet[2470]: E0909 00:41:21.053727 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:21.054181 containerd[1447]: time="2025-09-09T00:41:21.054133982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-h4vrr,Uid:989f0d3f-74d1-49f8-bc4d-6b11648f041d,Namespace:kube-system,Attempt:0,}"
Sep 9 00:41:21.076923 containerd[1447]: time="2025-09-09T00:41:21.076824921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:41:21.076923 containerd[1447]: time="2025-09-09T00:41:21.076874142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:41:21.076923 containerd[1447]: time="2025-09-09T00:41:21.076884546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:41:21.077096 containerd[1447]: time="2025-09-09T00:41:21.076968423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:41:21.103851 systemd[1]: Started cri-containerd-8a075f018ea46f0b599123dc98dad8113c81eb5bf71304b919e249d78f101dbb.scope - libcontainer container 8a075f018ea46f0b599123dc98dad8113c81eb5bf71304b919e249d78f101dbb.
Sep 9 00:41:21.131126 containerd[1447]: time="2025-09-09T00:41:21.131070673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-h4vrr,Uid:989f0d3f-74d1-49f8-bc4d-6b11648f041d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a075f018ea46f0b599123dc98dad8113c81eb5bf71304b919e249d78f101dbb\""
Sep 9 00:41:21.131950 kubelet[2470]: E0909 00:41:21.131926 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:22.540807 kubelet[2470]: E0909 00:41:22.540753 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:22.837030 kubelet[2470]: E0909 00:41:22.835983 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:23.409253 kubelet[2470]: E0909 00:41:23.409209 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:24.129439 kubelet[2470]: E0909 00:41:24.129394 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:25.120735 update_engine[1425]: I20250909 00:41:25.120620 1425 update_attempter.cc:509] Updating boot flags...
Sep 9 00:41:25.141770 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2848)
Sep 9 00:41:25.170716 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2847)
Sep 9 00:41:25.202744 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2847)
Sep 9 00:41:34.536318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1692730593.mount: Deactivated successfully.
Sep 9 00:41:35.847031 containerd[1447]: time="2025-09-09T00:41:35.846982426Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:41:35.847708 containerd[1447]: time="2025-09-09T00:41:35.847667706Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Sep 9 00:41:35.848228 containerd[1447]: time="2025-09-09T00:41:35.848202680Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:41:35.849960 containerd[1447]: time="2025-09-09T00:41:35.849843048Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 15.204079769s"
Sep 9 00:41:35.849960 containerd[1447]: time="2025-09-09T00:41:35.849877774Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 9 00:41:35.852696 containerd[1447]: time="2025-09-09T00:41:35.852599731Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 9 00:41:35.854914 containerd[1447]: time="2025-09-09T00:41:35.854887452Z" level=info msg="CreateContainer within sandbox \"60829bfe0a68a86068a24befdad5d18b409bde488e7f79fe7355149523847c2b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 9 00:41:35.872418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3942039563.mount: Deactivated successfully.
Sep 9 00:41:35.876451 containerd[1447]: time="2025-09-09T00:41:35.876316690Z" level=info msg="CreateContainer within sandbox \"60829bfe0a68a86068a24befdad5d18b409bde488e7f79fe7355149523847c2b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"865726d17d155da0fda6fa08950c2ecfa9d294fd92ff0e41caf832f46c43aa72\""
Sep 9 00:41:35.877007 containerd[1447]: time="2025-09-09T00:41:35.876969564Z" level=info msg="StartContainer for \"865726d17d155da0fda6fa08950c2ecfa9d294fd92ff0e41caf832f46c43aa72\""
Sep 9 00:41:35.904855 systemd[1]: Started cri-containerd-865726d17d155da0fda6fa08950c2ecfa9d294fd92ff0e41caf832f46c43aa72.scope - libcontainer container 865726d17d155da0fda6fa08950c2ecfa9d294fd92ff0e41caf832f46c43aa72.
Sep 9 00:41:35.928434 containerd[1447]: time="2025-09-09T00:41:35.928371618Z" level=info msg="StartContainer for \"865726d17d155da0fda6fa08950c2ecfa9d294fd92ff0e41caf832f46c43aa72\" returns successfully"
Sep 9 00:41:35.939063 systemd[1]: cri-containerd-865726d17d155da0fda6fa08950c2ecfa9d294fd92ff0e41caf832f46c43aa72.scope: Deactivated successfully.
Sep 9 00:41:36.065321 containerd[1447]: time="2025-09-09T00:41:36.060381751Z" level=info msg="shim disconnected" id=865726d17d155da0fda6fa08950c2ecfa9d294fd92ff0e41caf832f46c43aa72 namespace=k8s.io
Sep 9 00:41:36.065321 containerd[1447]: time="2025-09-09T00:41:36.065313922Z" level=warning msg="cleaning up after shim disconnected" id=865726d17d155da0fda6fa08950c2ecfa9d294fd92ff0e41caf832f46c43aa72 namespace=k8s.io
Sep 9 00:41:36.065321 containerd[1447]: time="2025-09-09T00:41:36.065326964Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 00:41:36.870217 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-865726d17d155da0fda6fa08950c2ecfa9d294fd92ff0e41caf832f46c43aa72-rootfs.mount: Deactivated successfully.
Sep 9 00:41:36.887784 kubelet[2470]: E0909 00:41:36.887755 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:36.891248 containerd[1447]: time="2025-09-09T00:41:36.890919005Z" level=info msg="CreateContainer within sandbox \"60829bfe0a68a86068a24befdad5d18b409bde488e7f79fe7355149523847c2b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 9 00:41:36.904107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3561335350.mount: Deactivated successfully.
Sep 9 00:41:36.904869 containerd[1447]: time="2025-09-09T00:41:36.904838854Z" level=info msg="CreateContainer within sandbox \"60829bfe0a68a86068a24befdad5d18b409bde488e7f79fe7355149523847c2b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4506398341978f1857919a34432284506b8e26975dbb4f7a8704feabf1578bdc\""
Sep 9 00:41:36.906087 containerd[1447]: time="2025-09-09T00:41:36.905997764Z" level=info msg="StartContainer for \"4506398341978f1857919a34432284506b8e26975dbb4f7a8704feabf1578bdc\""
Sep 9 00:41:36.940876 systemd[1]: Started cri-containerd-4506398341978f1857919a34432284506b8e26975dbb4f7a8704feabf1578bdc.scope - libcontainer container 4506398341978f1857919a34432284506b8e26975dbb4f7a8704feabf1578bdc.
Sep 9 00:41:36.969526 containerd[1447]: time="2025-09-09T00:41:36.969488482Z" level=info msg="StartContainer for \"4506398341978f1857919a34432284506b8e26975dbb4f7a8704feabf1578bdc\" returns successfully"
Sep 9 00:41:36.982316 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 00:41:36.982548 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:41:36.982611 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 9 00:41:36.991041 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 00:41:36.991201 systemd[1]: cri-containerd-4506398341978f1857919a34432284506b8e26975dbb4f7a8704feabf1578bdc.scope: Deactivated successfully.
Sep 9 00:41:37.014927 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:41:37.032250 containerd[1447]: time="2025-09-09T00:41:37.032189836Z" level=info msg="shim disconnected" id=4506398341978f1857919a34432284506b8e26975dbb4f7a8704feabf1578bdc namespace=k8s.io
Sep 9 00:41:37.032250 containerd[1447]: time="2025-09-09T00:41:37.032241764Z" level=warning msg="cleaning up after shim disconnected" id=4506398341978f1857919a34432284506b8e26975dbb4f7a8704feabf1578bdc namespace=k8s.io
Sep 9 00:41:37.032250 containerd[1447]: time="2025-09-09T00:41:37.032251806Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 00:41:37.266198 containerd[1447]: time="2025-09-09T00:41:37.266117809Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Sep 9 00:41:37.268526 containerd[1447]: time="2025-09-09T00:41:37.268394080Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.415752502s"
Sep 9 00:41:37.268526 containerd[1447]: time="2025-09-09T00:41:37.268438847Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 9 00:41:37.270657 containerd[1447]: time="2025-09-09T00:41:37.270517647Z" level=info msg="CreateContainer within sandbox \"8a075f018ea46f0b599123dc98dad8113c81eb5bf71304b919e249d78f101dbb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 9 00:41:37.272469 containerd[1447]: time="2025-09-09T00:41:37.272417180Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:41:37.273181 containerd[1447]: time="2025-09-09T00:41:37.273143292Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:41:37.279721 containerd[1447]: time="2025-09-09T00:41:37.279664617Z" level=info msg="CreateContainer within sandbox \"8a075f018ea46f0b599123dc98dad8113c81eb5bf71304b919e249d78f101dbb\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"66db8a92c458f11130502103525c7d06d611a43fd06215fcb3616daa8b5af3f0\""
Sep 9 00:41:37.280427 containerd[1447]: time="2025-09-09T00:41:37.280374006Z" level=info msg="StartContainer for \"66db8a92c458f11130502103525c7d06d611a43fd06215fcb3616daa8b5af3f0\""
Sep 9 00:41:37.304830 systemd[1]: Started cri-containerd-66db8a92c458f11130502103525c7d06d611a43fd06215fcb3616daa8b5af3f0.scope - libcontainer container 66db8a92c458f11130502103525c7d06d611a43fd06215fcb3616daa8b5af3f0.
Sep 9 00:41:37.328013 containerd[1447]: time="2025-09-09T00:41:37.327955059Z" level=info msg="StartContainer for \"66db8a92c458f11130502103525c7d06d611a43fd06215fcb3616daa8b5af3f0\" returns successfully"
Sep 9 00:41:37.871158 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4506398341978f1857919a34432284506b8e26975dbb4f7a8704feabf1578bdc-rootfs.mount: Deactivated successfully.
Sep 9 00:41:37.871252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1077723492.mount: Deactivated successfully.
Sep 9 00:41:37.892323 kubelet[2470]: E0909 00:41:37.892283 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:37.896059 kubelet[2470]: E0909 00:41:37.896021 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:37.901216 containerd[1447]: time="2025-09-09T00:41:37.901175444Z" level=info msg="CreateContainer within sandbox \"60829bfe0a68a86068a24befdad5d18b409bde488e7f79fe7355149523847c2b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 9 00:41:37.918098 containerd[1447]: time="2025-09-09T00:41:37.918046524Z" level=info msg="CreateContainer within sandbox \"60829bfe0a68a86068a24befdad5d18b409bde488e7f79fe7355149523847c2b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"61c762bdb296a9258f404c12e6f2b832d0cad2315914056a12aa612496fedff0\""
Sep 9 00:41:37.918636 containerd[1447]: time="2025-09-09T00:41:37.918612371Z" level=info msg="StartContainer for \"61c762bdb296a9258f404c12e6f2b832d0cad2315914056a12aa612496fedff0\""
Sep 9 00:41:37.921021 kubelet[2470]: I0909 00:41:37.920883 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-h4vrr" podStartSLOduration=1.784287973 podStartE2EDuration="17.920865238s" podCreationTimestamp="2025-09-09 00:41:20 +0000 UTC" firstStartedPulling="2025-09-09 00:41:21.132456913 +0000 UTC m=+8.413570786" lastFinishedPulling="2025-09-09 00:41:37.269034178 +0000 UTC m=+24.550148051" observedRunningTime="2025-09-09 00:41:37.903330936 +0000 UTC m=+25.184444769" watchObservedRunningTime="2025-09-09 00:41:37.920865238 +0000 UTC m=+25.201979151"
Sep 9 00:41:37.954841 systemd[1]: Started cri-containerd-61c762bdb296a9258f404c12e6f2b832d0cad2315914056a12aa612496fedff0.scope - libcontainer container 61c762bdb296a9258f404c12e6f2b832d0cad2315914056a12aa612496fedff0.
Sep 9 00:41:37.989761 systemd[1]: cri-containerd-61c762bdb296a9258f404c12e6f2b832d0cad2315914056a12aa612496fedff0.scope: Deactivated successfully.
Sep 9 00:41:38.047687 containerd[1447]: time="2025-09-09T00:41:38.047591167Z" level=info msg="StartContainer for \"61c762bdb296a9258f404c12e6f2b832d0cad2315914056a12aa612496fedff0\" returns successfully"
Sep 9 00:41:38.068239 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61c762bdb296a9258f404c12e6f2b832d0cad2315914056a12aa612496fedff0-rootfs.mount: Deactivated successfully.
Sep 9 00:41:38.078013 containerd[1447]: time="2025-09-09T00:41:38.077953114Z" level=info msg="shim disconnected" id=61c762bdb296a9258f404c12e6f2b832d0cad2315914056a12aa612496fedff0 namespace=k8s.io
Sep 9 00:41:38.078013 containerd[1447]: time="2025-09-09T00:41:38.078006202Z" level=warning msg="cleaning up after shim disconnected" id=61c762bdb296a9258f404c12e6f2b832d0cad2315914056a12aa612496fedff0 namespace=k8s.io
Sep 9 00:41:38.078013 containerd[1447]: time="2025-09-09T00:41:38.078014763Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 00:41:38.901168 kubelet[2470]: E0909 00:41:38.899694 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:38.901168 kubelet[2470]: E0909 00:41:38.899739 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:38.905109 containerd[1447]: time="2025-09-09T00:41:38.904314034Z" level=info msg="CreateContainer within sandbox \"60829bfe0a68a86068a24befdad5d18b409bde488e7f79fe7355149523847c2b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 9 00:41:38.932861 containerd[1447]: time="2025-09-09T00:41:38.932749582Z" level=info msg="CreateContainer within sandbox \"60829bfe0a68a86068a24befdad5d18b409bde488e7f79fe7355149523847c2b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"098fee6d4dcfe1dd716ff5d52f290dea3f1f39c31414105847c28f99b865741c\""
Sep 9 00:41:38.934182 containerd[1447]: time="2025-09-09T00:41:38.934131822Z" level=info msg="StartContainer for \"098fee6d4dcfe1dd716ff5d52f290dea3f1f39c31414105847c28f99b865741c\""
Sep 9 00:41:38.964871 systemd[1]: Started cri-containerd-098fee6d4dcfe1dd716ff5d52f290dea3f1f39c31414105847c28f99b865741c.scope - libcontainer container 098fee6d4dcfe1dd716ff5d52f290dea3f1f39c31414105847c28f99b865741c.
Sep 9 00:41:38.998569 systemd[1]: cri-containerd-098fee6d4dcfe1dd716ff5d52f290dea3f1f39c31414105847c28f99b865741c.scope: Deactivated successfully.
Sep 9 00:41:39.001499 containerd[1447]: time="2025-09-09T00:41:39.001450062Z" level=info msg="StartContainer for \"098fee6d4dcfe1dd716ff5d52f290dea3f1f39c31414105847c28f99b865741c\" returns successfully"
Sep 9 00:41:39.019182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-098fee6d4dcfe1dd716ff5d52f290dea3f1f39c31414105847c28f99b865741c-rootfs.mount: Deactivated successfully.
Sep 9 00:41:39.023884 containerd[1447]: time="2025-09-09T00:41:39.023831494Z" level=info msg="shim disconnected" id=098fee6d4dcfe1dd716ff5d52f290dea3f1f39c31414105847c28f99b865741c namespace=k8s.io
Sep 9 00:41:39.024177 containerd[1447]: time="2025-09-09T00:41:39.024009838Z" level=warning msg="cleaning up after shim disconnected" id=098fee6d4dcfe1dd716ff5d52f290dea3f1f39c31414105847c28f99b865741c namespace=k8s.io
Sep 9 00:41:39.024177 containerd[1447]: time="2025-09-09T00:41:39.024025320Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 00:41:39.904509 kubelet[2470]: E0909 00:41:39.904450 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:39.909230 containerd[1447]: time="2025-09-09T00:41:39.909040083Z" level=info msg="CreateContainer within sandbox \"60829bfe0a68a86068a24befdad5d18b409bde488e7f79fe7355149523847c2b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 9 00:41:39.953910 containerd[1447]: time="2025-09-09T00:41:39.953823910Z" level=info msg="CreateContainer within sandbox \"60829bfe0a68a86068a24befdad5d18b409bde488e7f79fe7355149523847c2b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8d90908ef0950b5f4d652d3baa7daf4185720749d7d0ea737fd88ee0ba7c15f5\""
Sep 9 00:41:39.954722 containerd[1447]: time="2025-09-09T00:41:39.954688347Z" level=info msg="StartContainer for \"8d90908ef0950b5f4d652d3baa7daf4185720749d7d0ea737fd88ee0ba7c15f5\""
Sep 9 00:41:39.982922 systemd[1]: Started cri-containerd-8d90908ef0950b5f4d652d3baa7daf4185720749d7d0ea737fd88ee0ba7c15f5.scope - libcontainer container 8d90908ef0950b5f4d652d3baa7daf4185720749d7d0ea737fd88ee0ba7c15f5.
Sep 9 00:41:40.005381 containerd[1447]: time="2025-09-09T00:41:40.005332168Z" level=info msg="StartContainer for \"8d90908ef0950b5f4d652d3baa7daf4185720749d7d0ea737fd88ee0ba7c15f5\" returns successfully"
Sep 9 00:41:40.127229 kubelet[2470]: I0909 00:41:40.126702 2470 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 9 00:41:40.177581 kubelet[2470]: I0909 00:41:40.177447 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdthf\" (UniqueName: \"kubernetes.io/projected/da57ef37-9a9a-4a76-a838-11e546451d31-kube-api-access-zdthf\") pod \"coredns-7c65d6cfc9-v6nhg\" (UID: \"da57ef37-9a9a-4a76-a838-11e546451d31\") " pod="kube-system/coredns-7c65d6cfc9-v6nhg"
Sep 9 00:41:40.177581 kubelet[2470]: I0909 00:41:40.177510 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a1f4465-310d-4486-ac63-afdfaf439f71-config-volume\") pod \"coredns-7c65d6cfc9-nndbl\" (UID: \"5a1f4465-310d-4486-ac63-afdfaf439f71\") " pod="kube-system/coredns-7c65d6cfc9-nndbl"
Sep 9 00:41:40.177581 kubelet[2470]: I0909 00:41:40.177532 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2dkt\" (UniqueName: \"kubernetes.io/projected/5a1f4465-310d-4486-ac63-afdfaf439f71-kube-api-access-t2dkt\") pod \"coredns-7c65d6cfc9-nndbl\" (UID: \"5a1f4465-310d-4486-ac63-afdfaf439f71\") " pod="kube-system/coredns-7c65d6cfc9-nndbl"
Sep 9 00:41:40.177581 kubelet[2470]: I0909 00:41:40.177565 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da57ef37-9a9a-4a76-a838-11e546451d31-config-volume\") pod \"coredns-7c65d6cfc9-v6nhg\" (UID: \"da57ef37-9a9a-4a76-a838-11e546451d31\") " pod="kube-system/coredns-7c65d6cfc9-v6nhg"
Sep 9 00:41:40.183033 systemd[1]: Created slice kubepods-burstable-podda57ef37_9a9a_4a76_a838_11e546451d31.slice - libcontainer container kubepods-burstable-podda57ef37_9a9a_4a76_a838_11e546451d31.slice.
Sep 9 00:41:40.188163 systemd[1]: Created slice kubepods-burstable-pod5a1f4465_310d_4486_ac63_afdfaf439f71.slice - libcontainer container kubepods-burstable-pod5a1f4465_310d_4486_ac63_afdfaf439f71.slice.
Sep 9 00:41:40.487495 kubelet[2470]: E0909 00:41:40.487391 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:40.488369 containerd[1447]: time="2025-09-09T00:41:40.488268538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-v6nhg,Uid:da57ef37-9a9a-4a76-a838-11e546451d31,Namespace:kube-system,Attempt:0,}"
Sep 9 00:41:40.491701 kubelet[2470]: E0909 00:41:40.491606 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:40.492614 containerd[1447]: time="2025-09-09T00:41:40.492583766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nndbl,Uid:5a1f4465-310d-4486-ac63-afdfaf439f71,Namespace:kube-system,Attempt:0,}"
Sep 9 00:41:40.912370 kubelet[2470]: E0909 00:41:40.911777 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:40.929976 kubelet[2470]: I0909 00:41:40.929194 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-58pzf" podStartSLOduration=6.72213531 podStartE2EDuration="21.929165329s" podCreationTimestamp="2025-09-09 00:41:19 +0000 UTC" firstStartedPulling="2025-09-09 00:41:20.645285502 +0000 UTC m=+7.926399375" lastFinishedPulling="2025-09-09 00:41:35.852315521 +0000 UTC m=+23.133429394" observedRunningTime="2025-09-09 00:41:40.928922098 +0000 UTC m=+28.210036051" watchObservedRunningTime="2025-09-09 00:41:40.929165329 +0000 UTC m=+28.210279202"
Sep 9 00:41:41.912238 kubelet[2470]: E0909 00:41:41.912189 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:42.034609 systemd-networkd[1385]: cilium_host: Link UP
Sep 9 00:41:42.035465 systemd-networkd[1385]: cilium_net: Link UP
Sep 9 00:41:42.035624 systemd-networkd[1385]: cilium_net: Gained carrier
Sep 9 00:41:42.035777 systemd-networkd[1385]: cilium_host: Gained carrier
Sep 9 00:41:42.035886 systemd-networkd[1385]: cilium_net: Gained IPv6LL
Sep 9 00:41:42.036758 systemd-networkd[1385]: cilium_host: Gained IPv6LL
Sep 9 00:41:42.114050 systemd-networkd[1385]: cilium_vxlan: Link UP
Sep 9 00:41:42.114057 systemd-networkd[1385]: cilium_vxlan: Gained carrier
Sep 9 00:41:42.383728 kernel: NET: Registered PF_ALG protocol family
Sep 9 00:41:42.921293 kubelet[2470]: E0909 00:41:42.920728 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:43.009102 systemd-networkd[1385]: lxc_health: Link UP
Sep 9 00:41:43.016118 systemd-networkd[1385]: lxc_health: Gained carrier
Sep 9 00:41:43.040974 systemd[1]: Started sshd@7-10.0.0.154:22-10.0.0.1:37942.service - OpenSSH per-connection server daemon (10.0.0.1:37942).
Sep 9 00:41:43.085521 sshd[3684]: Accepted publickey for core from 10.0.0.1 port 37942 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:41:43.087149 sshd[3684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:41:43.090842 systemd-logind[1423]: New session 8 of user core.
Sep 9 00:41:43.098842 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 9 00:41:43.226584 sshd[3684]: pam_unix(sshd:session): session closed for user core
Sep 9 00:41:43.229547 systemd-logind[1423]: Session 8 logged out. Waiting for processes to exit.
Sep 9 00:41:43.229746 systemd[1]: sshd@7-10.0.0.154:22-10.0.0.1:37942.service: Deactivated successfully.
Sep 9 00:41:43.231450 systemd[1]: session-8.scope: Deactivated successfully.
Sep 9 00:41:43.233106 systemd-logind[1423]: Removed session 8.
Sep 9 00:41:43.568813 systemd-networkd[1385]: lxc2e8d9150d118: Link UP
Sep 9 00:41:43.576154 systemd-networkd[1385]: lxc37d5a9993a75: Link UP
Sep 9 00:41:43.585755 kernel: eth0: renamed from tmpe9182
Sep 9 00:41:43.590723 kernel: eth0: renamed from tmp7554f
Sep 9 00:41:43.598153 systemd-networkd[1385]: lxc37d5a9993a75: Gained carrier
Sep 9 00:41:43.599930 systemd-networkd[1385]: lxc2e8d9150d118: Gained carrier
Sep 9 00:41:43.740818 systemd-networkd[1385]: cilium_vxlan: Gained IPv6LL
Sep 9 00:41:43.916036 kubelet[2470]: E0909 00:41:43.915938 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:44.380223 systemd-networkd[1385]: lxc_health: Gained IPv6LL
Sep 9 00:41:44.891943 systemd-networkd[1385]: lxc2e8d9150d118: Gained IPv6LL
Sep 9 00:41:44.920670 kubelet[2470]: E0909 00:41:44.920627 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:45.470726 systemd-networkd[1385]: lxc37d5a9993a75: Gained IPv6LL
Sep 9 00:41:45.919742 kubelet[2470]: E0909 00:41:45.919715 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:47.117103 containerd[1447]: time="2025-09-09T00:41:47.116858000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:41:47.117103 containerd[1447]: time="2025-09-09T00:41:47.116925324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:41:47.117103 containerd[1447]: time="2025-09-09T00:41:47.116939325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:41:47.117103 containerd[1447]: time="2025-09-09T00:41:47.117020211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:41:47.124834 containerd[1447]: time="2025-09-09T00:41:47.124636655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:41:47.124834 containerd[1447]: time="2025-09-09T00:41:47.124709821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:41:47.124834 containerd[1447]: time="2025-09-09T00:41:47.124721301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:41:47.125154 containerd[1447]: time="2025-09-09T00:41:47.125095247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:41:47.133416 systemd[1]: run-containerd-runc-k8s.io-e9182e00f41954145171066de55c069123da280421e3761f46cf3cd35d9c5b3b-runc.Efwf02.mount: Deactivated successfully.
Sep 9 00:41:47.150851 systemd[1]: Started cri-containerd-7554f648205c3a903f70ffcc8e059bc2b290d36505d318573dcdade5343247ae.scope - libcontainer container 7554f648205c3a903f70ffcc8e059bc2b290d36505d318573dcdade5343247ae.
Sep 9 00:41:47.151926 systemd[1]: Started cri-containerd-e9182e00f41954145171066de55c069123da280421e3761f46cf3cd35d9c5b3b.scope - libcontainer container e9182e00f41954145171066de55c069123da280421e3761f46cf3cd35d9c5b3b.
Sep 9 00:41:47.162225 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 9 00:41:47.162960 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 9 00:41:47.182026 containerd[1447]: time="2025-09-09T00:41:47.181921560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nndbl,Uid:5a1f4465-310d-4486-ac63-afdfaf439f71,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9182e00f41954145171066de55c069123da280421e3761f46cf3cd35d9c5b3b\""
Sep 9 00:41:47.182253 containerd[1447]: time="2025-09-09T00:41:47.182084491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-v6nhg,Uid:da57ef37-9a9a-4a76-a838-11e546451d31,Namespace:kube-system,Attempt:0,} returns sandbox id \"7554f648205c3a903f70ffcc8e059bc2b290d36505d318573dcdade5343247ae\""
Sep 9 00:41:47.183042 kubelet[2470]: E0909 00:41:47.182858 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:47.183042 kubelet[2470]: E0909 00:41:47.182872 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:47.186321 containerd[1447]: time="2025-09-09T00:41:47.186257259Z" level=info msg="CreateContainer within sandbox \"e9182e00f41954145171066de55c069123da280421e3761f46cf3cd35d9c5b3b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 9 00:41:47.186651 containerd[1447]: time="2025-09-09T00:41:47.186400548Z" level=info msg="CreateContainer within sandbox \"7554f648205c3a903f70ffcc8e059bc2b290d36505d318573dcdade5343247ae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 9 00:41:47.208915 containerd[1447]: time="2025-09-09T00:41:47.208864815Z" level=info msg="CreateContainer within sandbox \"e9182e00f41954145171066de55c069123da280421e3761f46cf3cd35d9c5b3b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6c92e3f4af53040d29e95449a8f71f480e06e88b0f115fd55aa294ee1f1edaec\""
Sep 9 00:41:47.209622 containerd[1447]: time="2025-09-09T00:41:47.209590225Z" level=info msg="CreateContainer within sandbox \"7554f648205c3a903f70ffcc8e059bc2b290d36505d318573dcdade5343247ae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"00080dee5ae23dd3b66f3947d88fddce5c12cf85f128faa7b82ecc2ed849d1c8\""
Sep 9 00:41:47.209987 containerd[1447]: time="2025-09-09T00:41:47.209726915Z" level=info msg="StartContainer for \"6c92e3f4af53040d29e95449a8f71f480e06e88b0f115fd55aa294ee1f1edaec\""
Sep 9 00:41:47.209987 containerd[1447]: time="2025-09-09T00:41:47.209879845Z" level=info msg="StartContainer for \"00080dee5ae23dd3b66f3947d88fddce5c12cf85f128faa7b82ecc2ed849d1c8\""
Sep 9 00:41:47.236821 systemd[1]: Started cri-containerd-6c92e3f4af53040d29e95449a8f71f480e06e88b0f115fd55aa294ee1f1edaec.scope - libcontainer container 6c92e3f4af53040d29e95449a8f71f480e06e88b0f115fd55aa294ee1f1edaec.
Sep 9 00:41:47.239395 systemd[1]: Started cri-containerd-00080dee5ae23dd3b66f3947d88fddce5c12cf85f128faa7b82ecc2ed849d1c8.scope - libcontainer container 00080dee5ae23dd3b66f3947d88fddce5c12cf85f128faa7b82ecc2ed849d1c8.
Sep 9 00:41:47.261688 containerd[1447]: time="2025-09-09T00:41:47.261593846Z" level=info msg="StartContainer for \"6c92e3f4af53040d29e95449a8f71f480e06e88b0f115fd55aa294ee1f1edaec\" returns successfully"
Sep 9 00:41:47.265977 containerd[1447]: time="2025-09-09T00:41:47.265942786Z" level=info msg="StartContainer for \"00080dee5ae23dd3b66f3947d88fddce5c12cf85f128faa7b82ecc2ed849d1c8\" returns successfully"
Sep 9 00:41:47.924814 kubelet[2470]: E0909 00:41:47.924776 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:47.928729 kubelet[2470]: E0909 00:41:47.928667 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:47.938830 kubelet[2470]: I0909 00:41:47.938777 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-nndbl" podStartSLOduration=27.938761635 podStartE2EDuration="27.938761635s" podCreationTimestamp="2025-09-09 00:41:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:41:47.936275464 +0000 UTC m=+35.217389337" watchObservedRunningTime="2025-09-09 00:41:47.938761635 +0000 UTC m=+35.219875468"
Sep 9 00:41:47.947337 kubelet[2470]: I0909 00:41:47.947242 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-v6nhg" podStartSLOduration=27.947225378 podStartE2EDuration="27.947225378s" podCreationTimestamp="2025-09-09 00:41:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:41:47.946483207 +0000 UTC m=+35.227597080" watchObservedRunningTime="2025-09-09 00:41:47.947225378 +0000 UTC m=+35.228339251"
Sep 9 00:41:48.240261 systemd[1]: Started sshd@8-10.0.0.154:22-10.0.0.1:37954.service - OpenSSH per-connection server daemon (10.0.0.1:37954).
Sep 9 00:41:48.279489 sshd[3905]: Accepted publickey for core from 10.0.0.1 port 37954 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:41:48.281191 sshd[3905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:41:48.284966 systemd-logind[1423]: New session 9 of user core.
Sep 9 00:41:48.299865 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 9 00:41:48.418699 sshd[3905]: pam_unix(sshd:session): session closed for user core
Sep 9 00:41:48.422305 systemd[1]: sshd@8-10.0.0.154:22-10.0.0.1:37954.service: Deactivated successfully.
Sep 9 00:41:48.424988 systemd[1]: session-9.scope: Deactivated successfully.
Sep 9 00:41:48.425961 systemd-logind[1423]: Session 9 logged out. Waiting for processes to exit.
Sep 9 00:41:48.427203 systemd-logind[1423]: Removed session 9.
Sep 9 00:41:48.930191 kubelet[2470]: E0909 00:41:48.930145 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:48.930191 kubelet[2470]: E0909 00:41:48.930186 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:49.932258 kubelet[2470]: E0909 00:41:49.931891 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:49.932258 kubelet[2470]: E0909 00:41:49.932001 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:41:53.440246 systemd[1]: Started sshd@9-10.0.0.154:22-10.0.0.1:34714.service - OpenSSH per-connection server daemon (10.0.0.1:34714).
Sep 9 00:41:53.481725 sshd[3924]: Accepted publickey for core from 10.0.0.1 port 34714 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:41:53.483099 sshd[3924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:41:53.489270 systemd-logind[1423]: New session 10 of user core.
Sep 9 00:41:53.498038 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 9 00:41:53.634972 sshd[3924]: pam_unix(sshd:session): session closed for user core
Sep 9 00:41:53.637699 systemd[1]: sshd@9-10.0.0.154:22-10.0.0.1:34714.service: Deactivated successfully.
Sep 9 00:41:53.639246 systemd[1]: session-10.scope: Deactivated successfully.
Sep 9 00:41:53.641141 systemd-logind[1423]: Session 10 logged out. Waiting for processes to exit.
Sep 9 00:41:53.644248 systemd-logind[1423]: Removed session 10.
Sep 9 00:41:58.646245 systemd[1]: Started sshd@10-10.0.0.154:22-10.0.0.1:34716.service - OpenSSH per-connection server daemon (10.0.0.1:34716).
Sep 9 00:41:58.693751 sshd[3939]: Accepted publickey for core from 10.0.0.1 port 34716 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:41:58.693023 sshd[3939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:41:58.703418 systemd-logind[1423]: New session 11 of user core.
Sep 9 00:41:58.711877 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 9 00:41:58.844450 sshd[3939]: pam_unix(sshd:session): session closed for user core
Sep 9 00:41:58.861969 systemd[1]: sshd@10-10.0.0.154:22-10.0.0.1:34716.service: Deactivated successfully.
Sep 9 00:41:58.863354 systemd[1]: session-11.scope: Deactivated successfully.
Sep 9 00:41:58.868557 systemd-logind[1423]: Session 11 logged out. Waiting for processes to exit.
Sep 9 00:41:58.882167 systemd[1]: Started sshd@11-10.0.0.154:22-10.0.0.1:34718.service - OpenSSH per-connection server daemon (10.0.0.1:34718).
Sep 9 00:41:58.883775 systemd-logind[1423]: Removed session 11.
Sep 9 00:41:58.918386 sshd[3955]: Accepted publickey for core from 10.0.0.1 port 34718 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:41:58.919567 sshd[3955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:41:58.924672 systemd-logind[1423]: New session 12 of user core.
Sep 9 00:41:58.935004 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 9 00:41:59.089561 sshd[3955]: pam_unix(sshd:session): session closed for user core
Sep 9 00:41:59.099383 systemd[1]: sshd@11-10.0.0.154:22-10.0.0.1:34718.service: Deactivated successfully.
Sep 9 00:41:59.101372 systemd[1]: session-12.scope: Deactivated successfully.
Sep 9 00:41:59.102233 systemd-logind[1423]: Session 12 logged out. Waiting for processes to exit.
Sep 9 00:41:59.109975 systemd[1]: Started sshd@12-10.0.0.154:22-10.0.0.1:34720.service - OpenSSH per-connection server daemon (10.0.0.1:34720).
Sep 9 00:41:59.113276 systemd-logind[1423]: Removed session 12.
Sep 9 00:41:59.161445 sshd[3967]: Accepted publickey for core from 10.0.0.1 port 34720 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:41:59.162823 sshd[3967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:41:59.166743 systemd-logind[1423]: New session 13 of user core.
Sep 9 00:41:59.172831 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 9 00:41:59.286027 sshd[3967]: pam_unix(sshd:session): session closed for user core
Sep 9 00:41:59.290565 systemd[1]: sshd@12-10.0.0.154:22-10.0.0.1:34720.service: Deactivated successfully.
Sep 9 00:41:59.292273 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 00:41:59.293619 systemd-logind[1423]: Session 13 logged out. Waiting for processes to exit.
Sep 9 00:41:59.294643 systemd-logind[1423]: Removed session 13.
Sep 9 00:42:04.296715 systemd[1]: Started sshd@13-10.0.0.154:22-10.0.0.1:54708.service - OpenSSH per-connection server daemon (10.0.0.1:54708).
Sep 9 00:42:04.346697 sshd[3982]: Accepted publickey for core from 10.0.0.1 port 54708 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:42:04.348055 sshd[3982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:42:04.354302 systemd-logind[1423]: New session 14 of user core.
Sep 9 00:42:04.359996 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 9 00:42:04.497089 sshd[3982]: pam_unix(sshd:session): session closed for user core
Sep 9 00:42:04.501148 systemd[1]: sshd@13-10.0.0.154:22-10.0.0.1:54708.service: Deactivated successfully.
Sep 9 00:42:04.504370 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 00:42:04.505032 systemd-logind[1423]: Session 14 logged out. Waiting for processes to exit.
Sep 9 00:42:04.505971 systemd-logind[1423]: Removed session 14.
Sep 9 00:42:09.507550 systemd[1]: Started sshd@14-10.0.0.154:22-10.0.0.1:54710.service - OpenSSH per-connection server daemon (10.0.0.1:54710).
Sep 9 00:42:09.550961 sshd[3997]: Accepted publickey for core from 10.0.0.1 port 54710 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:42:09.552307 sshd[3997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:42:09.561419 systemd-logind[1423]: New session 15 of user core.
Sep 9 00:42:09.565837 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 9 00:42:09.691721 sshd[3997]: pam_unix(sshd:session): session closed for user core
Sep 9 00:42:09.707197 systemd[1]: sshd@14-10.0.0.154:22-10.0.0.1:54710.service: Deactivated successfully.
Sep 9 00:42:09.708786 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 00:42:09.710907 systemd-logind[1423]: Session 15 logged out. Waiting for processes to exit.
Sep 9 00:42:09.719948 systemd[1]: Started sshd@15-10.0.0.154:22-10.0.0.1:54720.service - OpenSSH per-connection server daemon (10.0.0.1:54720).
Sep 9 00:42:09.720932 systemd-logind[1423]: Removed session 15.
Sep 9 00:42:09.756341 sshd[4011]: Accepted publickey for core from 10.0.0.1 port 54720 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:42:09.758010 sshd[4011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:42:09.762760 systemd-logind[1423]: New session 16 of user core.
Sep 9 00:42:09.770852 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 9 00:42:09.957280 sshd[4011]: pam_unix(sshd:session): session closed for user core
Sep 9 00:42:09.968185 systemd[1]: sshd@15-10.0.0.154:22-10.0.0.1:54720.service: Deactivated successfully.
Sep 9 00:42:09.970766 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 00:42:09.973026 systemd-logind[1423]: Session 16 logged out. Waiting for processes to exit.
Sep 9 00:42:09.974206 systemd[1]: Started sshd@16-10.0.0.154:22-10.0.0.1:37342.service - OpenSSH per-connection server daemon (10.0.0.1:37342).
Sep 9 00:42:09.975339 systemd-logind[1423]: Removed session 16.
Sep 9 00:42:10.020642 sshd[4023]: Accepted publickey for core from 10.0.0.1 port 37342 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:42:10.022942 sshd[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:42:10.026954 systemd-logind[1423]: New session 17 of user core.
Sep 9 00:42:10.037850 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 9 00:42:11.252269 sshd[4023]: pam_unix(sshd:session): session closed for user core
Sep 9 00:42:11.257743 systemd[1]: sshd@16-10.0.0.154:22-10.0.0.1:37342.service: Deactivated successfully.
Sep 9 00:42:11.262213 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 00:42:11.263030 systemd-logind[1423]: Session 17 logged out. Waiting for processes to exit.
Sep 9 00:42:11.268415 systemd[1]: Started sshd@17-10.0.0.154:22-10.0.0.1:37356.service - OpenSSH per-connection server daemon (10.0.0.1:37356).
Sep 9 00:42:11.269533 systemd-logind[1423]: Removed session 17.
Sep 9 00:42:11.313475 sshd[4046]: Accepted publickey for core from 10.0.0.1 port 37356 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:42:11.314908 sshd[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:42:11.319126 systemd-logind[1423]: New session 18 of user core.
Sep 9 00:42:11.324823 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 00:42:11.545658 sshd[4046]: pam_unix(sshd:session): session closed for user core
Sep 9 00:42:11.560186 systemd[1]: sshd@17-10.0.0.154:22-10.0.0.1:37356.service: Deactivated successfully.
Sep 9 00:42:11.561819 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 00:42:11.563663 systemd-logind[1423]: Session 18 logged out. Waiting for processes to exit.
Sep 9 00:42:11.572173 systemd[1]: Started sshd@18-10.0.0.154:22-10.0.0.1:37358.service - OpenSSH per-connection server daemon (10.0.0.1:37358).
Sep 9 00:42:11.572973 systemd-logind[1423]: Removed session 18.
Sep 9 00:42:11.607839 sshd[4058]: Accepted publickey for core from 10.0.0.1 port 37358 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:42:11.609318 sshd[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:42:11.613905 systemd-logind[1423]: New session 19 of user core.
Sep 9 00:42:11.620863 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 00:42:11.731563 sshd[4058]: pam_unix(sshd:session): session closed for user core
Sep 9 00:42:11.735301 systemd[1]: sshd@18-10.0.0.154:22-10.0.0.1:37358.service: Deactivated successfully.
Sep 9 00:42:11.737462 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 00:42:11.739076 systemd-logind[1423]: Session 19 logged out. Waiting for processes to exit.
Sep 9 00:42:11.740736 systemd-logind[1423]: Removed session 19.
Sep 9 00:42:16.747553 systemd[1]: Started sshd@19-10.0.0.154:22-10.0.0.1:37362.service - OpenSSH per-connection server daemon (10.0.0.1:37362).
Sep 9 00:42:16.785346 sshd[4077]: Accepted publickey for core from 10.0.0.1 port 37362 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:42:16.786565 sshd[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:42:16.790030 systemd-logind[1423]: New session 20 of user core.
Sep 9 00:42:16.800869 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 00:42:16.930261 sshd[4077]: pam_unix(sshd:session): session closed for user core
Sep 9 00:42:16.933077 systemd[1]: sshd@19-10.0.0.154:22-10.0.0.1:37362.service: Deactivated successfully.
Sep 9 00:42:16.935345 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 00:42:16.937410 systemd-logind[1423]: Session 20 logged out. Waiting for processes to exit.
Sep 9 00:42:16.940369 systemd-logind[1423]: Removed session 20.
Sep 9 00:42:21.941104 systemd[1]: Started sshd@20-10.0.0.154:22-10.0.0.1:50688.service - OpenSSH per-connection server daemon (10.0.0.1:50688).
Sep 9 00:42:21.978128 sshd[4095]: Accepted publickey for core from 10.0.0.1 port 50688 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:42:21.979215 sshd[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:42:21.982478 systemd-logind[1423]: New session 21 of user core.
Sep 9 00:42:21.998820 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 00:42:22.104368 sshd[4095]: pam_unix(sshd:session): session closed for user core
Sep 9 00:42:22.107504 systemd[1]: sshd@20-10.0.0.154:22-10.0.0.1:50688.service: Deactivated successfully.
Sep 9 00:42:22.109375 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 00:42:22.110014 systemd-logind[1423]: Session 21 logged out. Waiting for processes to exit.
Sep 9 00:42:22.110844 systemd-logind[1423]: Removed session 21.
Sep 9 00:42:24.796636 kubelet[2470]: E0909 00:42:24.795579 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:42:27.115170 systemd[1]: Started sshd@21-10.0.0.154:22-10.0.0.1:50690.service - OpenSSH per-connection server daemon (10.0.0.1:50690).
Sep 9 00:42:27.159455 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 50690 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:42:27.160324 sshd[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:42:27.164500 systemd-logind[1423]: New session 22 of user core.
Sep 9 00:42:27.173824 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 00:42:27.285067 sshd[4109]: pam_unix(sshd:session): session closed for user core
Sep 9 00:42:27.296190 systemd[1]: sshd@21-10.0.0.154:22-10.0.0.1:50690.service: Deactivated successfully.
Sep 9 00:42:27.297663 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 00:42:27.298896 systemd-logind[1423]: Session 22 logged out. Waiting for processes to exit.
Sep 9 00:42:27.305980 systemd[1]: Started sshd@22-10.0.0.154:22-10.0.0.1:50706.service - OpenSSH per-connection server daemon (10.0.0.1:50706).
Sep 9 00:42:27.307089 systemd-logind[1423]: Removed session 22.
Sep 9 00:42:27.341021 sshd[4124]: Accepted publickey for core from 10.0.0.1 port 50706 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:42:27.342152 sshd[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:42:27.345726 systemd-logind[1423]: New session 23 of user core.
Sep 9 00:42:27.357895 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 00:42:28.798735 kubelet[2470]: E0909 00:42:28.796613 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:42:29.943583 containerd[1447]: time="2025-09-09T00:42:29.943540335Z" level=info msg="StopContainer for \"66db8a92c458f11130502103525c7d06d611a43fd06215fcb3616daa8b5af3f0\" with timeout 30 (s)"
Sep 9 00:42:29.944998 containerd[1447]: time="2025-09-09T00:42:29.944817685Z" level=info msg="Stop container \"66db8a92c458f11130502103525c7d06d611a43fd06215fcb3616daa8b5af3f0\" with signal terminated"
Sep 9 00:42:29.957162 systemd[1]: cri-containerd-66db8a92c458f11130502103525c7d06d611a43fd06215fcb3616daa8b5af3f0.scope: Deactivated successfully.
Sep 9 00:42:29.974273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66db8a92c458f11130502103525c7d06d611a43fd06215fcb3616daa8b5af3f0-rootfs.mount: Deactivated successfully.
Sep 9 00:42:29.983963 containerd[1447]: time="2025-09-09T00:42:29.983763985Z" level=info msg="StopContainer for \"8d90908ef0950b5f4d652d3baa7daf4185720749d7d0ea737fd88ee0ba7c15f5\" with timeout 2 (s)"
Sep 9 00:42:29.984135 containerd[1447]: time="2025-09-09T00:42:29.983784465Z" level=info msg="shim disconnected" id=66db8a92c458f11130502103525c7d06d611a43fd06215fcb3616daa8b5af3f0 namespace=k8s.io
Sep 9 00:42:29.984135 containerd[1447]: time="2025-09-09T00:42:29.983996270Z" level=warning msg="cleaning up after shim disconnected" id=66db8a92c458f11130502103525c7d06d611a43fd06215fcb3616daa8b5af3f0 namespace=k8s.io
Sep 9 00:42:29.984135 containerd[1447]: time="2025-09-09T00:42:29.984004350Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 00:42:29.984491 containerd[1447]: time="2025-09-09T00:42:29.984382239Z" level=info msg="Stop container \"8d90908ef0950b5f4d652d3baa7daf4185720749d7d0ea737fd88ee0ba7c15f5\" with signal terminated"
Sep 9 00:42:29.985538 containerd[1447]: time="2025-09-09T00:42:29.985474866Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 00:42:29.992938 systemd-networkd[1385]: lxc_health: Link DOWN
Sep 9 00:42:29.992945 systemd-networkd[1385]: lxc_health: Lost carrier
Sep 9 00:42:30.020129 systemd[1]: cri-containerd-8d90908ef0950b5f4d652d3baa7daf4185720749d7d0ea737fd88ee0ba7c15f5.scope: Deactivated successfully.
Sep 9 00:42:30.020389 systemd[1]: cri-containerd-8d90908ef0950b5f4d652d3baa7daf4185720749d7d0ea737fd88ee0ba7c15f5.scope: Consumed 6.214s CPU time.
Sep 9 00:42:30.027291 containerd[1447]: time="2025-09-09T00:42:30.027254260Z" level=info msg="StopContainer for \"66db8a92c458f11130502103525c7d06d611a43fd06215fcb3616daa8b5af3f0\" returns successfully"
Sep 9 00:42:30.028492 containerd[1447]: time="2025-09-09T00:42:30.028465609Z" level=info msg="StopPodSandbox for \"8a075f018ea46f0b599123dc98dad8113c81eb5bf71304b919e249d78f101dbb\""
Sep 9 00:42:30.028562 containerd[1447]: time="2025-09-09T00:42:30.028502650Z" level=info msg="Container to stop \"66db8a92c458f11130502103525c7d06d611a43fd06215fcb3616daa8b5af3f0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:42:30.030223 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8a075f018ea46f0b599123dc98dad8113c81eb5bf71304b919e249d78f101dbb-shm.mount: Deactivated successfully.
Sep 9 00:42:30.037288 systemd[1]: cri-containerd-8a075f018ea46f0b599123dc98dad8113c81eb5bf71304b919e249d78f101dbb.scope: Deactivated successfully.
Sep 9 00:42:30.041967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d90908ef0950b5f4d652d3baa7daf4185720749d7d0ea737fd88ee0ba7c15f5-rootfs.mount: Deactivated successfully.
Sep 9 00:42:30.055145 containerd[1447]: time="2025-09-09T00:42:30.055083717Z" level=info msg="shim disconnected" id=8d90908ef0950b5f4d652d3baa7daf4185720749d7d0ea737fd88ee0ba7c15f5 namespace=k8s.io
Sep 9 00:42:30.055145 containerd[1447]: time="2025-09-09T00:42:30.055137999Z" level=warning msg="cleaning up after shim disconnected" id=8d90908ef0950b5f4d652d3baa7daf4185720749d7d0ea737fd88ee0ba7c15f5 namespace=k8s.io
Sep 9 00:42:30.055145 containerd[1447]: time="2025-09-09T00:42:30.055146799Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 00:42:30.061714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a075f018ea46f0b599123dc98dad8113c81eb5bf71304b919e249d78f101dbb-rootfs.mount: Deactivated successfully.
Sep 9 00:42:30.064299 containerd[1447]: time="2025-09-09T00:42:30.064113731Z" level=info msg="shim disconnected" id=8a075f018ea46f0b599123dc98dad8113c81eb5bf71304b919e249d78f101dbb namespace=k8s.io
Sep 9 00:42:30.064299 containerd[1447]: time="2025-09-09T00:42:30.064160732Z" level=warning msg="cleaning up after shim disconnected" id=8a075f018ea46f0b599123dc98dad8113c81eb5bf71304b919e249d78f101dbb namespace=k8s.io
Sep 9 00:42:30.064299 containerd[1447]: time="2025-09-09T00:42:30.064169812Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 00:42:30.070447 containerd[1447]: time="2025-09-09T00:42:30.070412959Z" level=info msg="StopContainer for \"8d90908ef0950b5f4d652d3baa7daf4185720749d7d0ea737fd88ee0ba7c15f5\" returns successfully"
Sep 9 00:42:30.070979 containerd[1447]: time="2025-09-09T00:42:30.070954452Z" level=info msg="StopPodSandbox for \"60829bfe0a68a86068a24befdad5d18b409bde488e7f79fe7355149523847c2b\""
Sep 9 00:42:30.071069 containerd[1447]: time="2025-09-09T00:42:30.070989053Z" level=info msg="Container to stop \"4506398341978f1857919a34432284506b8e26975dbb4f7a8704feabf1578bdc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:42:30.071069 containerd[1447]: time="2025-09-09T00:42:30.071001293Z" level=info msg="Container to stop \"8d90908ef0950b5f4d652d3baa7daf4185720749d7d0ea737fd88ee0ba7c15f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:42:30.071069 containerd[1447]: time="2025-09-09T00:42:30.071010293Z" level=info msg="Container to stop \"865726d17d155da0fda6fa08950c2ecfa9d294fd92ff0e41caf832f46c43aa72\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:42:30.071069 containerd[1447]: time="2025-09-09T00:42:30.071026654Z" level=info msg="Container to stop \"61c762bdb296a9258f404c12e6f2b832d0cad2315914056a12aa612496fedff0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:42:30.071069 containerd[1447]: time="2025-09-09T00:42:30.071035814Z" level=info msg="Container to stop \"098fee6d4dcfe1dd716ff5d52f290dea3f1f39c31414105847c28f99b865741c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:42:30.075147 containerd[1447]: time="2025-09-09T00:42:30.075010988Z" level=info msg="TearDown network for sandbox \"8a075f018ea46f0b599123dc98dad8113c81eb5bf71304b919e249d78f101dbb\" successfully"
Sep 9 00:42:30.075147 containerd[1447]: time="2025-09-09T00:42:30.075036308Z" level=info msg="StopPodSandbox for \"8a075f018ea46f0b599123dc98dad8113c81eb5bf71304b919e249d78f101dbb\" returns successfully"
Sep 9 00:42:30.077378 systemd[1]: cri-containerd-60829bfe0a68a86068a24befdad5d18b409bde488e7f79fe7355149523847c2b.scope: Deactivated successfully.
Sep 9 00:42:30.103970 containerd[1447]: time="2025-09-09T00:42:30.103908270Z" level=info msg="shim disconnected" id=60829bfe0a68a86068a24befdad5d18b409bde488e7f79fe7355149523847c2b namespace=k8s.io Sep 9 00:42:30.103970 containerd[1447]: time="2025-09-09T00:42:30.103964551Z" level=warning msg="cleaning up after shim disconnected" id=60829bfe0a68a86068a24befdad5d18b409bde488e7f79fe7355149523847c2b namespace=k8s.io Sep 9 00:42:30.103970 containerd[1447]: time="2025-09-09T00:42:30.103973912Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:42:30.114336 containerd[1447]: time="2025-09-09T00:42:30.114284155Z" level=info msg="TearDown network for sandbox \"60829bfe0a68a86068a24befdad5d18b409bde488e7f79fe7355149523847c2b\" successfully" Sep 9 00:42:30.114336 containerd[1447]: time="2025-09-09T00:42:30.114319756Z" level=info msg="StopPodSandbox for \"60829bfe0a68a86068a24befdad5d18b409bde488e7f79fe7355149523847c2b\" returns successfully" Sep 9 00:42:30.195852 kubelet[2470]: I0909 00:42:30.195720 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-hostproc\") pod \"bb044a22-e81d-4d65-aca4-4f81a978615e\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " Sep 9 00:42:30.195852 kubelet[2470]: I0909 00:42:30.195780 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-lib-modules\") pod \"bb044a22-e81d-4d65-aca4-4f81a978615e\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " Sep 9 00:42:30.195852 kubelet[2470]: I0909 00:42:30.195811 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mhrs\" (UniqueName: \"kubernetes.io/projected/bb044a22-e81d-4d65-aca4-4f81a978615e-kube-api-access-6mhrs\") pod \"bb044a22-e81d-4d65-aca4-4f81a978615e\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " Sep 9 00:42:30.195852 kubelet[2470]: I0909 00:42:30.195834 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bb044a22-e81d-4d65-aca4-4f81a978615e-clustermesh-secrets\") pod \"bb044a22-e81d-4d65-aca4-4f81a978615e\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " Sep 9 00:42:30.195852 kubelet[2470]: I0909 00:42:30.195851 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bb044a22-e81d-4d65-aca4-4f81a978615e-hubble-tls\") pod \"bb044a22-e81d-4d65-aca4-4f81a978615e\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " Sep 9 00:42:30.196300 kubelet[2470]: I0909 00:42:30.195866 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-xtables-lock\") pod \"bb044a22-e81d-4d65-aca4-4f81a978615e\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " Sep 9 00:42:30.196300 kubelet[2470]: I0909 00:42:30.195882 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-cilium-cgroup\") pod \"bb044a22-e81d-4d65-aca4-4f81a978615e\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " Sep 9 00:42:30.196300 kubelet[2470]: I0909 00:42:30.195896 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-etc-cni-netd\") pod \"bb044a22-e81d-4d65-aca4-4f81a978615e\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " Sep 9 00:42:30.196300 kubelet[2470]: I0909 00:42:30.195911 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-cni-path\") pod \"bb044a22-e81d-4d65-aca4-4f81a978615e\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " Sep 9 00:42:30.196300 kubelet[2470]: I0909 00:42:30.195924 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-host-proc-sys-net\") pod \"bb044a22-e81d-4d65-aca4-4f81a978615e\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " Sep 9 00:42:30.196300 kubelet[2470]: I0909 00:42:30.195938 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-bpf-maps\") pod \"bb044a22-e81d-4d65-aca4-4f81a978615e\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " Sep 9 00:42:30.196436 kubelet[2470]: I0909 00:42:30.195956 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khnlr\" (UniqueName: \"kubernetes.io/projected/989f0d3f-74d1-49f8-bc4d-6b11648f041d-kube-api-access-khnlr\") pod \"989f0d3f-74d1-49f8-bc4d-6b11648f041d\" (UID: \"989f0d3f-74d1-49f8-bc4d-6b11648f041d\") " Sep 9 00:42:30.196436 kubelet[2470]: I0909 00:42:30.195972 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-host-proc-sys-kernel\") pod \"bb044a22-e81d-4d65-aca4-4f81a978615e\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " Sep 9 00:42:30.196436 kubelet[2470]: I0909 00:42:30.195988 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb044a22-e81d-4d65-aca4-4f81a978615e-cilium-config-path\") pod \"bb044a22-e81d-4d65-aca4-4f81a978615e\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " Sep 9 00:42:30.196436 kubelet[2470]: I0909 00:42:30.196003 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-cilium-run\") pod \"bb044a22-e81d-4d65-aca4-4f81a978615e\" (UID: \"bb044a22-e81d-4d65-aca4-4f81a978615e\") " Sep 9 00:42:30.196436 kubelet[2470]: I0909 00:42:30.196019 2470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/989f0d3f-74d1-49f8-bc4d-6b11648f041d-cilium-config-path\") pod \"989f0d3f-74d1-49f8-bc4d-6b11648f041d\" (UID: \"989f0d3f-74d1-49f8-bc4d-6b11648f041d\") " Sep 9 00:42:30.200721 kubelet[2470]: I0909 00:42:30.199737 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-hostproc" (OuterVolumeSpecName: "hostproc") pod "bb044a22-e81d-4d65-aca4-4f81a978615e" (UID: "bb044a22-e81d-4d65-aca4-4f81a978615e"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:42:30.200721 kubelet[2470]: I0909 00:42:30.199804 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bb044a22-e81d-4d65-aca4-4f81a978615e" (UID: "bb044a22-e81d-4d65-aca4-4f81a978615e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:42:30.200721 kubelet[2470]: I0909 00:42:30.199821 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bb044a22-e81d-4d65-aca4-4f81a978615e" (UID: "bb044a22-e81d-4d65-aca4-4f81a978615e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:42:30.200721 kubelet[2470]: I0909 00:42:30.199835 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bb044a22-e81d-4d65-aca4-4f81a978615e" (UID: "bb044a22-e81d-4d65-aca4-4f81a978615e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:42:30.208735 kubelet[2470]: I0909 00:42:30.207437 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/989f0d3f-74d1-49f8-bc4d-6b11648f041d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "989f0d3f-74d1-49f8-bc4d-6b11648f041d" (UID: "989f0d3f-74d1-49f8-bc4d-6b11648f041d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 00:42:30.208735 kubelet[2470]: I0909 00:42:30.207509 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-cni-path" (OuterVolumeSpecName: "cni-path") pod "bb044a22-e81d-4d65-aca4-4f81a978615e" (UID: "bb044a22-e81d-4d65-aca4-4f81a978615e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:42:30.208735 kubelet[2470]: I0909 00:42:30.207526 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bb044a22-e81d-4d65-aca4-4f81a978615e" (UID: "bb044a22-e81d-4d65-aca4-4f81a978615e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:42:30.208735 kubelet[2470]: I0909 00:42:30.207541 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bb044a22-e81d-4d65-aca4-4f81a978615e" (UID: "bb044a22-e81d-4d65-aca4-4f81a978615e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:42:30.211697 kubelet[2470]: I0909 00:42:30.209725 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bb044a22-e81d-4d65-aca4-4f81a978615e" (UID: "bb044a22-e81d-4d65-aca4-4f81a978615e"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:42:30.214695 kubelet[2470]: I0909 00:42:30.212821 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bb044a22-e81d-4d65-aca4-4f81a978615e" (UID: "bb044a22-e81d-4d65-aca4-4f81a978615e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:42:30.214695 kubelet[2470]: I0909 00:42:30.213066 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bb044a22-e81d-4d65-aca4-4f81a978615e" (UID: "bb044a22-e81d-4d65-aca4-4f81a978615e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:42:30.214695 kubelet[2470]: I0909 00:42:30.213152 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb044a22-e81d-4d65-aca4-4f81a978615e-kube-api-access-6mhrs" (OuterVolumeSpecName: "kube-api-access-6mhrs") pod "bb044a22-e81d-4d65-aca4-4f81a978615e" (UID: "bb044a22-e81d-4d65-aca4-4f81a978615e"). InnerVolumeSpecName "kube-api-access-6mhrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 00:42:30.214695 kubelet[2470]: I0909 00:42:30.213649 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb044a22-e81d-4d65-aca4-4f81a978615e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bb044a22-e81d-4d65-aca4-4f81a978615e" (UID: "bb044a22-e81d-4d65-aca4-4f81a978615e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 00:42:30.219740 kubelet[2470]: I0909 00:42:30.219697 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb044a22-e81d-4d65-aca4-4f81a978615e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bb044a22-e81d-4d65-aca4-4f81a978615e" (UID: "bb044a22-e81d-4d65-aca4-4f81a978615e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 00:42:30.221837 kubelet[2470]: I0909 00:42:30.221802 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb044a22-e81d-4d65-aca4-4f81a978615e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bb044a22-e81d-4d65-aca4-4f81a978615e" (UID: "bb044a22-e81d-4d65-aca4-4f81a978615e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 9 00:42:30.223068 kubelet[2470]: I0909 00:42:30.223027 2470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/989f0d3f-74d1-49f8-bc4d-6b11648f041d-kube-api-access-khnlr" (OuterVolumeSpecName: "kube-api-access-khnlr") pod "989f0d3f-74d1-49f8-bc4d-6b11648f041d" (UID: "989f0d3f-74d1-49f8-bc4d-6b11648f041d"). InnerVolumeSpecName "kube-api-access-khnlr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 00:42:30.299103 kubelet[2470]: I0909 00:42:30.299069 2470 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb044a22-e81d-4d65-aca4-4f81a978615e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:42:30.299103 kubelet[2470]: I0909 00:42:30.299099 2470 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 9 00:42:30.299103 kubelet[2470]: I0909 00:42:30.299109 2470 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/989f0d3f-74d1-49f8-bc4d-6b11648f041d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:42:30.299247 kubelet[2470]: I0909 00:42:30.299118 2470 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 9 00:42:30.299247 kubelet[2470]: I0909 00:42:30.299127 2470 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 9 00:42:30.299247 kubelet[2470]: I0909 00:42:30.299135 2470 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 9 00:42:30.299247 kubelet[2470]: I0909 00:42:30.299143 2470 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bb044a22-e81d-4d65-aca4-4f81a978615e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 00:42:30.299247 kubelet[2470]: I0909 00:42:30.299150 2470 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bb044a22-e81d-4d65-aca4-4f81a978615e-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 9 00:42:30.299247 kubelet[2470]: I0909 00:42:30.299158 2470 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 9 00:42:30.299247 kubelet[2470]: I0909 00:42:30.299166 2470 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mhrs\" (UniqueName: \"kubernetes.io/projected/bb044a22-e81d-4d65-aca4-4f81a978615e-kube-api-access-6mhrs\") on node \"localhost\" DevicePath \"\"" Sep 9 00:42:30.299247 kubelet[2470]: I0909 00:42:30.299174 2470 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 9 00:42:30.299407 kubelet[2470]: I0909 00:42:30.299181 2470 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 9 00:42:30.299407 kubelet[2470]: I0909 00:42:30.299189 2470 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-host-proc-sys-net\") on node 
\"localhost\" DevicePath \"\"" Sep 9 00:42:30.299407 kubelet[2470]: I0909 00:42:30.299197 2470 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:42:30.299407 kubelet[2470]: I0909 00:42:30.299205 2470 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khnlr\" (UniqueName: \"kubernetes.io/projected/989f0d3f-74d1-49f8-bc4d-6b11648f041d-kube-api-access-khnlr\") on node \"localhost\" DevicePath \"\"" Sep 9 00:42:30.299407 kubelet[2470]: I0909 00:42:30.299214 2470 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bb044a22-e81d-4d65-aca4-4f81a978615e-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 9 00:42:30.802280 systemd[1]: Removed slice kubepods-burstable-podbb044a22_e81d_4d65_aca4_4f81a978615e.slice - libcontainer container kubepods-burstable-podbb044a22_e81d_4d65_aca4_4f81a978615e.slice. Sep 9 00:42:30.802472 systemd[1]: kubepods-burstable-podbb044a22_e81d_4d65_aca4_4f81a978615e.slice: Consumed 6.289s CPU time. Sep 9 00:42:30.804008 systemd[1]: Removed slice kubepods-besteffort-pod989f0d3f_74d1_49f8_bc4d_6b11648f041d.slice - libcontainer container kubepods-besteffort-pod989f0d3f_74d1_49f8_bc4d_6b11648f041d.slice. Sep 9 00:42:30.963287 systemd[1]: var-lib-kubelet-pods-989f0d3f\x2d74d1\x2d49f8\x2dbc4d\x2d6b11648f041d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkhnlr.mount: Deactivated successfully. Sep 9 00:42:30.963392 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60829bfe0a68a86068a24befdad5d18b409bde488e7f79fe7355149523847c2b-rootfs.mount: Deactivated successfully. Sep 9 00:42:30.963446 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-60829bfe0a68a86068a24befdad5d18b409bde488e7f79fe7355149523847c2b-shm.mount: Deactivated successfully. Sep 9 00:42:30.963499 systemd[1]: var-lib-kubelet-pods-bb044a22\x2de81d\x2d4d65\x2daca4\x2d4f81a978615e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6mhrs.mount: Deactivated successfully. Sep 9 00:42:30.963560 systemd[1]: var-lib-kubelet-pods-bb044a22\x2de81d\x2d4d65\x2daca4\x2d4f81a978615e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 00:42:30.963609 systemd[1]: var-lib-kubelet-pods-bb044a22\x2de81d\x2d4d65\x2daca4\x2d4f81a978615e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 9 00:42:31.050435 kubelet[2470]: I0909 00:42:31.050407 2470 scope.go:117] "RemoveContainer" containerID="8d90908ef0950b5f4d652d3baa7daf4185720749d7d0ea737fd88ee0ba7c15f5" Sep 9 00:42:31.052288 containerd[1447]: time="2025-09-09T00:42:31.052237796Z" level=info msg="RemoveContainer for \"8d90908ef0950b5f4d652d3baa7daf4185720749d7d0ea737fd88ee0ba7c15f5\"" Sep 9 00:42:31.055791 containerd[1447]: time="2025-09-09T00:42:31.055716316Z" level=info msg="RemoveContainer for \"8d90908ef0950b5f4d652d3baa7daf4185720749d7d0ea737fd88ee0ba7c15f5\" returns successfully" Sep 9 00:42:31.056413 kubelet[2470]: I0909 00:42:31.056292 2470 scope.go:117] "RemoveContainer" containerID="098fee6d4dcfe1dd716ff5d52f290dea3f1f39c31414105847c28f99b865741c" Sep 9 00:42:31.058522 containerd[1447]: time="2025-09-09T00:42:31.058493020Z" level=info msg="RemoveContainer for \"098fee6d4dcfe1dd716ff5d52f290dea3f1f39c31414105847c28f99b865741c\"" Sep 9 00:42:31.064484 containerd[1447]: time="2025-09-09T00:42:31.064387117Z" level=info msg="RemoveContainer for \"098fee6d4dcfe1dd716ff5d52f290dea3f1f39c31414105847c28f99b865741c\" returns successfully" Sep 9 00:42:31.065137 kubelet[2470]: I0909 00:42:31.064748 2470 scope.go:117] "RemoveContainer" containerID="61c762bdb296a9258f404c12e6f2b832d0cad2315914056a12aa612496fedff0" Sep 9 00:42:31.066141 containerd[1447]: time="2025-09-09T00:42:31.065968793Z" level=info msg="RemoveContainer for \"61c762bdb296a9258f404c12e6f2b832d0cad2315914056a12aa612496fedff0\"" Sep 9 00:42:31.068659 containerd[1447]: time="2025-09-09T00:42:31.068629655Z" level=info msg="RemoveContainer for \"61c762bdb296a9258f404c12e6f2b832d0cad2315914056a12aa612496fedff0\" returns successfully" Sep 9 00:42:31.068916 kubelet[2470]: I0909 00:42:31.068840 2470 scope.go:117] "RemoveContainer" containerID="4506398341978f1857919a34432284506b8e26975dbb4f7a8704feabf1578bdc" Sep 9 00:42:31.070524 containerd[1447]: time="2025-09-09T00:42:31.070499538Z" level=info msg="RemoveContainer for \"4506398341978f1857919a34432284506b8e26975dbb4f7a8704feabf1578bdc\"" Sep 9 00:42:31.082561 containerd[1447]: time="2025-09-09T00:42:31.082512456Z" level=info msg="RemoveContainer for \"4506398341978f1857919a34432284506b8e26975dbb4f7a8704feabf1578bdc\" returns successfully" Sep 9 00:42:31.082830 kubelet[2470]: I0909 00:42:31.082745 2470 scope.go:117] "RemoveContainer" containerID="865726d17d155da0fda6fa08950c2ecfa9d294fd92ff0e41caf832f46c43aa72" Sep 9 00:42:31.083847 containerd[1447]: time="2025-09-09T00:42:31.083815606Z" level=info msg="RemoveContainer for \"865726d17d155da0fda6fa08950c2ecfa9d294fd92ff0e41caf832f46c43aa72\"" Sep 9 00:42:31.103104 containerd[1447]: time="2025-09-09T00:42:31.103037610Z" level=info msg="RemoveContainer for \"865726d17d155da0fda6fa08950c2ecfa9d294fd92ff0e41caf832f46c43aa72\" returns successfully" Sep 9 00:42:31.103360 kubelet[2470]: I0909 00:42:31.103337 2470 scope.go:117] "RemoveContainer" containerID="8d90908ef0950b5f4d652d3baa7daf4185720749d7d0ea737fd88ee0ba7c15f5" Sep 9 00:42:31.103591 containerd[1447]: time="2025-09-09T00:42:31.103551862Z" level=error msg="ContainerStatus for \"8d90908ef0950b5f4d652d3baa7daf4185720749d7d0ea737fd88ee0ba7c15f5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d90908ef0950b5f4d652d3baa7daf4185720749d7d0ea737fd88ee0ba7c15f5\": not found" Sep 9 00:42:31.103743 kubelet[2470]: E0909 00:42:31.103717 2470 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find 
container \"8d90908ef0950b5f4d652d3baa7daf4185720749d7d0ea737fd88ee0ba7c15f5\": not found" containerID="8d90908ef0950b5f4d652d3baa7daf4185720749d7d0ea737fd88ee0ba7c15f5" Sep 9 00:42:31.103833 kubelet[2470]: I0909 00:42:31.103748 2470 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d90908ef0950b5f4d652d3baa7daf4185720749d7d0ea737fd88ee0ba7c15f5"} err="failed to get container status \"8d90908ef0950b5f4d652d3baa7daf4185720749d7d0ea737fd88ee0ba7c15f5\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d90908ef0950b5f4d652d3baa7daf4185720749d7d0ea737fd88ee0ba7c15f5\": not found" Sep 9 00:42:31.103833 kubelet[2470]: I0909 00:42:31.103828 2470 scope.go:117] "RemoveContainer" containerID="098fee6d4dcfe1dd716ff5d52f290dea3f1f39c31414105847c28f99b865741c" Sep 9 00:42:31.113353 containerd[1447]: time="2025-09-09T00:42:31.113292248Z" level=error msg="ContainerStatus for \"098fee6d4dcfe1dd716ff5d52f290dea3f1f39c31414105847c28f99b865741c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"098fee6d4dcfe1dd716ff5d52f290dea3f1f39c31414105847c28f99b865741c\": not found" Sep 9 00:42:31.113555 kubelet[2470]: E0909 00:42:31.113521 2470 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"098fee6d4dcfe1dd716ff5d52f290dea3f1f39c31414105847c28f99b865741c\": not found" containerID="098fee6d4dcfe1dd716ff5d52f290dea3f1f39c31414105847c28f99b865741c" Sep 9 00:42:31.113594 kubelet[2470]: I0909 00:42:31.113555 2470 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"098fee6d4dcfe1dd716ff5d52f290dea3f1f39c31414105847c28f99b865741c"} err="failed to get container status \"098fee6d4dcfe1dd716ff5d52f290dea3f1f39c31414105847c28f99b865741c\": rpc error: code = NotFound desc = an error occurred when try to find container \"098fee6d4dcfe1dd716ff5d52f290dea3f1f39c31414105847c28f99b865741c\": not found" Sep 9 00:42:31.113594 kubelet[2470]: I0909 00:42:31.113581 2470 scope.go:117] "RemoveContainer" containerID="61c762bdb296a9258f404c12e6f2b832d0cad2315914056a12aa612496fedff0" Sep 9 00:42:31.114252 containerd[1447]: time="2025-09-09T00:42:31.114211589Z" level=error msg="ContainerStatus for \"61c762bdb296a9258f404c12e6f2b832d0cad2315914056a12aa612496fedff0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"61c762bdb296a9258f404c12e6f2b832d0cad2315914056a12aa612496fedff0\": not found" Sep 9 00:42:31.114382 kubelet[2470]: E0909 00:42:31.114350 2470 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"61c762bdb296a9258f404c12e6f2b832d0cad2315914056a12aa612496fedff0\": not found" containerID="61c762bdb296a9258f404c12e6f2b832d0cad2315914056a12aa612496fedff0" Sep 9 00:42:31.114421 kubelet[2470]: I0909 00:42:31.114382 2470 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"61c762bdb296a9258f404c12e6f2b832d0cad2315914056a12aa612496fedff0"} err="failed to get container status \"61c762bdb296a9258f404c12e6f2b832d0cad2315914056a12aa612496fedff0\": rpc error: code = NotFound desc = an error occurred when try to find container \"61c762bdb296a9258f404c12e6f2b832d0cad2315914056a12aa612496fedff0\": not found" Sep 9 00:42:31.114421 kubelet[2470]: I0909 00:42:31.114397 2470 scope.go:117] 
"RemoveContainer" containerID="4506398341978f1857919a34432284506b8e26975dbb4f7a8704feabf1578bdc" Sep 9 00:42:31.114609 containerd[1447]: time="2025-09-09T00:42:31.114569677Z" level=error msg="ContainerStatus for \"4506398341978f1857919a34432284506b8e26975dbb4f7a8704feabf1578bdc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4506398341978f1857919a34432284506b8e26975dbb4f7a8704feabf1578bdc\": not found" Sep 9 00:42:31.114763 kubelet[2470]: E0909 00:42:31.114731 2470 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4506398341978f1857919a34432284506b8e26975dbb4f7a8704feabf1578bdc\": not found" containerID="4506398341978f1857919a34432284506b8e26975dbb4f7a8704feabf1578bdc" Sep 9 00:42:31.114803 kubelet[2470]: I0909 00:42:31.114779 2470 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4506398341978f1857919a34432284506b8e26975dbb4f7a8704feabf1578bdc"} err="failed to get container status \"4506398341978f1857919a34432284506b8e26975dbb4f7a8704feabf1578bdc\": rpc error: code = NotFound desc = an error occurred when try to find container \"4506398341978f1857919a34432284506b8e26975dbb4f7a8704feabf1578bdc\": not found" Sep 9 00:42:31.114803 kubelet[2470]: I0909 00:42:31.114796 2470 scope.go:117] "RemoveContainer" containerID="865726d17d155da0fda6fa08950c2ecfa9d294fd92ff0e41caf832f46c43aa72" Sep 9 00:42:31.115839 containerd[1447]: time="2025-09-09T00:42:31.115025328Z" level=error msg="ContainerStatus for \"865726d17d155da0fda6fa08950c2ecfa9d294fd92ff0e41caf832f46c43aa72\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"865726d17d155da0fda6fa08950c2ecfa9d294fd92ff0e41caf832f46c43aa72\": not found" Sep 9 00:42:31.115960 kubelet[2470]: E0909 00:42:31.115151 2470 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"865726d17d155da0fda6fa08950c2ecfa9d294fd92ff0e41caf832f46c43aa72\": not found" containerID="865726d17d155da0fda6fa08950c2ecfa9d294fd92ff0e41caf832f46c43aa72" Sep 9 00:42:31.115960 kubelet[2470]: I0909 00:42:31.115173 2470 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"865726d17d155da0fda6fa08950c2ecfa9d294fd92ff0e41caf832f46c43aa72"} err="failed to get container status \"865726d17d155da0fda6fa08950c2ecfa9d294fd92ff0e41caf832f46c43aa72\": rpc error: code = NotFound desc = an error occurred when try to find container \"865726d17d155da0fda6fa08950c2ecfa9d294fd92ff0e41caf832f46c43aa72\": not found" Sep 9 00:42:31.115960 kubelet[2470]: I0909 00:42:31.115214 2470 scope.go:117] "RemoveContainer" containerID="66db8a92c458f11130502103525c7d06d611a43fd06215fcb3616daa8b5af3f0" Sep 9 00:42:31.116446 containerd[1447]: time="2025-09-09T00:42:31.116417280Z" level=info msg="RemoveContainer for \"66db8a92c458f11130502103525c7d06d611a43fd06215fcb3616daa8b5af3f0\"" Sep 9 00:42:31.129936 containerd[1447]: time="2025-09-09T00:42:31.129884271Z" level=info msg="RemoveContainer for \"66db8a92c458f11130502103525c7d06d611a43fd06215fcb3616daa8b5af3f0\" returns successfully" Sep 9 00:42:31.896027 sshd[4124]: pam_unix(sshd:session): session closed for user core Sep 9 00:42:31.906127 systemd[1]: sshd@22-10.0.0.154:22-10.0.0.1:50706.service: Deactivated successfully. Sep 9 00:42:31.907932 systemd[1]: session-23.scope: Deactivated successfully. 
Sep 9 00:42:31.909753 systemd[1]: session-23.scope: Consumed 1.900s CPU time. Sep 9 00:42:31.910840 systemd-logind[1423]: Session 23 logged out. Waiting for processes to exit. Sep 9 00:42:31.925949 systemd[1]: Started sshd@23-10.0.0.154:22-10.0.0.1:57156.service - OpenSSH per-connection server daemon (10.0.0.1:57156). Sep 9 00:42:31.926773 systemd-logind[1423]: Removed session 23. Sep 9 00:42:31.963422 sshd[4290]: Accepted publickey for core from 10.0.0.1 port 57156 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:42:31.964733 sshd[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:42:31.968493 systemd-logind[1423]: New session 24 of user core. Sep 9 00:42:31.974836 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 9 00:42:32.798355 kubelet[2470]: I0909 00:42:32.797500 2470 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="989f0d3f-74d1-49f8-bc4d-6b11648f041d" path="/var/lib/kubelet/pods/989f0d3f-74d1-49f8-bc4d-6b11648f041d/volumes" Sep 9 00:42:32.798355 kubelet[2470]: I0909 00:42:32.797891 2470 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb044a22-e81d-4d65-aca4-4f81a978615e" path="/var/lib/kubelet/pods/bb044a22-e81d-4d65-aca4-4f81a978615e/volumes" Sep 9 00:42:32.868698 kubelet[2470]: E0909 00:42:32.868648 2470 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 00:42:33.332281 sshd[4290]: pam_unix(sshd:session): session closed for user core Sep 9 00:42:33.345309 systemd[1]: sshd@23-10.0.0.154:22-10.0.0.1:57156.service: Deactivated successfully. Sep 9 00:42:33.348951 kubelet[2470]: E0909 00:42:33.345823 2470 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bb044a22-e81d-4d65-aca4-4f81a978615e" containerName="mount-bpf-fs" Sep 9 00:42:33.348951 kubelet[2470]: E0909 00:42:33.345845 2470 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bb044a22-e81d-4d65-aca4-4f81a978615e" containerName="clean-cilium-state" Sep 9 00:42:33.348951 kubelet[2470]: E0909 00:42:33.345852 2470 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bb044a22-e81d-4d65-aca4-4f81a978615e" containerName="mount-cgroup" Sep 9 00:42:33.348951 kubelet[2470]: E0909 00:42:33.345859 2470 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bb044a22-e81d-4d65-aca4-4f81a978615e" containerName="apply-sysctl-overwrites" Sep 9 00:42:33.348951 kubelet[2470]: E0909 00:42:33.345865 2470 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="989f0d3f-74d1-49f8-bc4d-6b11648f041d" containerName="cilium-operator" Sep 9 00:42:33.348951 kubelet[2470]: E0909 00:42:33.345870 2470 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bb044a22-e81d-4d65-aca4-4f81a978615e" containerName="cilium-agent" Sep 9 00:42:33.348951 kubelet[2470]: I0909 00:42:33.345894 2470 memory_manager.go:354] "RemoveStaleState removing state" podUID="989f0d3f-74d1-49f8-bc4d-6b11648f041d" containerName="cilium-operator" Sep 9 00:42:33.348951 kubelet[2470]: I0909 00:42:33.345900 2470 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb044a22-e81d-4d65-aca4-4f81a978615e" containerName="cilium-agent" Sep 9 00:42:33.348869 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 00:42:33.350736 systemd[1]: session-24.scope: Consumed 1.267s CPU time. 
Sep 9 00:42:33.354198 systemd-logind[1423]: Session 24 logged out. Waiting for processes to exit. Sep 9 00:42:33.367042 systemd[1]: Started sshd@24-10.0.0.154:22-10.0.0.1:57162.service - OpenSSH per-connection server daemon (10.0.0.1:57162). Sep 9 00:42:33.367823 systemd-logind[1423]: Removed session 24. Sep 9 00:42:33.378498 systemd[1]: Created slice kubepods-burstable-podf12e2981_29ab_4edd_b728_c3b8e252d9f0.slice - libcontainer container kubepods-burstable-podf12e2981_29ab_4edd_b728_c3b8e252d9f0.slice. Sep 9 00:42:33.407138 sshd[4304]: Accepted publickey for core from 10.0.0.1 port 57162 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:42:33.408454 sshd[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:42:33.411852 systemd-logind[1423]: New session 25 of user core. Sep 9 00:42:33.419179 kubelet[2470]: I0909 00:42:33.419147 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f12e2981-29ab-4edd-b728-c3b8e252d9f0-lib-modules\") pod \"cilium-7hd9t\" (UID: \"f12e2981-29ab-4edd-b728-c3b8e252d9f0\") " pod="kube-system/cilium-7hd9t" Sep 9 00:42:33.419460 kubelet[2470]: I0909 00:42:33.419317 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f12e2981-29ab-4edd-b728-c3b8e252d9f0-cilium-run\") pod \"cilium-7hd9t\" (UID: \"f12e2981-29ab-4edd-b728-c3b8e252d9f0\") " pod="kube-system/cilium-7hd9t" Sep 9 00:42:33.419460 kubelet[2470]: I0909 00:42:33.419346 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f12e2981-29ab-4edd-b728-c3b8e252d9f0-hostproc\") pod \"cilium-7hd9t\" (UID: \"f12e2981-29ab-4edd-b728-c3b8e252d9f0\") " pod="kube-system/cilium-7hd9t" Sep 9 00:42:33.419460 kubelet[2470]: I0909 00:42:33.419367 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f12e2981-29ab-4edd-b728-c3b8e252d9f0-cilium-cgroup\") pod \"cilium-7hd9t\" (UID: \"f12e2981-29ab-4edd-b728-c3b8e252d9f0\") " pod="kube-system/cilium-7hd9t" Sep 9 00:42:33.419460 kubelet[2470]: I0909 00:42:33.419384 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f12e2981-29ab-4edd-b728-c3b8e252d9f0-cilium-ipsec-secrets\") pod \"cilium-7hd9t\" (UID: \"f12e2981-29ab-4edd-b728-c3b8e252d9f0\") " pod="kube-system/cilium-7hd9t" Sep 9 00:42:33.419460 kubelet[2470]: I0909 00:42:33.419402 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7h24\" (UniqueName: \"kubernetes.io/projected/f12e2981-29ab-4edd-b728-c3b8e252d9f0-kube-api-access-x7h24\") pod \"cilium-7hd9t\" (UID: \"f12e2981-29ab-4edd-b728-c3b8e252d9f0\") " pod="kube-system/cilium-7hd9t" Sep 9 00:42:33.419460 kubelet[2470]: I0909 00:42:33.419421 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f12e2981-29ab-4edd-b728-c3b8e252d9f0-hubble-tls\") pod \"cilium-7hd9t\" (UID: \"f12e2981-29ab-4edd-b728-c3b8e252d9f0\") " pod="kube-system/cilium-7hd9t" Sep 9 00:42:33.419617 kubelet[2470]: I0909 00:42:33.419437 2470 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f12e2981-29ab-4edd-b728-c3b8e252d9f0-etc-cni-netd\") pod \"cilium-7hd9t\" (UID: \"f12e2981-29ab-4edd-b728-c3b8e252d9f0\") " pod="kube-system/cilium-7hd9t" Sep 9 00:42:33.419812 kubelet[2470]: I0909 00:42:33.419660 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f12e2981-29ab-4edd-b728-c3b8e252d9f0-clustermesh-secrets\") pod \"cilium-7hd9t\" (UID: \"f12e2981-29ab-4edd-b728-c3b8e252d9f0\") " pod="kube-system/cilium-7hd9t" Sep 9 00:42:33.419812 kubelet[2470]: I0909 00:42:33.419701 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f12e2981-29ab-4edd-b728-c3b8e252d9f0-host-proc-sys-net\") pod \"cilium-7hd9t\" (UID: \"f12e2981-29ab-4edd-b728-c3b8e252d9f0\") " pod="kube-system/cilium-7hd9t" Sep 9 00:42:33.419812 kubelet[2470]: I0909 00:42:33.419718 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f12e2981-29ab-4edd-b728-c3b8e252d9f0-bpf-maps\") pod \"cilium-7hd9t\" (UID: \"f12e2981-29ab-4edd-b728-c3b8e252d9f0\") " pod="kube-system/cilium-7hd9t" Sep 9 00:42:33.419812 kubelet[2470]: I0909 00:42:33.419736 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f12e2981-29ab-4edd-b728-c3b8e252d9f0-cilium-config-path\") pod \"cilium-7hd9t\" (UID: \"f12e2981-29ab-4edd-b728-c3b8e252d9f0\") " pod="kube-system/cilium-7hd9t" Sep 9 00:42:33.419812 kubelet[2470]: I0909 00:42:33.419750 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f12e2981-29ab-4edd-b728-c3b8e252d9f0-cni-path\") pod \"cilium-7hd9t\" (UID: \"f12e2981-29ab-4edd-b728-c3b8e252d9f0\") " pod="kube-system/cilium-7hd9t" Sep 9 00:42:33.419812 kubelet[2470]: I0909 00:42:33.419764 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f12e2981-29ab-4edd-b728-c3b8e252d9f0-host-proc-sys-kernel\") pod \"cilium-7hd9t\" (UID: \"f12e2981-29ab-4edd-b728-c3b8e252d9f0\") " pod="kube-system/cilium-7hd9t" Sep 9 00:42:33.419954 kubelet[2470]: I0909 00:42:33.419782 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f12e2981-29ab-4edd-b728-c3b8e252d9f0-xtables-lock\") pod \"cilium-7hd9t\" (UID: \"f12e2981-29ab-4edd-b728-c3b8e252d9f0\") " pod="kube-system/cilium-7hd9t" Sep 9 00:42:33.420834 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 9 00:42:33.470765 sshd[4304]: pam_unix(sshd:session): session closed for user core Sep 9 00:42:33.482659 systemd[1]: sshd@24-10.0.0.154:22-10.0.0.1:57162.service: Deactivated successfully. Sep 9 00:42:33.485497 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 00:42:33.486876 systemd-logind[1423]: Session 25 logged out. Waiting for processes to exit. Sep 9 00:42:33.498098 systemd[1]: Started sshd@25-10.0.0.154:22-10.0.0.1:57172.service - OpenSSH per-connection server daemon (10.0.0.1:57172). 
Sep 9 00:42:33.499508 systemd-logind[1423]: Removed session 25. Sep 9 00:42:33.533420 sshd[4312]: Accepted publickey for core from 10.0.0.1 port 57172 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:42:33.535028 sshd[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:42:33.541993 systemd-logind[1423]: New session 26 of user core. Sep 9 00:42:33.557831 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 9 00:42:33.682035 kubelet[2470]: E0909 00:42:33.681926 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:42:33.683741 containerd[1447]: time="2025-09-09T00:42:33.682629405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7hd9t,Uid:f12e2981-29ab-4edd-b728-c3b8e252d9f0,Namespace:kube-system,Attempt:0,}" Sep 9 00:42:33.703387 containerd[1447]: time="2025-09-09T00:42:33.703290104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:42:33.703879 containerd[1447]: time="2025-09-09T00:42:33.703840436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:42:33.703988 containerd[1447]: time="2025-09-09T00:42:33.703966199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:42:33.704233 containerd[1447]: time="2025-09-09T00:42:33.704198204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:42:33.728901 systemd[1]: Started cri-containerd-3e4896323c4bbf2cfea0bc944dbdc435f5c41ab259ca815cf79e86068fe23ed7.scope - libcontainer container 3e4896323c4bbf2cfea0bc944dbdc435f5c41ab259ca815cf79e86068fe23ed7. Sep 9 00:42:33.758034 containerd[1447]: time="2025-09-09T00:42:33.757978958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7hd9t,Uid:f12e2981-29ab-4edd-b728-c3b8e252d9f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e4896323c4bbf2cfea0bc944dbdc435f5c41ab259ca815cf79e86068fe23ed7\"" Sep 9 00:42:33.758978 kubelet[2470]: E0909 00:42:33.758952 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:42:33.777593 containerd[1447]: time="2025-09-09T00:42:33.777549872Z" level=info msg="CreateContainer within sandbox \"3e4896323c4bbf2cfea0bc944dbdc435f5c41ab259ca815cf79e86068fe23ed7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 00:42:33.787301 containerd[1447]: time="2025-09-09T00:42:33.787150845Z" level=info msg="CreateContainer within sandbox \"3e4896323c4bbf2cfea0bc944dbdc435f5c41ab259ca815cf79e86068fe23ed7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7e344d8abb78637df542c722665d53df621b690d22090f7398d18d1446b5f8a0\"" Sep 9 00:42:33.788049 containerd[1447]: time="2025-09-09T00:42:33.788019425Z" level=info msg="StartContainer for \"7e344d8abb78637df542c722665d53df621b690d22090f7398d18d1446b5f8a0\"" Sep 9 00:42:33.820826 systemd[1]: Started cri-containerd-7e344d8abb78637df542c722665d53df621b690d22090f7398d18d1446b5f8a0.scope - libcontainer container 7e344d8abb78637df542c722665d53df621b690d22090f7398d18d1446b5f8a0. 
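
Creating the replacement cilium-7hd9t pod is the inverse of the teardown earlier: RunPodSandbox returns a sandbox ID, then each init container is created within that sandbox and started. A pared-down sketch of the sequence — only enough config to compile, with pod metadata copied from the log; the image name is a placeholder and the client wiring is assumed to match the earlier sketches:

```go
// Sketch of the CRI sequence behind "RunPodSandbox ... returns sandbox id"
// and "CreateContainer within sandbox ..." above. The real kubelet fills in
// mounts, env, security context, and much more.
package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func runCiliumPod(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-7hd9t",
			Namespace: "kube-system",
			Uid:       "f12e2981-29ab-4edd-b728-c3b8e252d9f0",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		return err
	}
	// First init container; the journal repeats this create/start/exit cycle
	// for apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, and
	// finally the long-running cilium-agent.
	cr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup"},
			Image:    &runtimeapi.ImageSpec{Image: "example.invalid/cilium:placeholder"}, // placeholder image
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return err
	}
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cr.ContainerId})
	return err
}
```
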
Sep 9 00:42:33.845874 containerd[1447]: time="2025-09-09T00:42:33.845831588Z" level=info msg="StartContainer for \"7e344d8abb78637df542c722665d53df621b690d22090f7398d18d1446b5f8a0\" returns successfully" Sep 9 00:42:33.853150 systemd[1]: cri-containerd-7e344d8abb78637df542c722665d53df621b690d22090f7398d18d1446b5f8a0.scope: Deactivated successfully. Sep 9 00:42:33.879913 containerd[1447]: time="2025-09-09T00:42:33.879851183Z" level=info msg="shim disconnected" id=7e344d8abb78637df542c722665d53df621b690d22090f7398d18d1446b5f8a0 namespace=k8s.io Sep 9 00:42:33.879913 containerd[1447]: time="2025-09-09T00:42:33.879904825Z" level=warning msg="cleaning up after shim disconnected" id=7e344d8abb78637df542c722665d53df621b690d22090f7398d18d1446b5f8a0 namespace=k8s.io Sep 9 00:42:33.879913 containerd[1447]: time="2025-09-09T00:42:33.879913865Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:42:34.061773 kubelet[2470]: E0909 00:42:34.061670 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:42:34.067404 containerd[1447]: time="2025-09-09T00:42:34.066907307Z" level=info msg="CreateContainer within sandbox \"3e4896323c4bbf2cfea0bc944dbdc435f5c41ab259ca815cf79e86068fe23ed7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 00:42:34.086451 containerd[1447]: time="2025-09-09T00:42:34.086329890Z" level=info msg="CreateContainer within sandbox \"3e4896323c4bbf2cfea0bc944dbdc435f5c41ab259ca815cf79e86068fe23ed7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"314084a4de1f87e9c86c39c9821174e0d8f6ea8648008b7d632b4efd6a556bc2\"" Sep 9 00:42:34.086913 containerd[1447]: time="2025-09-09T00:42:34.086872102Z" level=info msg="StartContainer for \"314084a4de1f87e9c86c39c9821174e0d8f6ea8648008b7d632b4efd6a556bc2\"" Sep 9 00:42:34.114830 systemd[1]: Started cri-containerd-314084a4de1f87e9c86c39c9821174e0d8f6ea8648008b7d632b4efd6a556bc2.scope - libcontainer container 314084a4de1f87e9c86c39c9821174e0d8f6ea8648008b7d632b4efd6a556bc2. Sep 9 00:42:34.137663 containerd[1447]: time="2025-09-09T00:42:34.137309039Z" level=info msg="StartContainer for \"314084a4de1f87e9c86c39c9821174e0d8f6ea8648008b7d632b4efd6a556bc2\" returns successfully" Sep 9 00:42:34.142088 systemd[1]: cri-containerd-314084a4de1f87e9c86c39c9821174e0d8f6ea8648008b7d632b4efd6a556bc2.scope: Deactivated successfully. 
Sep 9 00:42:34.161090 containerd[1447]: time="2025-09-09T00:42:34.160878632Z" level=info msg="shim disconnected" id=314084a4de1f87e9c86c39c9821174e0d8f6ea8648008b7d632b4efd6a556bc2 namespace=k8s.io Sep 9 00:42:34.161090 containerd[1447]: time="2025-09-09T00:42:34.160932633Z" level=warning msg="cleaning up after shim disconnected" id=314084a4de1f87e9c86c39c9821174e0d8f6ea8648008b7d632b4efd6a556bc2 namespace=k8s.io Sep 9 00:42:34.161090 containerd[1447]: time="2025-09-09T00:42:34.160940353Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:42:34.785272 kubelet[2470]: I0909 00:42:34.785173 2470 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T00:42:34Z","lastTransitionTime":"2025-09-09T00:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 9 00:42:35.065177 kubelet[2470]: E0909 00:42:35.064788 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:42:35.069924 containerd[1447]: time="2025-09-09T00:42:35.069836982Z" level=info msg="CreateContainer within sandbox \"3e4896323c4bbf2cfea0bc944dbdc435f5c41ab259ca815cf79e86068fe23ed7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 00:42:35.086203 containerd[1447]: time="2025-09-09T00:42:35.086132209Z" level=info msg="CreateContainer within sandbox \"3e4896323c4bbf2cfea0bc944dbdc435f5c41ab259ca815cf79e86068fe23ed7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"21fec2c298fef330f9f24ac5b8ec263d57d0e65d1513185de94c8d1586b05264\"" Sep 9 00:42:35.087043 containerd[1447]: time="2025-09-09T00:42:35.086636460Z" level=info msg="StartContainer for \"21fec2c298fef330f9f24ac5b8ec263d57d0e65d1513185de94c8d1586b05264\"" Sep 9 00:42:35.119873 systemd[1]: Started cri-containerd-21fec2c298fef330f9f24ac5b8ec263d57d0e65d1513185de94c8d1586b05264.scope - libcontainer container 21fec2c298fef330f9f24ac5b8ec263d57d0e65d1513185de94c8d1586b05264. Sep 9 00:42:35.155102 containerd[1447]: time="2025-09-09T00:42:35.155065920Z" level=info msg="StartContainer for \"21fec2c298fef330f9f24ac5b8ec263d57d0e65d1513185de94c8d1586b05264\" returns successfully" Sep 9 00:42:35.155314 systemd[1]: cri-containerd-21fec2c298fef330f9f24ac5b8ec263d57d0e65d1513185de94c8d1586b05264.scope: Deactivated successfully. Sep 9 00:42:35.177368 containerd[1447]: time="2025-09-09T00:42:35.177316315Z" level=info msg="shim disconnected" id=21fec2c298fef330f9f24ac5b8ec263d57d0e65d1513185de94c8d1586b05264 namespace=k8s.io Sep 9 00:42:35.177368 containerd[1447]: time="2025-09-09T00:42:35.177367036Z" level=warning msg="cleaning up after shim disconnected" id=21fec2c298fef330f9f24ac5b8ec263d57d0e65d1513185de94c8d1586b05264 namespace=k8s.io Sep 9 00:42:35.177368 containerd[1447]: time="2025-09-09T00:42:35.177375796Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:42:35.524026 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21fec2c298fef330f9f24ac5b8ec263d57d0e65d1513185de94c8d1586b05264-rootfs.mount: Deactivated successfully. 
Sep 9 00:42:36.069932 kubelet[2470]: E0909 00:42:36.069094 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:42:36.070989 containerd[1447]: time="2025-09-09T00:42:36.070932269Z" level=info msg="CreateContainer within sandbox \"3e4896323c4bbf2cfea0bc944dbdc435f5c41ab259ca815cf79e86068fe23ed7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 00:42:36.083805 containerd[1447]: time="2025-09-09T00:42:36.083756297Z" level=info msg="CreateContainer within sandbox \"3e4896323c4bbf2cfea0bc944dbdc435f5c41ab259ca815cf79e86068fe23ed7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fd0033c9c9e8285892cd579267227054b8d480f6f6f5fc2d2f52779b505c170a\"" Sep 9 00:42:36.084269 containerd[1447]: time="2025-09-09T00:42:36.084230947Z" level=info msg="StartContainer for \"fd0033c9c9e8285892cd579267227054b8d480f6f6f5fc2d2f52779b505c170a\"" Sep 9 00:42:36.127146 systemd[1]: Started cri-containerd-fd0033c9c9e8285892cd579267227054b8d480f6f6f5fc2d2f52779b505c170a.scope - libcontainer container fd0033c9c9e8285892cd579267227054b8d480f6f6f5fc2d2f52779b505c170a. Sep 9 00:42:36.147199 systemd[1]: cri-containerd-fd0033c9c9e8285892cd579267227054b8d480f6f6f5fc2d2f52779b505c170a.scope: Deactivated successfully. Sep 9 00:42:36.181157 containerd[1447]: time="2025-09-09T00:42:36.181104214Z" level=info msg="StartContainer for \"fd0033c9c9e8285892cd579267227054b8d480f6f6f5fc2d2f52779b505c170a\" returns successfully" Sep 9 00:42:36.188112 containerd[1447]: time="2025-09-09T00:42:36.173145207Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf12e2981_29ab_4edd_b728_c3b8e252d9f0.slice/cri-containerd-fd0033c9c9e8285892cd579267227054b8d480f6f6f5fc2d2f52779b505c170a.scope/memory.events\": no such file or directory" Sep 9 00:42:36.201690 containerd[1447]: time="2025-09-09T00:42:36.201616203Z" level=info msg="shim disconnected" id=fd0033c9c9e8285892cd579267227054b8d480f6f6f5fc2d2f52779b505c170a namespace=k8s.io Sep 9 00:42:36.201690 containerd[1447]: time="2025-09-09T00:42:36.201668084Z" level=warning msg="cleaning up after shim disconnected" id=fd0033c9c9e8285892cd579267227054b8d480f6f6f5fc2d2f52779b505c170a namespace=k8s.io Sep 9 00:42:36.201690 containerd[1447]: time="2025-09-09T00:42:36.201691484Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:42:36.524486 systemd[1]: run-containerd-runc-k8s.io-fd0033c9c9e8285892cd579267227054b8d480f6f6f5fc2d2f52779b505c170a-runc.IdOJiv.mount: Deactivated successfully. Sep 9 00:42:36.524590 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd0033c9c9e8285892cd579267227054b8d480f6f6f5fc2d2f52779b505c170a-rootfs.mount: Deactivated successfully. 
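
Each of Cilium's init steps above (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) leaves the same trail in the journal: StartContainer returns, the task exits, the cri-containerd scope deactivates, and the shim disconnects. One way to watch these transitions directly is containerd's event stream; a sketch subscribing to task events on the k8s.io namespace, assuming the default socket path (the filter syntax here is containerd's regex match, which may need adjusting per version):

```go
// Subscribe to containerd task lifecycle events (/tasks/start, /tasks/exit,
// ...) in the k8s.io namespace used by the CRI plugin.
package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	// Restrict to task topics; each init-container exit seen in the journal
	// would surface here as a /tasks/exit envelope.
	ch, errs := client.Subscribe(ctx, `topic~="/tasks/"`)
	for {
		select {
		case env := <-ch:
			log.Printf("event: topic=%s namespace=%s", env.Topic, env.Namespace)
		case err := <-errs:
			log.Fatal(err)
		}
	}
}
```
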
Sep 9 00:42:37.074504 kubelet[2470]: E0909 00:42:37.074469 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:42:37.078036 containerd[1447]: time="2025-09-09T00:42:37.077847662Z" level=info msg="CreateContainer within sandbox \"3e4896323c4bbf2cfea0bc944dbdc435f5c41ab259ca815cf79e86068fe23ed7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 00:42:37.091232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1730803545.mount: Deactivated successfully. Sep 9 00:42:37.103006 containerd[1447]: time="2025-09-09T00:42:37.102921176Z" level=info msg="CreateContainer within sandbox \"3e4896323c4bbf2cfea0bc944dbdc435f5c41ab259ca815cf79e86068fe23ed7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4a34684b6102ae3ea9c78fde13014211e0b1a48bd78269c31c2d473e9b7028c5\"" Sep 9 00:42:37.105084 containerd[1447]: time="2025-09-09T00:42:37.103993358Z" level=info msg="StartContainer for \"4a34684b6102ae3ea9c78fde13014211e0b1a48bd78269c31c2d473e9b7028c5\"" Sep 9 00:42:37.140859 systemd[1]: Started cri-containerd-4a34684b6102ae3ea9c78fde13014211e0b1a48bd78269c31c2d473e9b7028c5.scope - libcontainer container 4a34684b6102ae3ea9c78fde13014211e0b1a48bd78269c31c2d473e9b7028c5. Sep 9 00:42:37.169547 containerd[1447]: time="2025-09-09T00:42:37.169494222Z" level=info msg="StartContainer for \"4a34684b6102ae3ea9c78fde13014211e0b1a48bd78269c31c2d473e9b7028c5\" returns successfully" Sep 9 00:42:37.422751 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 9 00:42:38.080301 kubelet[2470]: E0909 00:42:38.080263 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:42:38.095965 kubelet[2470]: I0909 00:42:38.095907 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7hd9t" podStartSLOduration=5.095890273 podStartE2EDuration="5.095890273s" podCreationTimestamp="2025-09-09 00:42:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:42:38.095707949 +0000 UTC m=+85.376821822" watchObservedRunningTime="2025-09-09 00:42:38.095890273 +0000 UTC m=+85.377004146" Sep 9 00:42:38.795456 kubelet[2470]: E0909 00:42:38.795416 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:42:39.684690 kubelet[2470]: E0909 00:42:39.684382 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:42:40.295665 systemd-networkd[1385]: lxc_health: Link UP Sep 9 00:42:40.303363 systemd-networkd[1385]: lxc_health: Gained carrier Sep 9 00:42:41.690543 kubelet[2470]: E0909 00:42:41.687322 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:42:41.726778 systemd-networkd[1385]: lxc_health: Gained IPv6LL Sep 9 00:42:41.794872 kubelet[2470]: E0909 00:42:41.794509 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:42:42.090670 kubelet[2470]: E0909 00:42:42.090591 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:42:43.090961 kubelet[2470]: E0909 00:42:43.090923 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:42:46.383954 kubelet[2470]: E0909 00:42:46.383463 2470 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:43492->127.0.0.1:41735: write tcp 127.0.0.1:43492->127.0.0.1:41735: write: connection reset by peer Sep 9 00:42:46.398931 sshd[4312]: pam_unix(sshd:session): session closed for user core Sep 9 00:42:46.402455 systemd[1]: sshd@25-10.0.0.154:22-10.0.0.1:57172.service: Deactivated successfully. Sep 9 00:42:46.404232 systemd[1]: session-26.scope: Deactivated successfully. Sep 9 00:42:46.405833 systemd-logind[1423]: Session 26 logged out. Waiting for processes to exit. Sep 9 00:42:46.406866 systemd-logind[1423]: Removed session 26.