Sep 10 00:08:44.866282 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 10 00:08:44.866302 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Sep 9 22:41:53 -00 2025
Sep 10 00:08:44.866312 kernel: KASLR enabled
Sep 10 00:08:44.866318 kernel: efi: EFI v2.7 by EDK II
Sep 10 00:08:44.866324 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Sep 10 00:08:44.866329 kernel: random: crng init done
Sep 10 00:08:44.866336 kernel: ACPI: Early table checksum verification disabled
Sep 10 00:08:44.866342 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Sep 10 00:08:44.866348 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 10 00:08:44.866356 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:08:44.866362 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:08:44.866400 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:08:44.866408 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:08:44.866414 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:08:44.866421 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:08:44.866430 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:08:44.866437 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:08:44.866443 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:08:44.866449 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 10 00:08:44.866456 kernel: NUMA: Failed to initialise from firmware
Sep 10 00:08:44.866462 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 00:08:44.866468 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Sep 10 00:08:44.866475 kernel: Zone ranges:
Sep 10 00:08:44.866481 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 00:08:44.866487 kernel: DMA32 empty
Sep 10 00:08:44.866494 kernel: Normal empty
Sep 10 00:08:44.866501 kernel: Movable zone start for each node
Sep 10 00:08:44.866507 kernel: Early memory node ranges
Sep 10 00:08:44.866520 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Sep 10 00:08:44.866559 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 10 00:08:44.866567 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 10 00:08:44.866573 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 10 00:08:44.866579 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 10 00:08:44.866586 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 10 00:08:44.866592 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 10 00:08:44.866598 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 00:08:44.866605 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 10 00:08:44.866614 kernel: psci: probing for conduit method from ACPI.
Sep 10 00:08:44.866620 kernel: psci: PSCIv1.1 detected in firmware.
Sep 10 00:08:44.866627 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 10 00:08:44.866635 kernel: psci: Trusted OS migration not required
Sep 10 00:08:44.866643 kernel: psci: SMC Calling Convention v1.1
Sep 10 00:08:44.866650 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 10 00:08:44.866657 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 10 00:08:44.866664 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 10 00:08:44.866671 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 10 00:08:44.866678 kernel: Detected PIPT I-cache on CPU0
Sep 10 00:08:44.866684 kernel: CPU features: detected: GIC system register CPU interface
Sep 10 00:08:44.866691 kernel: CPU features: detected: Hardware dirty bit management
Sep 10 00:08:44.866698 kernel: CPU features: detected: Spectre-v4
Sep 10 00:08:44.866704 kernel: CPU features: detected: Spectre-BHB
Sep 10 00:08:44.866711 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 10 00:08:44.866718 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 10 00:08:44.866725 kernel: CPU features: detected: ARM erratum 1418040
Sep 10 00:08:44.866732 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 10 00:08:44.866739 kernel: alternatives: applying boot alternatives
Sep 10 00:08:44.866768 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9519a2b52292e68cf8bced92b7c71fffa7243efe8697174d43c360b4308144c8
Sep 10 00:08:44.866776 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 10 00:08:44.866783 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 10 00:08:44.866790 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 10 00:08:44.866796 kernel: Fallback order for Node 0: 0
Sep 10 00:08:44.866803 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 10 00:08:44.866810 kernel: Policy zone: DMA
Sep 10 00:08:44.866816 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 10 00:08:44.866825 kernel: software IO TLB: area num 4.
Sep 10 00:08:44.866832 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 10 00:08:44.866839 kernel: Memory: 2386400K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185888K reserved, 0K cma-reserved)
Sep 10 00:08:44.866846 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 10 00:08:44.866853 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 10 00:08:44.866860 kernel: rcu: RCU event tracing is enabled.
Sep 10 00:08:44.866867 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 10 00:08:44.866874 kernel: Trampoline variant of Tasks RCU enabled.
Sep 10 00:08:44.866881 kernel: Tracing variant of Tasks RCU enabled.
Sep 10 00:08:44.866888 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 10 00:08:44.866895 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 10 00:08:44.866903 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 10 00:08:44.866909 kernel: GICv3: 256 SPIs implemented
Sep 10 00:08:44.866916 kernel: GICv3: 0 Extended SPIs implemented
Sep 10 00:08:44.866923 kernel: Root IRQ handler: gic_handle_irq
Sep 10 00:08:44.866929 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 10 00:08:44.866936 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 10 00:08:44.866943 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 10 00:08:44.866950 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 10 00:08:44.866957 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Sep 10 00:08:44.866964 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 10 00:08:44.866971 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 10 00:08:44.866978 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 10 00:08:44.866987 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 00:08:44.866994 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 10 00:08:44.867001 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 10 00:08:44.867008 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 10 00:08:44.867015 kernel: arm-pv: using stolen time PV
Sep 10 00:08:44.867022 kernel: Console: colour dummy device 80x25
Sep 10 00:08:44.867029 kernel: ACPI: Core revision 20230628
Sep 10 00:08:44.867036 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 10 00:08:44.867055 kernel: pid_max: default: 32768 minimum: 301
Sep 10 00:08:44.867062 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 10 00:08:44.867070 kernel: landlock: Up and running.
Sep 10 00:08:44.867077 kernel: SELinux: Initializing.
Sep 10 00:08:44.867084 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 00:08:44.867094 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 00:08:44.867102 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 00:08:44.867109 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 00:08:44.867116 kernel: rcu: Hierarchical SRCU implementation.
Sep 10 00:08:44.867123 kernel: rcu: Max phase no-delay instances is 400.
Sep 10 00:08:44.867130 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 10 00:08:44.867138 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 10 00:08:44.867147 kernel: Remapping and enabling EFI services.
Sep 10 00:08:44.867154 kernel: smp: Bringing up secondary CPUs ...
Sep 10 00:08:44.867161 kernel: Detected PIPT I-cache on CPU1
Sep 10 00:08:44.867168 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 10 00:08:44.867198 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 10 00:08:44.867207 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 00:08:44.867214 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 10 00:08:44.867221 kernel: Detected PIPT I-cache on CPU2
Sep 10 00:08:44.867228 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 10 00:08:44.867238 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 10 00:08:44.867245 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 00:08:44.867258 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 10 00:08:44.867267 kernel: Detected PIPT I-cache on CPU3
Sep 10 00:08:44.867274 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 10 00:08:44.867282 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 10 00:08:44.867289 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 00:08:44.867297 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 10 00:08:44.867307 kernel: smp: Brought up 1 node, 4 CPUs
Sep 10 00:08:44.867318 kernel: SMP: Total of 4 processors activated.
Sep 10 00:08:44.867328 kernel: CPU features: detected: 32-bit EL0 Support
Sep 10 00:08:44.867335 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 10 00:08:44.867342 kernel: CPU features: detected: Common not Private translations
Sep 10 00:08:44.867350 kernel: CPU features: detected: CRC32 instructions
Sep 10 00:08:44.867357 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 10 00:08:44.867364 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 10 00:08:44.867371 kernel: CPU features: detected: LSE atomic instructions
Sep 10 00:08:44.867380 kernel: CPU features: detected: Privileged Access Never
Sep 10 00:08:44.867387 kernel: CPU features: detected: RAS Extension Support
Sep 10 00:08:44.867394 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 10 00:08:44.867401 kernel: CPU: All CPU(s) started at EL1
Sep 10 00:08:44.867409 kernel: alternatives: applying system-wide alternatives
Sep 10 00:08:44.867416 kernel: devtmpfs: initialized
Sep 10 00:08:44.867423 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 10 00:08:44.867431 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 10 00:08:44.867438 kernel: pinctrl core: initialized pinctrl subsystem
Sep 10 00:08:44.867446 kernel: SMBIOS 3.0.0 present.
Sep 10 00:08:44.867453 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Sep 10 00:08:44.867460 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 10 00:08:44.867468 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 10 00:08:44.867475 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 10 00:08:44.867482 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 10 00:08:44.867490 kernel: audit: initializing netlink subsys (disabled)
Sep 10 00:08:44.867497 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Sep 10 00:08:44.867505 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 10 00:08:44.867517 kernel: cpuidle: using governor menu
Sep 10 00:08:44.867526 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 10 00:08:44.867533 kernel: ASID allocator initialised with 32768 entries
Sep 10 00:08:44.867559 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 10 00:08:44.867569 kernel: Serial: AMBA PL011 UART driver
Sep 10 00:08:44.867577 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 10 00:08:44.867584 kernel: Modules: 0 pages in range for non-PLT usage
Sep 10 00:08:44.867591 kernel: Modules: 509008 pages in range for PLT usage
Sep 10 00:08:44.867599 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 10 00:08:44.867608 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 10 00:08:44.867616 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 10 00:08:44.867623 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 10 00:08:44.867630 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 10 00:08:44.867637 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 10 00:08:44.867645 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 10 00:08:44.867652 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 10 00:08:44.867659 kernel: ACPI: Added _OSI(Module Device)
Sep 10 00:08:44.867667 kernel: ACPI: Added _OSI(Processor Device)
Sep 10 00:08:44.867675 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 10 00:08:44.867682 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 10 00:08:44.867689 kernel: ACPI: Interpreter enabled
Sep 10 00:08:44.867697 kernel: ACPI: Using GIC for interrupt routing
Sep 10 00:08:44.867704 kernel: ACPI: MCFG table detected, 1 entries
Sep 10 00:08:44.867712 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 10 00:08:44.867719 kernel: printk: console [ttyAMA0] enabled
Sep 10 00:08:44.867726 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 10 00:08:44.867868 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 10 00:08:44.867945 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 10 00:08:44.868054 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 10 00:08:44.868133 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 10 00:08:44.868199 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 10 00:08:44.868209 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 10 00:08:44.868217 kernel: PCI host bridge to bus 0000:00
Sep 10 00:08:44.868287 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 10 00:08:44.868353 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 10 00:08:44.868452 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 10 00:08:44.868520 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 10 00:08:44.868615 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 10 00:08:44.868692 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 10 00:08:44.868772 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 10 00:08:44.868905 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 10 00:08:44.868976 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 10 00:08:44.869067 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 10 00:08:44.869137 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 10 00:08:44.869243 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 10 00:08:44.869309 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 10 00:08:44.869369 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 10 00:08:44.869469 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 10 00:08:44.869482 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 10 00:08:44.869489 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 10 00:08:44.869497 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 10 00:08:44.869504 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 10 00:08:44.869511 kernel: iommu: Default domain type: Translated
Sep 10 00:08:44.869525 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 10 00:08:44.869532 kernel: efivars: Registered efivars operations
Sep 10 00:08:44.869543 kernel: vgaarb: loaded
Sep 10 00:08:44.869550 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 10 00:08:44.869558 kernel: VFS: Disk quotas dquot_6.6.0
Sep 10 00:08:44.869565 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 10 00:08:44.869573 kernel: pnp: PnP ACPI init
Sep 10 00:08:44.869659 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 10 00:08:44.869671 kernel: pnp: PnP ACPI: found 1 devices
Sep 10 00:08:44.869678 kernel: NET: Registered PF_INET protocol family
Sep 10 00:08:44.869686 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 10 00:08:44.869696 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 10 00:08:44.869703 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 10 00:08:44.869711 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 10 00:08:44.869718 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 10 00:08:44.869726 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 10 00:08:44.869733 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 00:08:44.869741 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 00:08:44.869748 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 10 00:08:44.869756 kernel: PCI: CLS 0 bytes, default 64
Sep 10 00:08:44.869764 kernel: kvm [1]: HYP mode not available
Sep 10 00:08:44.869771 kernel: Initialise system trusted keyrings
Sep 10 00:08:44.869778 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 10 00:08:44.869785 kernel: Key type asymmetric registered
Sep 10 00:08:44.869793 kernel: Asymmetric key parser 'x509' registered
Sep 10 00:08:44.869800 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 10 00:08:44.869807 kernel: io scheduler mq-deadline registered
Sep 10 00:08:44.869814 kernel: io scheduler kyber registered
Sep 10 00:08:44.869821 kernel: io scheduler bfq registered
Sep 10 00:08:44.869830 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 10 00:08:44.869837 kernel: ACPI: button: Power Button [PWRB]
Sep 10 00:08:44.869872 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 10 00:08:44.869952 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 10 00:08:44.869962 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 10 00:08:44.869970 kernel: thunder_xcv, ver 1.0
Sep 10 00:08:44.869977 kernel: thunder_bgx, ver 1.0
Sep 10 00:08:44.869984 kernel: nicpf, ver 1.0
Sep 10 00:08:44.869991 kernel: nicvf, ver 1.0
Sep 10 00:08:44.870090 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 10 00:08:44.870163 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-10T00:08:44 UTC (1757462924)
Sep 10 00:08:44.870174 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 10 00:08:44.870209 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 10 00:08:44.870218 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 10 00:08:44.870225 kernel: watchdog: Hard watchdog permanently disabled
Sep 10 00:08:44.870233 kernel: NET: Registered PF_INET6 protocol family
Sep 10 00:08:44.870240 kernel: Segment Routing with IPv6
Sep 10 00:08:44.870251 kernel: In-situ OAM (IOAM) with IPv6
Sep 10 00:08:44.870258 kernel: NET: Registered PF_PACKET protocol family
Sep 10 00:08:44.870265 kernel: Key type dns_resolver registered
Sep 10 00:08:44.870272 kernel: registered taskstats version 1
Sep 10 00:08:44.870280 kernel: Loading compiled-in X.509 certificates
Sep 10 00:08:44.870287 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: e85a1044dffeb2f9696d4659bfe36fdfbb79b10c'
Sep 10 00:08:44.870295 kernel: Key type .fscrypt registered
Sep 10 00:08:44.870302 kernel: Key type fscrypt-provisioning registered
Sep 10 00:08:44.870309 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 10 00:08:44.870317 kernel: ima: Allocated hash algorithm: sha1
Sep 10 00:08:44.870325 kernel: ima: No architecture policies found
Sep 10 00:08:44.870332 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 10 00:08:44.870339 kernel: clk: Disabling unused clocks
Sep 10 00:08:44.870347 kernel: Freeing unused kernel memory: 39424K
Sep 10 00:08:44.870354 kernel: Run /init as init process
Sep 10 00:08:44.870361 kernel: with arguments:
Sep 10 00:08:44.870368 kernel: /init
Sep 10 00:08:44.870375 kernel: with environment:
Sep 10 00:08:44.870383 kernel: HOME=/
Sep 10 00:08:44.870390 kernel: TERM=linux
Sep 10 00:08:44.870398 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 10 00:08:44.870407 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 10 00:08:44.870416 systemd[1]: Detected virtualization kvm.
Sep 10 00:08:44.870424 systemd[1]: Detected architecture arm64.
Sep 10 00:08:44.870432 systemd[1]: Running in initrd.
Sep 10 00:08:44.870440 systemd[1]: No hostname configured, using default hostname.
Sep 10 00:08:44.870448 systemd[1]: Hostname set to <localhost>.
Sep 10 00:08:44.870456 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 00:08:44.870464 systemd[1]: Queued start job for default target initrd.target.
Sep 10 00:08:44.870472 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 00:08:44.870480 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 00:08:44.870488 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 10 00:08:44.870496 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 00:08:44.870505 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 10 00:08:44.870519 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 10 00:08:44.870531 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 10 00:08:44.870539 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 10 00:08:44.870547 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 00:08:44.870555 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 00:08:44.870563 systemd[1]: Reached target paths.target - Path Units.
Sep 10 00:08:44.870573 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 00:08:44.870585 systemd[1]: Reached target swap.target - Swaps.
Sep 10 00:08:44.870595 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 00:08:44.870605 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 00:08:44.870615 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 00:08:44.870625 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 10 00:08:44.870653 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 10 00:08:44.870664 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 00:08:44.870672 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 00:08:44.870683 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 00:08:44.870691 systemd[1]: Reached target sockets.target - Socket Units.
Sep 10 00:08:44.870698 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 10 00:08:44.870706 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 00:08:44.870715 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 10 00:08:44.870722 systemd[1]: Starting systemd-fsck-usr.service...
Sep 10 00:08:44.870730 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 00:08:44.870738 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 00:08:44.870747 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:08:44.870755 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 10 00:08:44.870763 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 00:08:44.870771 systemd[1]: Finished systemd-fsck-usr.service.
Sep 10 00:08:44.870779 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 00:08:44.870808 systemd-journald[237]: Collecting audit messages is disabled.
Sep 10 00:08:44.870827 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 00:08:44.870835 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 10 00:08:44.870845 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:08:44.870852 kernel: Bridge firewalling registered
Sep 10 00:08:44.870860 systemd-journald[237]: Journal started
Sep 10 00:08:44.870879 systemd-journald[237]: Runtime Journal (/run/log/journal/419ed793bd924e879c5d50e53229ecc4) is 5.9M, max 47.3M, 41.4M free.
Sep 10 00:08:44.857633 systemd-modules-load[238]: Inserted module 'overlay'
Sep 10 00:08:44.871159 systemd-modules-load[238]: Inserted module 'br_netfilter'
Sep 10 00:08:44.877065 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:08:44.879305 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 00:08:44.881130 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 00:08:44.882100 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 00:08:44.885850 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 00:08:44.888189 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 00:08:44.890310 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 00:08:44.897843 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:08:44.902220 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 10 00:08:44.904773 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 00:08:44.908059 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 00:08:44.917004 dracut-cmdline[272]: dracut-dracut-053
Sep 10 00:08:44.918200 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 00:08:44.921264 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9519a2b52292e68cf8bced92b7c71fffa7243efe8697174d43c360b4308144c8
Sep 10 00:08:44.956707 systemd-resolved[279]: Positive Trust Anchors:
Sep 10 00:08:44.956729 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 00:08:44.956778 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 00:08:44.963422 systemd-resolved[279]: Defaulting to hostname 'linux'.
Sep 10 00:08:44.964560 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 00:08:44.968277 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 00:08:45.004098 kernel: SCSI subsystem initialized
Sep 10 00:08:45.010055 kernel: Loading iSCSI transport class v2.0-870.
Sep 10 00:08:45.018074 kernel: iscsi: registered transport (tcp)
Sep 10 00:08:45.031059 kernel: iscsi: registered transport (qla4xxx)
Sep 10 00:08:45.031092 kernel: QLogic iSCSI HBA Driver
Sep 10 00:08:45.075930 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 10 00:08:45.088197 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 10 00:08:45.106526 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 10 00:08:45.106579 kernel: device-mapper: uevent: version 1.0.3
Sep 10 00:08:45.106593 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 10 00:08:45.155071 kernel: raid6: neonx8 gen() 15747 MB/s
Sep 10 00:08:45.172058 kernel: raid6: neonx4 gen() 15676 MB/s
Sep 10 00:08:45.189057 kernel: raid6: neonx2 gen() 13220 MB/s
Sep 10 00:08:45.206063 kernel: raid6: neonx1 gen() 10517 MB/s
Sep 10 00:08:45.223057 kernel: raid6: int64x8 gen() 6959 MB/s
Sep 10 00:08:45.240057 kernel: raid6: int64x4 gen() 7354 MB/s
Sep 10 00:08:45.257064 kernel: raid6: int64x2 gen() 6134 MB/s
Sep 10 00:08:45.274071 kernel: raid6: int64x1 gen() 5056 MB/s
Sep 10 00:08:45.274098 kernel: raid6: using algorithm neonx8 gen() 15747 MB/s
Sep 10 00:08:45.291071 kernel: raid6: .... xor() 12043 MB/s, rmw enabled
Sep 10 00:08:45.291093 kernel: raid6: using neon recovery algorithm
Sep 10 00:08:45.296267 kernel: xor: measuring software checksum speed
Sep 10 00:08:45.296304 kernel: 8regs : 19769 MB/sec
Sep 10 00:08:45.297398 kernel: 32regs : 19641 MB/sec
Sep 10 00:08:45.297411 kernel: arm64_neon : 26936 MB/sec
Sep 10 00:08:45.297420 kernel: xor: using function: arm64_neon (26936 MB/sec)
Sep 10 00:08:45.346090 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 10 00:08:45.358080 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 00:08:45.366198 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 00:08:45.377575 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Sep 10 00:08:45.380693 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 00:08:45.386195 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 10 00:08:45.397556 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Sep 10 00:08:45.424260 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 00:08:45.436216 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 00:08:45.475645 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 00:08:45.485227 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 10 00:08:45.501924 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 10 00:08:45.503953 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 00:08:45.504956 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 00:08:45.506822 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 00:08:45.517242 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 10 00:08:45.528256 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 00:08:45.536436 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 10 00:08:45.536623 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 10 00:08:45.544066 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 10 00:08:45.544101 kernel: GPT:9289727 != 19775487
Sep 10 00:08:45.544111 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 10 00:08:45.545313 kernel: GPT:9289727 != 19775487
Sep 10 00:08:45.545332 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 10 00:08:45.547464 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:08:45.548688 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 10 00:08:45.548807 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:08:45.551395 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:08:45.553334 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 00:08:45.553474 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:08:45.554909 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:08:45.565063 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (507)
Sep 10 00:08:45.565097 kernel: BTRFS: device fsid 56932cd9-691c-4ccb-8da6-e6508edf5f69 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (509)
Sep 10 00:08:45.566811 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:08:45.581071 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:08:45.589447 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 10 00:08:45.594329 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 10 00:08:45.599132 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 00:08:45.603207 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 10 00:08:45.604071 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 10 00:08:45.619202 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 10 00:08:45.621217 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:08:45.626984 disk-uuid[550]: Primary Header is updated.
Sep 10 00:08:45.626984 disk-uuid[550]: Secondary Entries is updated.
Sep 10 00:08:45.626984 disk-uuid[550]: Secondary Header is updated.
Sep 10 00:08:45.631073 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:08:45.635565 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:08:45.639414 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:08:45.642082 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:08:46.639092 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:08:46.640078 disk-uuid[551]: The operation has completed successfully.
Sep 10 00:08:46.660225 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 10 00:08:46.660342 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 10 00:08:46.692240 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 10 00:08:46.695002 sh[574]: Success
Sep 10 00:08:46.704065 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 10 00:08:46.729709 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 10 00:08:46.737289 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 10 00:08:46.739142 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 10 00:08:46.748099 kernel: BTRFS info (device dm-0): first mount of filesystem 56932cd9-691c-4ccb-8da6-e6508edf5f69
Sep 10 00:08:46.748135 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 10 00:08:46.748153 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 10 00:08:46.749936 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 10 00:08:46.749952 kernel: BTRFS info (device dm-0): using free space tree
Sep 10 00:08:46.753082 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 10 00:08:46.754193 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 10 00:08:46.754926 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 10 00:08:46.757420 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 10 00:08:46.767525 kernel: BTRFS info (device vda6): first mount of filesystem 1f9a2be6-c1a7-433d-9dbe-1e5d2ce6fc09
Sep 10 00:08:46.767577 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 00:08:46.767588 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:08:46.771069 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 00:08:46.781744 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 10 00:08:46.783236 kernel: BTRFS info (device vda6): last unmount of filesystem 1f9a2be6-c1a7-433d-9dbe-1e5d2ce6fc09
Sep 10 00:08:46.790757 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 10 00:08:46.798224 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 10 00:08:46.858107 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 00:08:46.869874 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 00:08:46.875311 ignition[674]: Ignition 2.19.0
Sep 10 00:08:46.875993 ignition[674]: Stage: fetch-offline
Sep 10 00:08:46.876037 ignition[674]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:08:46.876059 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:08:46.876222 ignition[674]: parsed url from cmdline: ""
Sep 10 00:08:46.876225 ignition[674]: no config URL provided
Sep 10 00:08:46.876231 ignition[674]: reading system config file "/usr/lib/ignition/user.ign"
Sep 10 00:08:46.876237 ignition[674]: no config at "/usr/lib/ignition/user.ign"
Sep 10 00:08:46.876262 ignition[674]: op(1): [started] loading QEMU firmware config module
Sep 10 00:08:46.876267 ignition[674]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 10 00:08:46.882154 ignition[674]: op(1): [finished] loading QEMU firmware config module
Sep 10 00:08:46.889566 systemd-networkd[764]: lo: Link UP
Sep 10 00:08:46.889578 systemd-networkd[764]: lo: Gained carrier
Sep 10 00:08:46.890342 systemd-networkd[764]: Enumeration completed
Sep 10 00:08:46.890638 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 00:08:46.890764 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:08:46.890768 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 00:08:46.891462 systemd-networkd[764]: eth0: Link UP
Sep 10 00:08:46.891465 systemd-networkd[764]: eth0: Gained carrier
Sep 10 00:08:46.891471 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:08:46.892471 systemd[1]: Reached target network.target - Network.
Sep 10 00:08:46.917088 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.85/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 10 00:08:46.930732 ignition[674]: parsing config with SHA512: e4346af03917c77f9c47e480396f529332fefea1dd1a3d924fdded614ed8cf540db48ccf6f515687f2f240b962fb7d62e198d8fe7292abc2719b812e8cbe9085
Sep 10 00:08:46.934850 unknown[674]: fetched base config from "system"
Sep 10 00:08:46.934859 unknown[674]: fetched user config from "qemu"
Sep 10 00:08:46.935269 ignition[674]: fetch-offline: fetch-offline passed
Sep 10 00:08:46.935417 systemd-resolved[279]: Detected conflict on linux IN A 10.0.0.85
Sep 10 00:08:46.935331 ignition[674]: Ignition finished successfully
Sep 10 00:08:46.935426 systemd-resolved[279]: Hostname conflict, changing published hostname from 'linux' to 'linux7'.
Sep 10 00:08:46.937360 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 00:08:46.939102 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 10 00:08:46.947234 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 10 00:08:46.958582 ignition[771]: Ignition 2.19.0
Sep 10 00:08:46.958592 ignition[771]: Stage: kargs
Sep 10 00:08:46.958759 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:08:46.958769 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:08:46.959683 ignition[771]: kargs: kargs passed
Sep 10 00:08:46.959734 ignition[771]: Ignition finished successfully
Sep 10 00:08:46.961968 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 10 00:08:46.977248 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 10 00:08:46.986583 ignition[778]: Ignition 2.19.0
Sep 10 00:08:46.986594 ignition[778]: Stage: disks
Sep 10 00:08:46.986755 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:08:46.986764 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:08:46.987652 ignition[778]: disks: disks passed
Sep 10 00:08:46.989793 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 10 00:08:46.987696 ignition[778]: Ignition finished successfully
Sep 10 00:08:46.990966 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 10 00:08:46.992122 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 10 00:08:46.993764 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 00:08:46.995010 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 10 00:08:46.996631 systemd[1]: Reached target basic.target - Basic System.
Sep 10 00:08:46.999007 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 10 00:08:47.012634 systemd-fsck[788]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 10 00:08:47.015993 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 10 00:08:47.018011 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 10 00:08:47.061076 kernel: EXT4-fs (vda9): mounted filesystem 43028332-c79c-426f-8992-528d495eb356 r/w with ordered data mode. Quota mode: none.
Sep 10 00:08:47.061810 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 10 00:08:47.062990 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 10 00:08:47.072134 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 00:08:47.073712 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 10 00:08:47.074995 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 10 00:08:47.075052 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 10 00:08:47.080366 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (796)
Sep 10 00:08:47.075076 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 00:08:47.080631 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 10 00:08:47.084577 kernel: BTRFS info (device vda6): first mount of filesystem 1f9a2be6-c1a7-433d-9dbe-1e5d2ce6fc09
Sep 10 00:08:47.084597 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 00:08:47.084607 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:08:47.083969 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 10 00:08:47.088096 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 00:08:47.089274 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 00:08:47.120271 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Sep 10 00:08:47.124450 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Sep 10 00:08:47.128290 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Sep 10 00:08:47.132356 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 10 00:08:47.205877 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 10 00:08:47.214176 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 10 00:08:47.215618 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 10 00:08:47.221056 kernel: BTRFS info (device vda6): last unmount of filesystem 1f9a2be6-c1a7-433d-9dbe-1e5d2ce6fc09
Sep 10 00:08:47.235893 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 10 00:08:47.237973 ignition[911]: INFO : Ignition 2.19.0
Sep 10 00:08:47.237973 ignition[911]: INFO : Stage: mount
Sep 10 00:08:47.237973 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 00:08:47.237973 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:08:47.240951 ignition[911]: INFO : mount: mount passed
Sep 10 00:08:47.240951 ignition[911]: INFO : Ignition finished successfully
Sep 10 00:08:47.242500 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 10 00:08:47.252193 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 10 00:08:47.747806 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 10 00:08:47.760230 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 00:08:47.766505 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (925)
Sep 10 00:08:47.766541 kernel: BTRFS info (device vda6): first mount of filesystem 1f9a2be6-c1a7-433d-9dbe-1e5d2ce6fc09
Sep 10 00:08:47.766553 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 00:08:47.767190 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:08:47.770062 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 00:08:47.770890 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 00:08:47.792025 ignition[943]: INFO : Ignition 2.19.0
Sep 10 00:08:47.792025 ignition[943]: INFO : Stage: files
Sep 10 00:08:47.793366 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 00:08:47.793366 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:08:47.793366 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Sep 10 00:08:47.796153 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 10 00:08:47.796153 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 10 00:08:47.798290 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 10 00:08:47.798290 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 10 00:08:47.798290 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 10 00:08:47.798290 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 10 00:08:47.798290 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 10 00:08:47.796712 unknown[943]: wrote ssh authorized keys file for user: core
Sep 10 00:08:47.941272 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 10 00:08:48.663014 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 10 00:08:48.664620 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 10 00:08:48.664620 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 10 00:08:48.870751 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 10 00:08:48.901960 systemd-networkd[764]: eth0: Gained IPv6LL
Sep 10 00:08:48.972506 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 10 00:08:48.972506 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 10 00:08:48.975304 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 10 00:08:48.975304 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 00:08:48.975304 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 00:08:48.975304 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 00:08:48.975304 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 00:08:48.975304 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 00:08:48.975304 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 00:08:48.975304 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 00:08:48.975304 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 00:08:48.975304 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 10 00:08:48.975304 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 10 00:08:48.975304 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 10 00:08:48.975304 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 10 00:08:49.330859 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 10 00:08:49.776723 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 10 00:08:49.776723 ignition[943]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 10 00:08:49.779548 ignition[943]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 00:08:49.779548 ignition[943]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 00:08:49.779548 ignition[943]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 10 00:08:49.779548 ignition[943]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 10 00:08:49.779548 ignition[943]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 00:08:49.779548 ignition[943]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 00:08:49.779548 ignition[943]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 10 00:08:49.779548 ignition[943]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 10 00:08:49.795610 ignition[943]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 00:08:49.799874 ignition[943]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 00:08:49.801095 ignition[943]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 10 00:08:49.801095 ignition[943]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 10 00:08:49.801095 ignition[943]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 10 00:08:49.801095 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 00:08:49.801095 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 00:08:49.801095 ignition[943]: INFO : files: files passed
Sep 10 00:08:49.801095 ignition[943]: INFO : Ignition finished successfully
Sep 10 00:08:49.802686 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 10 00:08:49.818248 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 10 00:08:49.821210 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 10 00:08:49.823302 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 10 00:08:49.825104 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 10 00:08:49.829028 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 10 00:08:49.832311 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 00:08:49.832311 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 00:08:49.834813 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 00:08:49.834192 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 00:08:49.838340 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 10 00:08:49.853274 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 10 00:08:49.872981 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 10 00:08:49.873119 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 10 00:08:49.874780 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 10 00:08:49.876122 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 10 00:08:49.877457 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 10 00:08:49.878538 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 10 00:08:49.893177 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 00:08:49.895675 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 10 00:08:49.907748 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 10 00:08:49.908933 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 00:08:49.910649 systemd[1]: Stopped target timers.target - Timer Units.
Sep 10 00:08:49.911926 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 10 00:08:49.912129 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 00:08:49.914181 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 10 00:08:49.915776 systemd[1]: Stopped target basic.target - Basic System.
Sep 10 00:08:49.916985 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 10 00:08:49.918346 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 00:08:49.919904 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 10 00:08:49.921460 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 10 00:08:49.922956 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 00:08:49.924501 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 10 00:08:49.925975 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 10 00:08:49.927343 systemd[1]: Stopped target swap.target - Swaps.
Sep 10 00:08:49.928576 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 10 00:08:49.928714 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 00:08:49.930600 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 10 00:08:49.932107 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 00:08:49.933672 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 10 00:08:49.937106 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 00:08:49.939075 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 10 00:08:49.939208 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 10 00:08:49.941306 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 10 00:08:49.941438 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 00:08:49.943095 systemd[1]: Stopped target paths.target - Path Units.
Sep 10 00:08:49.944407 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 10 00:08:49.944574 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 00:08:49.945898 systemd[1]: Stopped target slices.target - Slice Units.
Sep 10 00:08:49.947186 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 10 00:08:49.948938 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 10 00:08:49.949033 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 00:08:49.950289 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 10 00:08:49.950381 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 00:08:49.951622 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 10 00:08:49.951735 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 00:08:49.953095 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 10 00:08:49.953206 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 10 00:08:49.972277 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 10 00:08:49.972992 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 10 00:08:49.973159 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 00:08:49.978489 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 10 00:08:49.979846 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 10 00:08:49.979991 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 00:08:49.981765 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 10 00:08:49.981875 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 00:08:49.985657 ignition[996]: INFO : Ignition 2.19.0
Sep 10 00:08:49.987715 ignition[996]: INFO : Stage: umount
Sep 10 00:08:49.987715 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 00:08:49.987715 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:08:49.987715 ignition[996]: INFO : umount: umount passed
Sep 10 00:08:49.987715 ignition[996]: INFO : Ignition finished successfully
Sep 10 00:08:49.990068 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 10 00:08:49.990162 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 10 00:08:49.994990 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 10 00:08:49.995508 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 10 00:08:49.995644 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 10 00:08:49.997673 systemd[1]: Stopped target network.target - Network.
Sep 10 00:08:49.998827 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 10 00:08:49.998900 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 10 00:08:50.000474 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 10 00:08:50.000531 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 10 00:08:50.001829 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 10 00:08:50.001869 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 10 00:08:50.003293 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 10 00:08:50.003337 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 10 00:08:50.005058 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 10 00:08:50.006587 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 10 00:08:50.012102 systemd-networkd[764]: eth0: DHCPv6 lease lost
Sep 10 00:08:50.013514 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 10 00:08:50.013626 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 10 00:08:50.015179 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 10 00:08:50.015214 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 00:08:50.023154 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 10 00:08:50.023829 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 10 00:08:50.023885 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 00:08:50.025663 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 00:08:50.027961 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 10 00:08:50.028066 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 10 00:08:50.031395 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 10 00:08:50.031454 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 10 00:08:50.032354 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 10 00:08:50.032394 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 10 00:08:50.034025 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 10 00:08:50.034083 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 00:08:50.036527 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 10 00:08:50.036680 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 00:08:50.039431 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 10 00:08:50.039524 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 10 00:08:50.041263 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 10 00:08:50.041327 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 10 00:08:50.042490 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 10 00:08:50.042538 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 00:08:50.043868 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 10 00:08:50.043916 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 00:08:50.046397 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 10 00:08:50.046448 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 10 00:08:50.050430 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 10 00:08:50.050491 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:08:50.067256 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 10 00:08:50.068089 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 10 00:08:50.068154 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 00:08:50.069828 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 00:08:50.069870 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:08:50.071727 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 10 00:08:50.073062 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 10 00:08:50.074879 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 10 00:08:50.074961 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 10 00:08:50.076944 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 10 00:08:50.078421 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 10 00:08:50.078480 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 10 00:08:50.080843 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 10 00:08:50.090370 systemd[1]: Switching root.
Sep 10 00:08:50.128306 systemd-journald[237]: Journal stopped
Sep 10 00:08:50.792779 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Sep 10 00:08:50.792830 kernel: SELinux: policy capability network_peer_controls=1
Sep 10 00:08:50.792845 kernel: SELinux: policy capability open_perms=1
Sep 10 00:08:50.792855 kernel: SELinux: policy capability extended_socket_class=1
Sep 10 00:08:50.792864 kernel: SELinux: policy capability always_check_network=0
Sep 10 00:08:50.792873 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 10 00:08:50.792887 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 10 00:08:50.792896 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 10 00:08:50.792905 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 10 00:08:50.792915 kernel: audit: type=1403 audit(1757462930.292:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 10 00:08:50.792928 systemd[1]: Successfully loaded SELinux policy in 33.230ms.
Sep 10 00:08:50.792952 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.588ms.
Sep 10 00:08:50.792964 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 10 00:08:50.792975 systemd[1]: Detected virtualization kvm.
Sep 10 00:08:50.792985 systemd[1]: Detected architecture arm64.
Sep 10 00:08:50.792996 systemd[1]: Detected first boot.
Sep 10 00:08:50.793006 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 00:08:50.793016 zram_generator::config[1042]: No configuration found.
Sep 10 00:08:50.793027 systemd[1]: Populated /etc with preset unit settings.
Sep 10 00:08:50.793093 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 10 00:08:50.793108 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 10 00:08:50.793119 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 10 00:08:50.793129 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 10 00:08:50.793140 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 10 00:08:50.793150 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 10 00:08:50.793160 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 10 00:08:50.793171 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 10 00:08:50.793181 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 10 00:08:50.793193 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 10 00:08:50.793204 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 10 00:08:50.793215 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 00:08:50.793227 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 00:08:50.793238 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 10 00:08:50.793249 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 10 00:08:50.793259 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 10 00:08:50.793270 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 00:08:50.793280 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 10 00:08:50.793293 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 00:08:50.793304 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 10 00:08:50.793315 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 10 00:08:50.793325 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 10 00:08:50.793336 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 10 00:08:50.793346 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 00:08:50.793356 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 00:08:50.793367 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 00:08:50.793378 systemd[1]: Reached target swap.target - Swaps.
Sep 10 00:08:50.793389 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 10 00:08:50.793400 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 10 00:08:50.793410 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 00:08:50.793420 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 00:08:50.793431 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 00:08:50.793441 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 10 00:08:50.793453 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 10 00:08:50.793463 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 10 00:08:50.793475 systemd[1]: Mounting media.mount - External Media Directory...
Sep 10 00:08:50.793486 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 10 00:08:50.793504 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 10 00:08:50.793516 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 10 00:08:50.793527 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 10 00:08:50.793538 systemd[1]: Reached target machines.target - Containers.
Sep 10 00:08:50.793549 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 10 00:08:50.793559 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 00:08:50.793572 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 00:08:50.793582 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 10 00:08:50.793592 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 00:08:50.793602 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 00:08:50.793613 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 00:08:50.793623 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 10 00:08:50.793634 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 00:08:50.793645 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 10 00:08:50.793655 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 10 00:08:50.793667 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 10 00:08:50.793678 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 10 00:08:50.793690 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 10 00:08:50.793700 kernel: fuse: init (API version 7.39)
Sep 10 00:08:50.793709 kernel: loop: module loaded
Sep 10 00:08:50.793719 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 00:08:50.793729 kernel: ACPI: bus type drm_connector registered
Sep 10 00:08:50.793739 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 00:08:50.793749 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 10 00:08:50.793761 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 10 00:08:50.793772 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 00:08:50.793782 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 10 00:08:50.793793 systemd[1]: Stopped verity-setup.service.
Sep 10 00:08:50.793822 systemd-journald[1107]: Collecting audit messages is disabled.
Sep 10 00:08:50.793843 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 10 00:08:50.793854 systemd-journald[1107]: Journal started
Sep 10 00:08:50.793876 systemd-journald[1107]: Runtime Journal (/run/log/journal/419ed793bd924e879c5d50e53229ecc4) is 5.9M, max 47.3M, 41.4M free.
Sep 10 00:08:50.621120 systemd[1]: Queued start job for default target multi-user.target.
Sep 10 00:08:50.634938 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 10 00:08:50.635287 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 10 00:08:50.796872 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 00:08:50.797450 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 10 00:08:50.798392 systemd[1]: Mounted media.mount - External Media Directory.
Sep 10 00:08:50.799265 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 10 00:08:50.800188 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 10 00:08:50.801094 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 10 00:08:50.802047 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 10 00:08:50.804189 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 00:08:50.805452 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 10 00:08:50.805739 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 10 00:08:50.808400 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 00:08:50.808626 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 00:08:50.809883 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 00:08:50.810010 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 00:08:50.812422 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 00:08:50.812572 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 00:08:50.813732 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 10 00:08:50.813862 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 10 00:08:50.814960 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 00:08:50.815123 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 00:08:50.816214 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 00:08:50.817331 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 00:08:50.818503 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 10 00:08:50.829842 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 10 00:08:50.839156 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 10 00:08:50.840950 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 10 00:08:50.841880 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 10 00:08:50.841911 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 00:08:50.843612 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 10 00:08:50.846581 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 10 00:08:50.848450 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 10 00:08:50.849333 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 00:08:50.850562 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 10 00:08:50.852253 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 10 00:08:50.853152 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 00:08:50.856197 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 10 00:08:50.857227 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 00:08:50.860224 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 00:08:50.862431 systemd-journald[1107]: Time spent on flushing to /var/log/journal/419ed793bd924e879c5d50e53229ecc4 is 19.728ms for 859 entries.
Sep 10 00:08:50.862431 systemd-journald[1107]: System Journal (/var/log/journal/419ed793bd924e879c5d50e53229ecc4) is 8.0M, max 195.6M, 187.6M free.
Sep 10 00:08:50.891597 systemd-journald[1107]: Received client request to flush runtime journal.
Sep 10 00:08:50.891645 kernel: loop0: detected capacity change from 0 to 114432
Sep 10 00:08:50.891659 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 10 00:08:50.863451 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 10 00:08:50.866031 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 10 00:08:50.871164 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 00:08:50.872468 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 10 00:08:50.875207 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 10 00:08:50.876308 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 10 00:08:50.878242 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 10 00:08:50.883950 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 10 00:08:50.892304 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 10 00:08:50.895214 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 10 00:08:50.897091 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 10 00:08:50.912510 kernel: loop1: detected capacity change from 0 to 203944
Sep 10 00:08:50.913519 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 00:08:50.921830 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 10 00:08:50.923510 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 10 00:08:50.927160 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 10 00:08:50.931610 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 10 00:08:50.941311 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 00:08:50.955225 kernel: loop2: detected capacity change from 0 to 114328
Sep 10 00:08:50.959282 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Sep 10 00:08:50.959297 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Sep 10 00:08:50.966939 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 00:08:50.997156 kernel: loop3: detected capacity change from 0 to 114432
Sep 10 00:08:51.001118 kernel: loop4: detected capacity change from 0 to 203944
Sep 10 00:08:51.006138 kernel: loop5: detected capacity change from 0 to 114328
Sep 10 00:08:51.008672 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 10 00:08:51.009055 (sd-merge)[1178]: Merged extensions into '/usr'.
Sep 10 00:08:51.012654 systemd[1]: Reloading requested from client PID 1151 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 10 00:08:51.012672 systemd[1]: Reloading...
Sep 10 00:08:51.068589 zram_generator::config[1202]: No configuration found.
Sep 10 00:08:51.121531 ldconfig[1146]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 10 00:08:51.171513 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 00:08:51.207776 systemd[1]: Reloading finished in 194 ms.
Sep 10 00:08:51.239183 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 10 00:08:51.242223 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 10 00:08:51.251219 systemd[1]: Starting ensure-sysext.service...
Sep 10 00:08:51.252875 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 00:08:51.259620 systemd[1]: Reloading requested from client PID 1239 ('systemctl') (unit ensure-sysext.service)...
Sep 10 00:08:51.259635 systemd[1]: Reloading...
Sep 10 00:08:51.270004 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 10 00:08:51.270725 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 10 00:08:51.271474 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 10 00:08:51.271808 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
Sep 10 00:08:51.271866 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
Sep 10 00:08:51.274372 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 00:08:51.274468 systemd-tmpfiles[1240]: Skipping /boot
Sep 10 00:08:51.281694 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 00:08:51.281707 systemd-tmpfiles[1240]: Skipping /boot
Sep 10 00:08:51.306171 zram_generator::config[1273]: No configuration found.
Sep 10 00:08:51.382368 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 00:08:51.417909 systemd[1]: Reloading finished in 157 ms.
Sep 10 00:08:51.443102 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 10 00:08:51.460525 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 00:08:51.467585 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 10 00:08:51.469978 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 10 00:08:51.472098 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 10 00:08:51.476372 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 00:08:51.488294 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 00:08:51.491054 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 10 00:08:51.494090 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 10 00:08:51.497859 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 00:08:51.512352 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 00:08:51.514650 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 00:08:51.515902 systemd-udevd[1314]: Using default interface naming scheme 'v255'.
Sep 10 00:08:51.517246 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 00:08:51.518105 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 00:08:51.518566 augenrules[1327]: No rules
Sep 10 00:08:51.519599 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 10 00:08:51.522006 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 10 00:08:51.523907 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 10 00:08:51.527070 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 10 00:08:51.528530 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 00:08:51.528661 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 00:08:51.530295 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 00:08:51.530444 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 00:08:51.531895 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 00:08:51.532024 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 00:08:51.533595 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 10 00:08:51.536929 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 00:08:51.547871 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 10 00:08:51.553614 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 00:08:51.565533 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 00:08:51.569033 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 00:08:51.573476 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 00:08:51.576308 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 00:08:51.577799 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 00:08:51.579983 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 00:08:51.581630 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 10 00:08:51.582555 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 10 00:08:51.585176 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 00:08:51.585312 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 00:08:51.586624 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 00:08:51.586776 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 00:08:51.588150 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 00:08:51.588287 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 00:08:51.589697 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 00:08:51.589822 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 00:08:51.592981 systemd[1]: Finished ensure-sysext.service.
Sep 10 00:08:51.593110 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1357)
Sep 10 00:08:51.603308 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 10 00:08:51.606345 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 00:08:51.606406 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 00:08:51.619229 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 10 00:08:51.639170 systemd-resolved[1307]: Positive Trust Anchors:
Sep 10 00:08:51.639520 systemd-resolved[1307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 00:08:51.639606 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 00:08:51.647956 systemd-resolved[1307]: Defaulting to hostname 'linux'.
Sep 10 00:08:51.649660 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 00:08:51.651293 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 00:08:51.651603 systemd-networkd[1370]: lo: Link UP
Sep 10 00:08:51.651613 systemd-networkd[1370]: lo: Gained carrier
Sep 10 00:08:51.652328 systemd-networkd[1370]: Enumeration completed
Sep 10 00:08:51.652421 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 00:08:51.652970 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:08:51.652978 systemd-networkd[1370]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 00:08:51.653820 systemd-networkd[1370]: eth0: Link UP
Sep 10 00:08:51.653828 systemd-networkd[1370]: eth0: Gained carrier
Sep 10 00:08:51.653861 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:08:51.654468 systemd[1]: Reached target network.target - Network.
Sep 10 00:08:51.660322 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:08:51.662217 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 10 00:08:51.665868 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 00:08:51.668439 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 10 00:08:51.669903 systemd-networkd[1370]: eth0: DHCPv4 address 10.0.0.85/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 10 00:08:51.682523 systemd-timesyncd[1379]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 10 00:08:51.682572 systemd-timesyncd[1379]: Initial clock synchronization to Wed 2025-09-10 00:08:51.762322 UTC.
Sep 10 00:08:51.682664 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 10 00:08:51.683785 systemd[1]: Reached target time-set.target - System Time Set.
Sep 10 00:08:51.691111 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 10 00:08:51.721308 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:08:51.726897 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 10 00:08:51.732221 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 10 00:08:51.745217 lvm[1396]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 10 00:08:51.760861 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:08:51.779553 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 10 00:08:51.780755 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 00:08:51.783142 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 10 00:08:51.783978 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 10 00:08:51.785011 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 10 00:08:51.786144 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 10 00:08:51.787034 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 10 00:08:51.787948 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 10 00:08:51.789108 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 10 00:08:51.789145 systemd[1]: Reached target paths.target - Path Units.
Sep 10 00:08:51.789802 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 00:08:51.791471 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 10 00:08:51.793680 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 10 00:08:51.802020 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 10 00:08:51.804027 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 10 00:08:51.805353 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 10 00:08:51.806300 systemd[1]: Reached target sockets.target - Socket Units.
Sep 10 00:08:51.807017 systemd[1]: Reached target basic.target - Basic System.
Sep 10 00:08:51.807733 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 10 00:08:51.807763 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 10 00:08:51.808703 systemd[1]: Starting containerd.service - containerd container runtime... Sep 10 00:08:51.810610 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 10 00:08:51.813180 lvm[1403]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 10 00:08:51.815220 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 10 00:08:51.817834 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 10 00:08:51.818932 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 10 00:08:51.822239 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 10 00:08:51.823643 jq[1406]: false Sep 10 00:08:51.825210 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 10 00:08:51.827196 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 10 00:08:51.832097 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 10 00:08:51.836933 dbus-daemon[1405]: [system] SELinux support is enabled Sep 10 00:08:51.838294 systemd[1]: Starting systemd-logind.service - User Login Management... 
Sep 10 00:08:51.839558 extend-filesystems[1407]: Found loop3 Sep 10 00:08:51.839558 extend-filesystems[1407]: Found loop4 Sep 10 00:08:51.841612 extend-filesystems[1407]: Found loop5 Sep 10 00:08:51.841612 extend-filesystems[1407]: Found vda Sep 10 00:08:51.841612 extend-filesystems[1407]: Found vda1 Sep 10 00:08:51.841612 extend-filesystems[1407]: Found vda2 Sep 10 00:08:51.841612 extend-filesystems[1407]: Found vda3 Sep 10 00:08:51.841612 extend-filesystems[1407]: Found usr Sep 10 00:08:51.841612 extend-filesystems[1407]: Found vda4 Sep 10 00:08:51.841612 extend-filesystems[1407]: Found vda6 Sep 10 00:08:51.841612 extend-filesystems[1407]: Found vda7 Sep 10 00:08:51.841612 extend-filesystems[1407]: Found vda9 Sep 10 00:08:51.841612 extend-filesystems[1407]: Checking size of /dev/vda9 Sep 10 00:08:51.839770 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 10 00:08:51.840209 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 10 00:08:51.840852 systemd[1]: Starting update-engine.service - Update Engine... Sep 10 00:08:51.853084 jq[1422]: true Sep 10 00:08:51.845233 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 10 00:08:51.847348 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 10 00:08:51.853427 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 10 00:08:51.860532 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 10 00:08:51.860774 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 10 00:08:51.861082 systemd[1]: motdgen.service: Deactivated successfully. Sep 10 00:08:51.861246 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Sep 10 00:08:51.863942 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 10 00:08:51.864129 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 10 00:08:51.868192 extend-filesystems[1407]: Resized partition /dev/vda9 Sep 10 00:08:51.876766 update_engine[1420]: I20250910 00:08:51.876547 1420 main.cc:92] Flatcar Update Engine starting Sep 10 00:08:51.882140 jq[1431]: true Sep 10 00:08:51.882370 update_engine[1420]: I20250910 00:08:51.881168 1420 update_check_scheduler.cc:74] Next update check in 5m0s Sep 10 00:08:51.884236 systemd[1]: Started update-engine.service - Update Engine. Sep 10 00:08:51.884756 (ntainerd)[1439]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 10 00:08:51.885439 extend-filesystems[1433]: resize2fs 1.47.1 (20-May-2024) Sep 10 00:08:51.887853 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 10 00:08:51.887878 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 10 00:08:51.888998 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 10 00:08:51.889015 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Sep 10 00:08:51.896185 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 10 00:08:51.900862 systemd-logind[1418]: Watching system buttons on /dev/input/event0 (Power Button) Sep 10 00:08:51.918635 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1360) Sep 10 00:08:51.918699 tar[1429]: linux-arm64/helm Sep 10 00:08:51.901160 systemd-logind[1418]: New seat seat0. Sep 10 00:08:51.909374 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 10 00:08:51.910627 systemd[1]: Started systemd-logind.service - User Login Management. Sep 10 00:08:51.929504 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 10 00:08:51.949416 extend-filesystems[1433]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 10 00:08:51.949416 extend-filesystems[1433]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 10 00:08:51.949416 extend-filesystems[1433]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 10 00:08:51.952503 extend-filesystems[1407]: Resized filesystem in /dev/vda9 Sep 10 00:08:51.955268 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 10 00:08:51.955471 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 10 00:08:51.959784 bash[1459]: Updated "/home/core/.ssh/authorized_keys" Sep 10 00:08:51.963970 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 10 00:08:51.966723 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Sep 10 00:08:51.968002 locksmithd[1446]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 00:08:52.037480 containerd[1439]: time="2025-09-10T00:08:52.037401411Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 10 00:08:52.063750 containerd[1439]: time="2025-09-10T00:08:52.063703268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:08:52.065142 containerd[1439]: time="2025-09-10T00:08:52.065089495Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:08:52.065142 containerd[1439]: time="2025-09-10T00:08:52.065122137Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 10 00:08:52.065208 containerd[1439]: time="2025-09-10T00:08:52.065147549Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 10 00:08:52.065323 containerd[1439]: time="2025-09-10T00:08:52.065299855Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 10 00:08:52.065368 containerd[1439]: time="2025-09-10T00:08:52.065324741Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 10 00:08:52.065394 containerd[1439]: time="2025-09-10T00:08:52.065374917Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:08:52.065394 containerd[1439]: time="2025-09-10T00:08:52.065387198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Sep 10 00:08:52.065558 containerd[1439]: time="2025-09-10T00:08:52.065535505Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:08:52.065558 containerd[1439]: time="2025-09-10T00:08:52.065557280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 10 00:08:52.065608 containerd[1439]: time="2025-09-10T00:08:52.065569804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:08:52.065608 containerd[1439]: time="2025-09-10T00:08:52.065579540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 10 00:08:52.065692 containerd[1439]: time="2025-09-10T00:08:52.065648785Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:08:52.065877 containerd[1439]: time="2025-09-10T00:08:52.065830946Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:08:52.065950 containerd[1439]: time="2025-09-10T00:08:52.065931581Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:08:52.065971 containerd[1439]: time="2025-09-10T00:08:52.065950326Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Sep 10 00:08:52.066051 containerd[1439]: time="2025-09-10T00:08:52.066035125Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 10 00:08:52.066112 containerd[1439]: time="2025-09-10T00:08:52.066097461Z" level=info msg="metadata content store policy set" policy=shared Sep 10 00:08:52.069672 containerd[1439]: time="2025-09-10T00:08:52.069642957Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 10 00:08:52.069785 containerd[1439]: time="2025-09-10T00:08:52.069688487Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 10 00:08:52.069785 containerd[1439]: time="2025-09-10T00:08:52.069704647Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 10 00:08:52.069785 containerd[1439]: time="2025-09-10T00:08:52.069720241Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 10 00:08:52.069785 containerd[1439]: time="2025-09-10T00:08:52.069741935Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 10 00:08:52.069928 containerd[1439]: time="2025-09-10T00:08:52.069893272Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 10 00:08:52.070139 containerd[1439]: time="2025-09-10T00:08:52.070119711Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 10 00:08:52.070268 containerd[1439]: time="2025-09-10T00:08:52.070247332Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 10 00:08:52.070299 containerd[1439]: time="2025-09-10T00:08:52.070269835Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Sep 10 00:08:52.070299 containerd[1439]: time="2025-09-10T00:08:52.070292095Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 10 00:08:52.070334 containerd[1439]: time="2025-09-10T00:08:52.070305669Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 10 00:08:52.070334 containerd[1439]: time="2025-09-10T00:08:52.070319809Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 10 00:08:52.070376 containerd[1439]: time="2025-09-10T00:08:52.070332131Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 10 00:08:52.070376 containerd[1439]: time="2025-09-10T00:08:52.070346634Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 10 00:08:52.070376 containerd[1439]: time="2025-09-10T00:08:52.070361218Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 10 00:08:52.070376 containerd[1439]: time="2025-09-10T00:08:52.070373419Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 10 00:08:52.070449 containerd[1439]: time="2025-09-10T00:08:52.070386064Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 10 00:08:52.070449 containerd[1439]: time="2025-09-10T00:08:52.070397982Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 10 00:08:52.070449 containerd[1439]: time="2025-09-10T00:08:52.070421212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Sep 10 00:08:52.070449 containerd[1439]: time="2025-09-10T00:08:52.070435513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 10 00:08:52.070449 containerd[1439]: time="2025-09-10T00:08:52.070447875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 10 00:08:52.070548 containerd[1439]: time="2025-09-10T00:08:52.070459470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 10 00:08:52.070548 containerd[1439]: time="2025-09-10T00:08:52.070471549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 10 00:08:52.070548 containerd[1439]: time="2025-09-10T00:08:52.070484194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 10 00:08:52.070548 containerd[1439]: time="2025-09-10T00:08:52.070495466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 10 00:08:52.070548 containerd[1439]: time="2025-09-10T00:08:52.070507424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 10 00:08:52.070548 containerd[1439]: time="2025-09-10T00:08:52.070523341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 10 00:08:52.070548 containerd[1439]: time="2025-09-10T00:08:52.070537885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 10 00:08:52.070548 containerd[1439]: time="2025-09-10T00:08:52.070549924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 10 00:08:52.070684 containerd[1439]: time="2025-09-10T00:08:52.070562448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Sep 10 00:08:52.070684 containerd[1439]: time="2025-09-10T00:08:52.070574164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 10 00:08:52.070684 containerd[1439]: time="2025-09-10T00:08:52.070589677Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 10 00:08:52.070684 containerd[1439]: time="2025-09-10T00:08:52.070612261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 10 00:08:52.070684 containerd[1439]: time="2025-09-10T00:08:52.070623896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 10 00:08:52.070684 containerd[1439]: time="2025-09-10T00:08:52.070634642Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 10 00:08:52.070791 containerd[1439]: time="2025-09-10T00:08:52.070742468Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 10 00:08:52.070791 containerd[1439]: time="2025-09-10T00:08:52.070759759Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 10 00:08:52.070791 containerd[1439]: time="2025-09-10T00:08:52.070771475Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 10 00:08:52.070791 containerd[1439]: time="2025-09-10T00:08:52.070782989Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 10 00:08:52.070859 containerd[1439]: time="2025-09-10T00:08:52.070792038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Sep 10 00:08:52.070859 containerd[1439]: time="2025-09-10T00:08:52.070804198Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 10 00:08:52.070859 containerd[1439]: time="2025-09-10T00:08:52.070813086Z" level=info msg="NRI interface is disabled by configuration." Sep 10 00:08:52.070859 containerd[1439]: time="2025-09-10T00:08:52.070823792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 10 00:08:52.071338 containerd[1439]: time="2025-09-10T00:08:52.071177126Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 10 00:08:52.071338 containerd[1439]: time="2025-09-10T00:08:52.071249562Z" level=info msg="Connect containerd service" Sep 10 00:08:52.071338 containerd[1439]: time="2025-09-10T00:08:52.071291335Z" level=info msg="using legacy CRI server" Sep 10 00:08:52.071338 containerd[1439]: time="2025-09-10T00:08:52.071298688Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 10 00:08:52.071692 containerd[1439]: time="2025-09-10T00:08:52.071391566Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 10 00:08:52.072024 containerd[1439]: time="2025-09-10T00:08:52.071971096Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Sep 10 00:08:52.072233 containerd[1439]: time="2025-09-10T00:08:52.072194505Z" level=info msg="Start subscribing containerd event" Sep 10 00:08:52.072269 containerd[1439]: time="2025-09-10T00:08:52.072248277Z" level=info msg="Start recovering state" Sep 10 00:08:52.073206 containerd[1439]: time="2025-09-10T00:08:52.072315017Z" level=info msg="Start event monitor" Sep 10 00:08:52.073206 containerd[1439]: time="2025-09-10T00:08:52.072347457Z" level=info msg="Start snapshots syncer" Sep 10 00:08:52.073206 containerd[1439]: time="2025-09-10T00:08:52.072357476Z" level=info msg="Start cni network conf syncer for default" Sep 10 00:08:52.073206 containerd[1439]: time="2025-09-10T00:08:52.072364950Z" level=info msg="Start streaming server" Sep 10 00:08:52.073206 containerd[1439]: time="2025-09-10T00:08:52.072711618Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 10 00:08:52.073206 containerd[1439]: time="2025-09-10T00:08:52.072759451Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 10 00:08:52.073206 containerd[1439]: time="2025-09-10T00:08:52.072824292Z" level=info msg="containerd successfully booted in 0.036179s" Sep 10 00:08:52.072908 systemd[1]: Started containerd.service - containerd container runtime. Sep 10 00:08:52.265423 tar[1429]: linux-arm64/LICENSE Sep 10 00:08:52.265607 tar[1429]: linux-arm64/README.md Sep 10 00:08:52.278353 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 10 00:08:52.755754 sshd_keygen[1425]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 10 00:08:52.779549 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 10 00:08:52.790421 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 10 00:08:52.796206 systemd[1]: issuegen.service: Deactivated successfully. Sep 10 00:08:52.798072 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Sep 10 00:08:52.801236 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 10 00:08:52.815120 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 10 00:08:52.824463 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 10 00:08:52.827009 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 10 00:08:52.828726 systemd[1]: Reached target getty.target - Login Prompts. Sep 10 00:08:53.061780 systemd-networkd[1370]: eth0: Gained IPv6LL Sep 10 00:08:53.064694 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 10 00:08:53.067710 systemd[1]: Reached target network-online.target - Network is Online. Sep 10 00:08:53.076345 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 10 00:08:53.078523 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:08:53.080276 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 10 00:08:53.095429 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 10 00:08:53.095636 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 10 00:08:53.097041 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 10 00:08:53.099321 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 10 00:08:53.641063 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:08:53.642372 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 10 00:08:53.645114 (kubelet)[1517]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:08:53.647153 systemd[1]: Startup finished in 528ms (kernel) + 5.596s (initrd) + 3.388s (userspace) = 9.513s. 
Sep 10 00:08:54.040171 kubelet[1517]: E0910 00:08:54.040070 1517 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:08:54.042707 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:08:54.042856 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:08:57.423234 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 10 00:08:57.424352 systemd[1]: Started sshd@0-10.0.0.85:22-10.0.0.1:46568.service - OpenSSH per-connection server daemon (10.0.0.1:46568). Sep 10 00:08:57.474409 sshd[1531]: Accepted publickey for core from 10.0.0.1 port 46568 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:08:57.476252 sshd[1531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:08:57.483604 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 10 00:08:57.492282 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 10 00:08:57.494077 systemd-logind[1418]: New session 1 of user core. Sep 10 00:08:57.501277 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 10 00:08:57.503350 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 10 00:08:57.509451 (systemd)[1535]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:08:57.590916 systemd[1535]: Queued start job for default target default.target. Sep 10 00:08:57.601954 systemd[1535]: Created slice app.slice - User Application Slice. Sep 10 00:08:57.601983 systemd[1535]: Reached target paths.target - Paths. Sep 10 00:08:57.601995 systemd[1535]: Reached target timers.target - Timers. 
Sep 10 00:08:57.603231 systemd[1535]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 10 00:08:57.612783 systemd[1535]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 10 00:08:57.612840 systemd[1535]: Reached target sockets.target - Sockets. Sep 10 00:08:57.612853 systemd[1535]: Reached target basic.target - Basic System. Sep 10 00:08:57.612885 systemd[1535]: Reached target default.target - Main User Target. Sep 10 00:08:57.612910 systemd[1535]: Startup finished in 98ms. Sep 10 00:08:57.613208 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 10 00:08:57.614675 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 10 00:08:57.675670 systemd[1]: Started sshd@1-10.0.0.85:22-10.0.0.1:46574.service - OpenSSH per-connection server daemon (10.0.0.1:46574). Sep 10 00:08:57.707496 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 46574 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:08:57.708743 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:08:57.713226 systemd-logind[1418]: New session 2 of user core. Sep 10 00:08:57.723216 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 10 00:08:57.775780 sshd[1546]: pam_unix(sshd:session): session closed for user core Sep 10 00:08:57.796408 systemd[1]: sshd@1-10.0.0.85:22-10.0.0.1:46574.service: Deactivated successfully. Sep 10 00:08:57.797801 systemd[1]: session-2.scope: Deactivated successfully. Sep 10 00:08:57.800125 systemd-logind[1418]: Session 2 logged out. Waiting for processes to exit. Sep 10 00:08:57.801285 systemd[1]: Started sshd@2-10.0.0.85:22-10.0.0.1:46576.service - OpenSSH per-connection server daemon (10.0.0.1:46576). Sep 10 00:08:57.801933 systemd-logind[1418]: Removed session 2. 
Sep 10 00:08:57.834192 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 46576 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:08:57.835509 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:08:57.839659 systemd-logind[1418]: New session 3 of user core. Sep 10 00:08:57.853206 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 10 00:08:57.902470 sshd[1553]: pam_unix(sshd:session): session closed for user core Sep 10 00:08:57.915501 systemd[1]: sshd@2-10.0.0.85:22-10.0.0.1:46576.service: Deactivated successfully. Sep 10 00:08:57.916988 systemd[1]: session-3.scope: Deactivated successfully. Sep 10 00:08:57.919343 systemd-logind[1418]: Session 3 logged out. Waiting for processes to exit. Sep 10 00:08:57.920614 systemd[1]: Started sshd@3-10.0.0.85:22-10.0.0.1:46582.service - OpenSSH per-connection server daemon (10.0.0.1:46582). Sep 10 00:08:57.921500 systemd-logind[1418]: Removed session 3. Sep 10 00:08:57.954541 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 46582 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:08:57.955916 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:08:57.960033 systemd-logind[1418]: New session 4 of user core. Sep 10 00:08:57.968221 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 10 00:08:58.022335 sshd[1560]: pam_unix(sshd:session): session closed for user core Sep 10 00:08:58.035290 systemd[1]: sshd@3-10.0.0.85:22-10.0.0.1:46582.service: Deactivated successfully. Sep 10 00:08:58.036627 systemd[1]: session-4.scope: Deactivated successfully. Sep 10 00:08:58.039197 systemd-logind[1418]: Session 4 logged out. Waiting for processes to exit. Sep 10 00:08:58.040342 systemd[1]: Started sshd@4-10.0.0.85:22-10.0.0.1:46592.service - OpenSSH per-connection server daemon (10.0.0.1:46592). Sep 10 00:08:58.041123 systemd-logind[1418]: Removed session 4. 
Sep 10 00:08:58.073014 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 46592 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:08:58.074289 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:08:58.078239 systemd-logind[1418]: New session 5 of user core. Sep 10 00:08:58.088265 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 10 00:08:58.146013 sudo[1570]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 10 00:08:58.146318 sudo[1570]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:08:58.159937 sudo[1570]: pam_unix(sudo:session): session closed for user root Sep 10 00:08:58.161870 sshd[1567]: pam_unix(sshd:session): session closed for user core Sep 10 00:08:58.173577 systemd[1]: sshd@4-10.0.0.85:22-10.0.0.1:46592.service: Deactivated successfully. Sep 10 00:08:58.175778 systemd[1]: session-5.scope: Deactivated successfully. Sep 10 00:08:58.177735 systemd-logind[1418]: Session 5 logged out. Waiting for processes to exit. Sep 10 00:08:58.179395 systemd[1]: Started sshd@5-10.0.0.85:22-10.0.0.1:46602.service - OpenSSH per-connection server daemon (10.0.0.1:46602). Sep 10 00:08:58.180439 systemd-logind[1418]: Removed session 5. Sep 10 00:08:58.213133 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 46602 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:08:58.214477 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:08:58.219295 systemd-logind[1418]: New session 6 of user core. Sep 10 00:08:58.227251 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 10 00:08:58.278723 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 10 00:08:58.279295 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 00:08:58.282568 sudo[1579]: pam_unix(sudo:session): session closed for user root
Sep 10 00:08:58.287281 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 10 00:08:58.287539 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 00:08:58.307476 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 10 00:08:58.308414 auditctl[1582]: No rules
Sep 10 00:08:58.308755 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 10 00:08:58.308918 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 10 00:08:58.310959 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 10 00:08:58.333699 augenrules[1600]: No rules
Sep 10 00:08:58.334881 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 10 00:08:58.336114 sudo[1578]: pam_unix(sudo:session): session closed for user root
Sep 10 00:08:58.337478 sshd[1575]: pam_unix(sshd:session): session closed for user core
Sep 10 00:08:58.357352 systemd[1]: sshd@5-10.0.0.85:22-10.0.0.1:46602.service: Deactivated successfully.
Sep 10 00:08:58.358744 systemd[1]: session-6.scope: Deactivated successfully.
Sep 10 00:08:58.359886 systemd-logind[1418]: Session 6 logged out. Waiting for processes to exit.
Sep 10 00:08:58.372307 systemd[1]: Started sshd@6-10.0.0.85:22-10.0.0.1:46616.service - OpenSSH per-connection server daemon (10.0.0.1:46616).
Sep 10 00:08:58.373163 systemd-logind[1418]: Removed session 6.
Sep 10 00:08:58.401705 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 46616 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:08:58.403140 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:08:58.406769 systemd-logind[1418]: New session 7 of user core.
Sep 10 00:08:58.415207 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 10 00:08:58.466273 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 10 00:08:58.467230 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 00:08:58.730277 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 10 00:08:58.730381 (dockerd)[1629]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 10 00:08:58.945763 dockerd[1629]: time="2025-09-10T00:08:58.945694374Z" level=info msg="Starting up"
Sep 10 00:08:59.104138 dockerd[1629]: time="2025-09-10T00:08:59.104029691Z" level=info msg="Loading containers: start."
Sep 10 00:08:59.190074 kernel: Initializing XFRM netlink socket
Sep 10 00:08:59.256961 systemd-networkd[1370]: docker0: Link UP
Sep 10 00:08:59.277324 dockerd[1629]: time="2025-09-10T00:08:59.277285183Z" level=info msg="Loading containers: done."
Sep 10 00:08:59.287695 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2043167245-merged.mount: Deactivated successfully.
Sep 10 00:08:59.288679 dockerd[1629]: time="2025-09-10T00:08:59.288617971Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 10 00:08:59.288748 dockerd[1629]: time="2025-09-10T00:08:59.288718243Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 10 00:08:59.288829 dockerd[1629]: time="2025-09-10T00:08:59.288812049Z" level=info msg="Daemon has completed initialization"
Sep 10 00:08:59.319550 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 10 00:08:59.320024 dockerd[1629]: time="2025-09-10T00:08:59.319894423Z" level=info msg="API listen on /run/docker.sock"
Sep 10 00:08:59.836192 containerd[1439]: time="2025-09-10T00:08:59.835909967Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\""
Sep 10 00:09:00.465569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3825231266.mount: Deactivated successfully.
Sep 10 00:09:01.315230 containerd[1439]: time="2025-09-10T00:09:01.314526449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:09:01.315230 containerd[1439]: time="2025-09-10T00:09:01.314927931Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652443"
Sep 10 00:09:01.315749 containerd[1439]: time="2025-09-10T00:09:01.315709551Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:09:01.320322 containerd[1439]: time="2025-09-10T00:09:01.320284846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:09:01.321569 containerd[1439]: time="2025-09-10T00:09:01.321328330Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 1.485375407s"
Sep 10 00:09:01.321569 containerd[1439]: time="2025-09-10T00:09:01.321371379Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\""
Sep 10 00:09:01.322649 containerd[1439]: time="2025-09-10T00:09:01.322605353Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\""
Sep 10 00:09:02.319693 containerd[1439]: time="2025-09-10T00:09:02.319646744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:09:02.320948 containerd[1439]: time="2025-09-10T00:09:02.320901672Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460311"
Sep 10 00:09:02.322998 containerd[1439]: time="2025-09-10T00:09:02.322955814Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:09:02.325850 containerd[1439]: time="2025-09-10T00:09:02.325813462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:09:02.327220 containerd[1439]: time="2025-09-10T00:09:02.327101196Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 1.004371635s"
Sep 10 00:09:02.327220 containerd[1439]: time="2025-09-10T00:09:02.327135125Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\""
Sep 10 00:09:02.327815 containerd[1439]: time="2025-09-10T00:09:02.327637923Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\""
Sep 10 00:09:03.314844 containerd[1439]: time="2025-09-10T00:09:03.314780181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:09:03.315180 containerd[1439]: time="2025-09-10T00:09:03.315124692Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125905"
Sep 10 00:09:03.316098 containerd[1439]: time="2025-09-10T00:09:03.316037625Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:09:03.322896 containerd[1439]: time="2025-09-10T00:09:03.321695962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:09:03.323303 containerd[1439]: time="2025-09-10T00:09:03.322900485Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 995.229479ms"
Sep 10 00:09:03.323303 containerd[1439]: time="2025-09-10T00:09:03.322940136Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\""
Sep 10 00:09:03.323603 containerd[1439]: time="2025-09-10T00:09:03.323569820Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 10 00:09:04.293134 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 10 00:09:04.303245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 00:09:04.304330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1407350532.mount: Deactivated successfully.
Sep 10 00:09:04.413082 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 00:09:04.417189 (kubelet)[1856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 10 00:09:04.454560 kubelet[1856]: E0910 00:09:04.454503 1856 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 10 00:09:04.458259 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 00:09:04.458401 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 10 00:09:04.762029 containerd[1439]: time="2025-09-10T00:09:04.761283120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:09:04.762391 containerd[1439]: time="2025-09-10T00:09:04.762103326Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916097"
Sep 10 00:09:04.763503 containerd[1439]: time="2025-09-10T00:09:04.763458968Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:09:04.765518 containerd[1439]: time="2025-09-10T00:09:04.765480586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:09:04.766148 containerd[1439]: time="2025-09-10T00:09:04.766105560Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 1.442421161s"
Sep 10 00:09:04.766194 containerd[1439]: time="2025-09-10T00:09:04.766149649Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\""
Sep 10 00:09:04.766794 containerd[1439]: time="2025-09-10T00:09:04.766765164Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 10 00:09:05.356173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3859196891.mount: Deactivated successfully.
Sep 10 00:09:06.109991 containerd[1439]: time="2025-09-10T00:09:06.109285349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:09:06.111008 containerd[1439]: time="2025-09-10T00:09:06.110978753Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 10 00:09:06.112128 containerd[1439]: time="2025-09-10T00:09:06.112097953Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:09:06.116074 containerd[1439]: time="2025-09-10T00:09:06.115062151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:09:06.116507 containerd[1439]: time="2025-09-10T00:09:06.116375611Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.349489932s"
Sep 10 00:09:06.116507 containerd[1439]: time="2025-09-10T00:09:06.116412027Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 10 00:09:06.117425 containerd[1439]: time="2025-09-10T00:09:06.117402349Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 10 00:09:06.561090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2255515798.mount: Deactivated successfully.
Sep 10 00:09:06.567398 containerd[1439]: time="2025-09-10T00:09:06.567339252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:09:06.568194 containerd[1439]: time="2025-09-10T00:09:06.568151541Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 10 00:09:06.568943 containerd[1439]: time="2025-09-10T00:09:06.568903137Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:09:06.571674 containerd[1439]: time="2025-09-10T00:09:06.571640025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:09:06.575144 containerd[1439]: time="2025-09-10T00:09:06.572594933Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 455.161897ms"
Sep 10 00:09:06.575144 containerd[1439]: time="2025-09-10T00:09:06.572643488Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 10 00:09:06.575144 containerd[1439]: time="2025-09-10T00:09:06.573196498Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 10 00:09:07.139887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2912589405.mount: Deactivated successfully.
Sep 10 00:09:08.711163 containerd[1439]: time="2025-09-10T00:09:08.711112998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:09:08.712305 containerd[1439]: time="2025-09-10T00:09:08.712274566Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163"
Sep 10 00:09:08.713068 containerd[1439]: time="2025-09-10T00:09:08.712987405Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:09:08.716366 containerd[1439]: time="2025-09-10T00:09:08.716332344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:09:08.718793 containerd[1439]: time="2025-09-10T00:09:08.718737377Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.145498857s"
Sep 10 00:09:08.718863 containerd[1439]: time="2025-09-10T00:09:08.718789758Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 10 00:09:14.161417 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 00:09:14.186175 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 00:09:14.216026 systemd[1]: Reloading requested from client PID 2004 ('systemctl') (unit session-7.scope)...
Sep 10 00:09:14.216055 systemd[1]: Reloading...
Sep 10 00:09:14.305067 zram_generator::config[2044]: No configuration found.
Sep 10 00:09:14.470767 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 00:09:14.524849 systemd[1]: Reloading finished in 308 ms.
Sep 10 00:09:14.580748 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 10 00:09:14.580832 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 10 00:09:14.581182 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 00:09:14.583725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 00:09:14.695852 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 00:09:14.701120 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 10 00:09:14.746335 kubelet[2088]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 00:09:14.746335 kubelet[2088]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 10 00:09:14.746335 kubelet[2088]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 00:09:14.746335 kubelet[2088]: I0910 00:09:14.746296 2088 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 10 00:09:15.402347 kubelet[2088]: I0910 00:09:15.402277 2088 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 10 00:09:15.402347 kubelet[2088]: I0910 00:09:15.402320 2088 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 10 00:09:15.402731 kubelet[2088]: I0910 00:09:15.402567 2088 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 10 00:09:15.421722 kubelet[2088]: E0910 00:09:15.421616 2088 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.85:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:09:15.423442 kubelet[2088]: I0910 00:09:15.423367 2088 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 10 00:09:15.434109 kubelet[2088]: E0910 00:09:15.434036 2088 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 10 00:09:15.434109 kubelet[2088]: I0910 00:09:15.434094 2088 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 10 00:09:15.438191 kubelet[2088]: I0910 00:09:15.438144 2088 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 10 00:09:15.438965 kubelet[2088]: I0910 00:09:15.438918 2088 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 10 00:09:15.439148 kubelet[2088]: I0910 00:09:15.439100 2088 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 10 00:09:15.439333 kubelet[2088]: I0910 00:09:15.439130 2088 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 10 00:09:15.440583 kubelet[2088]: I0910 00:09:15.439439 2088 topology_manager.go:138] "Creating topology manager with none policy"
Sep 10 00:09:15.440583 kubelet[2088]: I0910 00:09:15.439449 2088 container_manager_linux.go:300] "Creating device plugin manager"
Sep 10 00:09:15.440583 kubelet[2088]: I0910 00:09:15.439696 2088 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 00:09:15.443683 kubelet[2088]: I0910 00:09:15.443327 2088 kubelet.go:408] "Attempting to sync node with API server"
Sep 10 00:09:15.443683 kubelet[2088]: I0910 00:09:15.443368 2088 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 10 00:09:15.443683 kubelet[2088]: I0910 00:09:15.443392 2088 kubelet.go:314] "Adding apiserver pod source"
Sep 10 00:09:15.443683 kubelet[2088]: I0910 00:09:15.443470 2088 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 10 00:09:15.445679 kubelet[2088]: W0910 00:09:15.445502 2088 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Sep 10 00:09:15.445679 kubelet[2088]: E0910 00:09:15.445646 2088 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:09:15.448195 kubelet[2088]: W0910 00:09:15.448135 2088 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Sep 10 00:09:15.448284 kubelet[2088]: E0910 00:09:15.448209 2088 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:09:15.452139 kubelet[2088]: I0910 00:09:15.449742 2088 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 10 00:09:15.452139 kubelet[2088]: I0910 00:09:15.450479 2088 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 10 00:09:15.452139 kubelet[2088]: W0910 00:09:15.450720 2088 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 10 00:09:15.452139 kubelet[2088]: I0910 00:09:15.451770 2088 server.go:1274] "Started kubelet"
Sep 10 00:09:15.452291 kubelet[2088]: I0910 00:09:15.452244 2088 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 10 00:09:15.452806 kubelet[2088]: I0910 00:09:15.452756 2088 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 10 00:09:15.455269 kubelet[2088]: I0910 00:09:15.453585 2088 server.go:449] "Adding debug handlers to kubelet server"
Sep 10 00:09:15.455938 kubelet[2088]: I0910 00:09:15.455791 2088 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 10 00:09:15.457033 kubelet[2088]: I0910 00:09:15.456446 2088 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 10 00:09:15.457033 kubelet[2088]: I0910 00:09:15.456808 2088 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 10 00:09:15.457772 kubelet[2088]: E0910 00:09:15.455135 2088 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.85:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.85:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c33f1d9bfa8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 00:09:15.451742862 +0000 UTC m=+0.747316330,LastTimestamp:2025-09-10 00:09:15.451742862 +0000 UTC m=+0.747316330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 10 00:09:15.457772 kubelet[2088]: I0910 00:09:15.457443 2088 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 10 00:09:15.457772 kubelet[2088]: I0910 00:09:15.457559 2088 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 10 00:09:15.457772 kubelet[2088]: I0910 00:09:15.457611 2088 reconciler.go:26] "Reconciler: start to sync state"
Sep 10 00:09:15.458990 kubelet[2088]: W0910 00:09:15.457974 2088 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Sep 10 00:09:15.458990 kubelet[2088]: E0910 00:09:15.458029 2088 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:09:15.458990 kubelet[2088]: E0910 00:09:15.458739 2088 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:09:15.460774 kubelet[2088]: E0910 00:09:15.459974 2088 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="200ms"
Sep 10 00:09:15.460774 kubelet[2088]: E0910 00:09:15.460091 2088 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 10 00:09:15.462326 kubelet[2088]: I0910 00:09:15.461223 2088 factory.go:221] Registration of the containerd container factory successfully
Sep 10 00:09:15.462326 kubelet[2088]: I0910 00:09:15.461241 2088 factory.go:221] Registration of the systemd container factory successfully
Sep 10 00:09:15.462326 kubelet[2088]: I0910 00:09:15.461331 2088 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 10 00:09:15.481264 kubelet[2088]: I0910 00:09:15.481233 2088 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 10 00:09:15.481264 kubelet[2088]: I0910 00:09:15.481256 2088 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 10 00:09:15.481397 kubelet[2088]: I0910 00:09:15.481277 2088 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 00:09:15.481587 kubelet[2088]: I0910 00:09:15.481519 2088 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 10 00:09:15.482729 kubelet[2088]: I0910 00:09:15.482694 2088 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 10 00:09:15.482729 kubelet[2088]: I0910 00:09:15.482727 2088 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 10 00:09:15.483847 kubelet[2088]: I0910 00:09:15.482744 2088 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 10 00:09:15.483847 kubelet[2088]: E0910 00:09:15.482793 2088 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 10 00:09:15.483847 kubelet[2088]: W0910 00:09:15.483240 2088 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Sep 10 00:09:15.483847 kubelet[2088]: E0910 00:09:15.483279 2088 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:09:15.559028 kubelet[2088]: E0910 00:09:15.558954 2088 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:09:15.569403 kubelet[2088]: I0910 00:09:15.569360 2088 policy_none.go:49] "None policy: Start"
Sep 10 00:09:15.570250 kubelet[2088]: I0910 00:09:15.570206 2088 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 10 00:09:15.570250 kubelet[2088]: I0910 00:09:15.570237 2088 state_mem.go:35] "Initializing new in-memory state store"
Sep 10 00:09:15.576790 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 10 00:09:15.583603 kubelet[2088]: E0910 00:09:15.583553 2088 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 00:09:15.589225 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 10 00:09:15.592715 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 10 00:09:15.603994 kubelet[2088]: I0910 00:09:15.603960 2088 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 00:09:15.604239 kubelet[2088]: I0910 00:09:15.604195 2088 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 00:09:15.604239 kubelet[2088]: I0910 00:09:15.604219 2088 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 00:09:15.605064 kubelet[2088]: I0910 00:09:15.604493 2088 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 00:09:15.605800 kubelet[2088]: E0910 00:09:15.605758 2088 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 10 00:09:15.661132 kubelet[2088]: E0910 00:09:15.660973 2088 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="400ms" Sep 10 00:09:15.706298 kubelet[2088]: I0910 00:09:15.706250 2088 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:09:15.706784 kubelet[2088]: E0910 00:09:15.706757 2088 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Sep 10 00:09:15.796412 systemd[1]: Created 
slice kubepods-burstable-pod6e0664447e2130d6a9bd348c77837f9b.slice - libcontainer container kubepods-burstable-pod6e0664447e2130d6a9bd348c77837f9b.slice. Sep 10 00:09:15.823736 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Sep 10 00:09:15.836992 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. Sep 10 00:09:15.859544 kubelet[2088]: I0910 00:09:15.859246 2088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:09:15.859544 kubelet[2088]: I0910 00:09:15.859283 2088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:09:15.859544 kubelet[2088]: I0910 00:09:15.859334 2088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:09:15.859544 kubelet[2088]: I0910 00:09:15.859384 2088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/6e0664447e2130d6a9bd348c77837f9b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6e0664447e2130d6a9bd348c77837f9b\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:09:15.859544 kubelet[2088]: I0910 00:09:15.859405 2088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:09:15.859990 kubelet[2088]: I0910 00:09:15.859423 2088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:09:15.859990 kubelet[2088]: I0910 00:09:15.859441 2088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 10 00:09:15.859990 kubelet[2088]: I0910 00:09:15.859464 2088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e0664447e2130d6a9bd348c77837f9b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6e0664447e2130d6a9bd348c77837f9b\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:09:15.859990 kubelet[2088]: I0910 00:09:15.859508 2088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/6e0664447e2130d6a9bd348c77837f9b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6e0664447e2130d6a9bd348c77837f9b\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:09:15.908977 kubelet[2088]: I0910 00:09:15.908900 2088 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:09:15.909317 kubelet[2088]: E0910 00:09:15.909282 2088 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Sep 10 00:09:16.062341 kubelet[2088]: E0910 00:09:16.062208 2088 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="800ms" Sep 10 00:09:16.121980 kubelet[2088]: E0910 00:09:16.121921 2088 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:16.122683 containerd[1439]: time="2025-09-10T00:09:16.122643983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6e0664447e2130d6a9bd348c77837f9b,Namespace:kube-system,Attempt:0,}" Sep 10 00:09:16.135416 kubelet[2088]: E0910 00:09:16.135326 2088 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:16.135811 containerd[1439]: time="2025-09-10T00:09:16.135776901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 10 00:09:16.139553 kubelet[2088]: E0910 00:09:16.139261 2088 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:16.140414 containerd[1439]: time="2025-09-10T00:09:16.139699970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 10 00:09:16.311531 kubelet[2088]: I0910 00:09:16.311383 2088 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:09:16.312149 kubelet[2088]: E0910 00:09:16.312118 2088 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Sep 10 00:09:16.631707 kubelet[2088]: W0910 00:09:16.631269 2088 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Sep 10 00:09:16.631707 kubelet[2088]: E0910 00:09:16.631365 2088 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:09:16.632781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2715650288.mount: Deactivated successfully. 
Sep 10 00:09:16.640348 containerd[1439]: time="2025-09-10T00:09:16.639758515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:09:16.641298 containerd[1439]: time="2025-09-10T00:09:16.641209063Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 10 00:09:16.644276 containerd[1439]: time="2025-09-10T00:09:16.643775262Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:09:16.645206 containerd[1439]: time="2025-09-10T00:09:16.645159462Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:09:16.646370 containerd[1439]: time="2025-09-10T00:09:16.646308208Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:09:16.647090 containerd[1439]: time="2025-09-10T00:09:16.647009131Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 10 00:09:16.647090 containerd[1439]: time="2025-09-10T00:09:16.647217096Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 10 00:09:16.650124 containerd[1439]: time="2025-09-10T00:09:16.649238074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:09:16.652758 
containerd[1439]: time="2025-09-10T00:09:16.652228525Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 516.363388ms" Sep 10 00:09:16.654061 containerd[1439]: time="2025-09-10T00:09:16.654002964Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 531.271346ms" Sep 10 00:09:16.656177 containerd[1439]: time="2025-09-10T00:09:16.656124663Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 516.339059ms" Sep 10 00:09:16.771487 containerd[1439]: time="2025-09-10T00:09:16.771378417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:09:16.771487 containerd[1439]: time="2025-09-10T00:09:16.771443443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:09:16.772424 containerd[1439]: time="2025-09-10T00:09:16.772193787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:09:16.772424 containerd[1439]: time="2025-09-10T00:09:16.772245127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:09:16.772424 containerd[1439]: time="2025-09-10T00:09:16.772259853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:09:16.772424 containerd[1439]: time="2025-09-10T00:09:16.772381383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:09:16.773493 containerd[1439]: time="2025-09-10T00:09:16.773231527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:09:16.773493 containerd[1439]: time="2025-09-10T00:09:16.773320603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:09:16.773493 containerd[1439]: time="2025-09-10T00:09:16.773336449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:09:16.773753 containerd[1439]: time="2025-09-10T00:09:16.773620765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:09:16.773801 containerd[1439]: time="2025-09-10T00:09:16.773627887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:09:16.774490 containerd[1439]: time="2025-09-10T00:09:16.774412965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:09:16.795155 kubelet[2088]: W0910 00:09:16.795096 2088 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Sep 10 00:09:16.795275 kubelet[2088]: E0910 00:09:16.795163 2088 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:09:16.801289 systemd[1]: Started cri-containerd-10f6454216c5f477aad4151078d0b4c3e300f05f2814c9c577d86cfb471f6855.scope - libcontainer container 10f6454216c5f477aad4151078d0b4c3e300f05f2814c9c577d86cfb471f6855. Sep 10 00:09:16.802881 systemd[1]: Started cri-containerd-566ea4626adceb1c43b7e834721bbe6889f2cc2ae2df41bca456ad8eb6ddc495.scope - libcontainer container 566ea4626adceb1c43b7e834721bbe6889f2cc2ae2df41bca456ad8eb6ddc495. Sep 10 00:09:16.804219 systemd[1]: Started cri-containerd-6aafb9d90c1c0f6bd7318b956a760890d127eae37c50e6ecfcb2da62405afb0f.scope - libcontainer container 6aafb9d90c1c0f6bd7318b956a760890d127eae37c50e6ecfcb2da62405afb0f. 
Sep 10 00:09:16.847304 containerd[1439]: time="2025-09-10T00:09:16.846434051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6e0664447e2130d6a9bd348c77837f9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"10f6454216c5f477aad4151078d0b4c3e300f05f2814c9c577d86cfb471f6855\"" Sep 10 00:09:16.847862 kubelet[2088]: E0910 00:09:16.847812 2088 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:16.849422 containerd[1439]: time="2025-09-10T00:09:16.849389528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"6aafb9d90c1c0f6bd7318b956a760890d127eae37c50e6ecfcb2da62405afb0f\"" Sep 10 00:09:16.850458 kubelet[2088]: E0910 00:09:16.850432 2088 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:16.851119 containerd[1439]: time="2025-09-10T00:09:16.850404539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"566ea4626adceb1c43b7e834721bbe6889f2cc2ae2df41bca456ad8eb6ddc495\"" Sep 10 00:09:16.852582 kubelet[2088]: E0910 00:09:16.852555 2088 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:16.854441 containerd[1439]: time="2025-09-10T00:09:16.854408961Z" level=info msg="CreateContainer within sandbox \"566ea4626adceb1c43b7e834721bbe6889f2cc2ae2df41bca456ad8eb6ddc495\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 10 00:09:16.854673 containerd[1439]: 
time="2025-09-10T00:09:16.854486552Z" level=info msg="CreateContainer within sandbox \"6aafb9d90c1c0f6bd7318b956a760890d127eae37c50e6ecfcb2da62405afb0f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 10 00:09:16.855571 containerd[1439]: time="2025-09-10T00:09:16.855262226Z" level=info msg="CreateContainer within sandbox \"10f6454216c5f477aad4151078d0b4c3e300f05f2814c9c577d86cfb471f6855\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 10 00:09:16.863037 kubelet[2088]: E0910 00:09:16.862981 2088 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="1.6s" Sep 10 00:09:16.871724 containerd[1439]: time="2025-09-10T00:09:16.871654305Z" level=info msg="CreateContainer within sandbox \"566ea4626adceb1c43b7e834721bbe6889f2cc2ae2df41bca456ad8eb6ddc495\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"67c72e2555e8a78f0f4353c6b6d14414eab02991242b348ce1bb4ad0f0266ff5\"" Sep 10 00:09:16.872492 containerd[1439]: time="2025-09-10T00:09:16.872426977Z" level=info msg="StartContainer for \"67c72e2555e8a78f0f4353c6b6d14414eab02991242b348ce1bb4ad0f0266ff5\"" Sep 10 00:09:16.877345 containerd[1439]: time="2025-09-10T00:09:16.877287706Z" level=info msg="CreateContainer within sandbox \"6aafb9d90c1c0f6bd7318b956a760890d127eae37c50e6ecfcb2da62405afb0f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"25a0c7ac71dde6d5e0f811a6d973975938d49f3e5e58fbcb0a407861781cd75f\"" Sep 10 00:09:16.877615 containerd[1439]: time="2025-09-10T00:09:16.877584466Z" level=info msg="CreateContainer within sandbox \"10f6454216c5f477aad4151078d0b4c3e300f05f2814c9c577d86cfb471f6855\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"166738e8aa75c260b12836964acf8efa958187076f967df9864a82960a136e72\"" Sep 10 00:09:16.878181 containerd[1439]: time="2025-09-10T00:09:16.877802314Z" level=info msg="StartContainer for \"25a0c7ac71dde6d5e0f811a6d973975938d49f3e5e58fbcb0a407861781cd75f\"" Sep 10 00:09:16.878484 containerd[1439]: time="2025-09-10T00:09:16.878459540Z" level=info msg="StartContainer for \"166738e8aa75c260b12836964acf8efa958187076f967df9864a82960a136e72\"" Sep 10 00:09:16.891676 kubelet[2088]: W0910 00:09:16.891556 2088 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Sep 10 00:09:16.891676 kubelet[2088]: E0910 00:09:16.891623 2088 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:09:16.906251 systemd[1]: Started cri-containerd-67c72e2555e8a78f0f4353c6b6d14414eab02991242b348ce1bb4ad0f0266ff5.scope - libcontainer container 67c72e2555e8a78f0f4353c6b6d14414eab02991242b348ce1bb4ad0f0266ff5. Sep 10 00:09:16.909500 systemd[1]: Started cri-containerd-166738e8aa75c260b12836964acf8efa958187076f967df9864a82960a136e72.scope - libcontainer container 166738e8aa75c260b12836964acf8efa958187076f967df9864a82960a136e72. Sep 10 00:09:16.910407 systemd[1]: Started cri-containerd-25a0c7ac71dde6d5e0f811a6d973975938d49f3e5e58fbcb0a407861781cd75f.scope - libcontainer container 25a0c7ac71dde6d5e0f811a6d973975938d49f3e5e58fbcb0a407861781cd75f. 
Sep 10 00:09:16.927648 kubelet[2088]: W0910 00:09:16.927559 2088 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Sep 10 00:09:16.927648 kubelet[2088]: E0910 00:09:16.927641 2088 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:09:16.954903 containerd[1439]: time="2025-09-10T00:09:16.954022541Z" level=info msg="StartContainer for \"67c72e2555e8a78f0f4353c6b6d14414eab02991242b348ce1bb4ad0f0266ff5\" returns successfully" Sep 10 00:09:16.954903 containerd[1439]: time="2025-09-10T00:09:16.954222142Z" level=info msg="StartContainer for \"166738e8aa75c260b12836964acf8efa958187076f967df9864a82960a136e72\" returns successfully" Sep 10 00:09:16.954903 containerd[1439]: time="2025-09-10T00:09:16.954252274Z" level=info msg="StartContainer for \"25a0c7ac71dde6d5e0f811a6d973975938d49f3e5e58fbcb0a407861781cd75f\" returns successfully" Sep 10 00:09:17.116054 kubelet[2088]: I0910 00:09:17.114105 2088 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:09:17.492469 kubelet[2088]: E0910 00:09:17.491911 2088 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:17.493218 kubelet[2088]: E0910 00:09:17.493198 2088 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:17.498752 kubelet[2088]: E0910 00:09:17.498721 2088 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:18.498985 kubelet[2088]: E0910 00:09:18.498893 2088 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:19.177096 kubelet[2088]: E0910 00:09:19.177053 2088 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 10 00:09:19.240443 kubelet[2088]: I0910 00:09:19.237579 2088 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 10 00:09:19.446324 kubelet[2088]: I0910 00:09:19.446219 2088 apiserver.go:52] "Watching apiserver" Sep 10 00:09:19.457923 kubelet[2088]: I0910 00:09:19.457897 2088 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 10 00:09:19.503895 kubelet[2088]: E0910 00:09:19.503628 2088 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 10 00:09:19.503895 kubelet[2088]: E0910 00:09:19.503792 2088 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:21.145491 systemd[1]: Reloading requested from client PID 2369 ('systemctl') (unit session-7.scope)... Sep 10 00:09:21.145507 systemd[1]: Reloading... Sep 10 00:09:21.210169 zram_generator::config[2412]: No configuration found. Sep 10 00:09:21.329489 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 10 00:09:21.394773 systemd[1]: Reloading finished in 248 ms. Sep 10 00:09:21.426896 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:09:21.448940 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 00:09:21.449205 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:09:21.449258 systemd[1]: kubelet.service: Consumed 1.124s CPU time, 128.5M memory peak, 0B memory swap peak. Sep 10 00:09:21.458367 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:09:21.558918 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:09:21.562612 (kubelet)[2450]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 00:09:21.601265 kubelet[2450]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:09:21.601265 kubelet[2450]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 10 00:09:21.601265 kubelet[2450]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 10 00:09:21.601720 kubelet[2450]: I0910 00:09:21.601316 2450 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 00:09:21.606783 kubelet[2450]: I0910 00:09:21.606738 2450 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 10 00:09:21.606783 kubelet[2450]: I0910 00:09:21.606767 2450 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 00:09:21.606998 kubelet[2450]: I0910 00:09:21.606981 2450 server.go:934] "Client rotation is on, will bootstrap in background" Sep 10 00:09:21.608290 kubelet[2450]: I0910 00:09:21.608265 2450 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 10 00:09:21.610178 kubelet[2450]: I0910 00:09:21.610150 2450 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 00:09:21.613837 kubelet[2450]: E0910 00:09:21.613800 2450 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 10 00:09:21.613837 kubelet[2450]: I0910 00:09:21.613833 2450 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 10 00:09:21.616089 kubelet[2450]: I0910 00:09:21.616070 2450 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 00:09:21.616191 kubelet[2450]: I0910 00:09:21.616179 2450 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 10 00:09:21.616293 kubelet[2450]: I0910 00:09:21.616272 2450 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 00:09:21.616442 kubelet[2450]: I0910 00:09:21.616296 2450 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Sep 10 00:09:21.616532 kubelet[2450]: I0910 00:09:21.616449 2450 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 00:09:21.616532 kubelet[2450]: I0910 00:09:21.616470 2450 container_manager_linux.go:300] "Creating device plugin manager" Sep 10 00:09:21.616532 kubelet[2450]: I0910 00:09:21.616504 2450 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:09:21.616592 kubelet[2450]: I0910 00:09:21.616580 2450 kubelet.go:408] "Attempting to sync node with API server" Sep 10 00:09:21.616592 kubelet[2450]: I0910 00:09:21.616591 2450 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 00:09:21.616632 kubelet[2450]: I0910 00:09:21.616607 2450 kubelet.go:314] "Adding apiserver pod source" Sep 10 00:09:21.616632 kubelet[2450]: I0910 00:09:21.616619 2450 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 00:09:21.617690 kubelet[2450]: I0910 00:09:21.617652 2450 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 10 00:09:21.620064 kubelet[2450]: I0910 00:09:21.618172 2450 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 00:09:21.620064 kubelet[2450]: I0910 00:09:21.618753 2450 server.go:1274] "Started kubelet" Sep 10 00:09:21.620064 kubelet[2450]: I0910 00:09:21.619331 2450 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 00:09:21.621190 kubelet[2450]: I0910 00:09:21.621101 2450 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 00:09:21.621306 kubelet[2450]: I0910 00:09:21.621281 2450 server.go:449] "Adding debug handlers to kubelet server" Sep 10 00:09:21.625562 kubelet[2450]: I0910 00:09:21.623112 2450 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 00:09:21.625562 kubelet[2450]: I0910 00:09:21.621400 2450 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 00:09:21.625562 kubelet[2450]: I0910 00:09:21.621374 2450 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 00:09:21.625562 kubelet[2450]: I0910 00:09:21.624781 2450 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 10 00:09:21.625562 kubelet[2450]: I0910 00:09:21.625087 2450 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 10 00:09:21.625562 kubelet[2450]: I0910 00:09:21.625202 2450 reconciler.go:26] "Reconciler: start to sync state" Sep 10 00:09:21.626140 kubelet[2450]: E0910 00:09:21.625809 2450 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:09:21.630550 kubelet[2450]: I0910 00:09:21.630515 2450 factory.go:221] Registration of the systemd container factory successfully Sep 10 00:09:21.630771 kubelet[2450]: I0910 00:09:21.630748 2450 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 00:09:21.636625 kubelet[2450]: I0910 00:09:21.636590 2450 factory.go:221] Registration of the containerd container factory successfully Sep 10 00:09:21.639196 kubelet[2450]: E0910 00:09:21.639169 2450 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 00:09:21.640606 kubelet[2450]: I0910 00:09:21.640559 2450 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 00:09:21.641608 kubelet[2450]: I0910 00:09:21.641573 2450 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 10 00:09:21.641608 kubelet[2450]: I0910 00:09:21.641598 2450 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 10 00:09:21.641608 kubelet[2450]: I0910 00:09:21.641615 2450 kubelet.go:2321] "Starting kubelet main sync loop" Sep 10 00:09:21.641742 kubelet[2450]: E0910 00:09:21.641661 2450 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 00:09:21.675556 kubelet[2450]: I0910 00:09:21.675521 2450 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 10 00:09:21.675556 kubelet[2450]: I0910 00:09:21.675541 2450 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 10 00:09:21.675556 kubelet[2450]: I0910 00:09:21.675563 2450 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:09:21.675722 kubelet[2450]: I0910 00:09:21.675714 2450 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 10 00:09:21.675745 kubelet[2450]: I0910 00:09:21.675725 2450 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 10 00:09:21.675745 kubelet[2450]: I0910 00:09:21.675744 2450 policy_none.go:49] "None policy: Start" Sep 10 00:09:21.677131 kubelet[2450]: I0910 00:09:21.676256 2450 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 10 00:09:21.677131 kubelet[2450]: I0910 00:09:21.676284 2450 state_mem.go:35] "Initializing new in-memory state store" Sep 10 00:09:21.677131 kubelet[2450]: I0910 00:09:21.676430 2450 state_mem.go:75] "Updated machine memory state" Sep 10 00:09:21.682564 kubelet[2450]: I0910 00:09:21.682538 2450 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 00:09:21.682767 kubelet[2450]: I0910 00:09:21.682691 2450 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 00:09:21.682767 kubelet[2450]: I0910 00:09:21.682703 2450 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 00:09:21.683859 kubelet[2450]: I0910 00:09:21.682989 2450 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 00:09:21.786613 kubelet[2450]: I0910 00:09:21.786578 2450 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:09:21.792361 kubelet[2450]: I0910 00:09:21.792330 2450 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 10 00:09:21.792464 kubelet[2450]: I0910 00:09:21.792409 2450 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 10 00:09:21.826394 kubelet[2450]: I0910 00:09:21.826342 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e0664447e2130d6a9bd348c77837f9b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6e0664447e2130d6a9bd348c77837f9b\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:09:21.826394 kubelet[2450]: I0910 00:09:21.826388 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e0664447e2130d6a9bd348c77837f9b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6e0664447e2130d6a9bd348c77837f9b\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:09:21.826565 kubelet[2450]: I0910 00:09:21.826410 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:09:21.826565 kubelet[2450]: I0910 00:09:21.826430 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:09:21.826565 kubelet[2450]: I0910 00:09:21.826448 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 10 00:09:21.826565 kubelet[2450]: I0910 00:09:21.826462 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e0664447e2130d6a9bd348c77837f9b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6e0664447e2130d6a9bd348c77837f9b\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:09:21.826565 kubelet[2450]: I0910 00:09:21.826476 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:09:21.826680 kubelet[2450]: I0910 00:09:21.826491 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:09:21.826680 kubelet[2450]: I0910 00:09:21.826515 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:09:22.047524 kubelet[2450]: E0910 00:09:22.047400 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:22.048380 kubelet[2450]: E0910 00:09:22.048348 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:22.048482 kubelet[2450]: E0910 00:09:22.048456 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:22.141947 sudo[2486]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 10 00:09:22.142266 sudo[2486]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 10 00:09:22.570672 sudo[2486]: pam_unix(sudo:session): session closed for user root Sep 10 00:09:22.617232 kubelet[2450]: I0910 00:09:22.617185 2450 apiserver.go:52] "Watching apiserver" Sep 10 00:09:22.625699 kubelet[2450]: I0910 00:09:22.625658 2450 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 10 00:09:22.661081 kubelet[2450]: E0910 00:09:22.661027 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:22.661228 kubelet[2450]: E0910 00:09:22.661214 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:22.662243 kubelet[2450]: E0910 00:09:22.662218 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:22.681565 kubelet[2450]: I0910 00:09:22.681369 2450 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.681331833 podStartE2EDuration="1.681331833s" podCreationTimestamp="2025-09-10 00:09:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:09:22.681291185 +0000 UTC m=+1.115781423" watchObservedRunningTime="2025-09-10 00:09:22.681331833 +0000 UTC m=+1.115822071" Sep 10 00:09:22.700065 kubelet[2450]: I0910 00:09:22.698224 2450 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.69820434 podStartE2EDuration="1.69820434s" podCreationTimestamp="2025-09-10 00:09:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:09:22.690195164 +0000 UTC m=+1.124685402" watchObservedRunningTime="2025-09-10 00:09:22.69820434 +0000 UTC m=+1.132694578" Sep 10 00:09:22.716184 kubelet[2450]: I0910 00:09:22.716120 2450 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.716100113 podStartE2EDuration="1.716100113s" podCreationTimestamp="2025-09-10 00:09:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:09:22.700226708 +0000 UTC m=+1.134716946" watchObservedRunningTime="2025-09-10 00:09:22.716100113 +0000 UTC m=+1.150590351" Sep 10 00:09:23.661896 kubelet[2450]: E0910 
00:09:23.661838 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:23.662490 kubelet[2450]: E0910 00:09:23.662455 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:24.021143 sudo[1611]: pam_unix(sudo:session): session closed for user root Sep 10 00:09:24.022741 sshd[1608]: pam_unix(sshd:session): session closed for user core Sep 10 00:09:24.026110 systemd[1]: sshd@6-10.0.0.85:22-10.0.0.1:46616.service: Deactivated successfully. Sep 10 00:09:24.027911 systemd[1]: session-7.scope: Deactivated successfully. Sep 10 00:09:24.028869 systemd[1]: session-7.scope: Consumed 7.375s CPU time, 149.1M memory peak, 0B memory swap peak. Sep 10 00:09:24.029517 systemd-logind[1418]: Session 7 logged out. Waiting for processes to exit. Sep 10 00:09:24.030463 systemd-logind[1418]: Removed session 7. Sep 10 00:09:25.511860 kubelet[2450]: E0910 00:09:25.511828 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:26.630595 kubelet[2450]: I0910 00:09:26.630568 2450 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 10 00:09:26.630945 containerd[1439]: time="2025-09-10T00:09:26.630873471Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 10 00:09:26.631177 kubelet[2450]: I0910 00:09:26.631055 2450 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 10 00:09:27.598998 systemd[1]: Created slice kubepods-besteffort-pod19f070d2_35fc_4fdd_b857_8ff46f15717d.slice - libcontainer container kubepods-besteffort-pod19f070d2_35fc_4fdd_b857_8ff46f15717d.slice. Sep 10 00:09:27.614856 systemd[1]: Created slice kubepods-burstable-poded374fb3_8398_4da3_9ad3_df1de07b0c9d.slice - libcontainer container kubepods-burstable-poded374fb3_8398_4da3_9ad3_df1de07b0c9d.slice. Sep 10 00:09:27.667653 kubelet[2450]: I0910 00:09:27.667510 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-clustermesh-secrets\") pod \"cilium-bztq2\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") " pod="kube-system/cilium-bztq2" Sep 10 00:09:27.667653 kubelet[2450]: I0910 00:09:27.667552 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh5d5\" (UniqueName: \"kubernetes.io/projected/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-kube-api-access-nh5d5\") pod \"cilium-bztq2\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") " pod="kube-system/cilium-bztq2" Sep 10 00:09:27.667653 kubelet[2450]: I0910 00:09:27.667574 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-bpf-maps\") pod \"cilium-bztq2\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") " pod="kube-system/cilium-bztq2" Sep 10 00:09:27.667653 kubelet[2450]: I0910 00:09:27.667590 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-host-proc-sys-kernel\") pod 
\"cilium-bztq2\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") " pod="kube-system/cilium-bztq2" Sep 10 00:09:27.667653 kubelet[2450]: I0910 00:09:27.667628 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19f070d2-35fc-4fdd-b857-8ff46f15717d-xtables-lock\") pod \"kube-proxy-n5lbz\" (UID: \"19f070d2-35fc-4fdd-b857-8ff46f15717d\") " pod="kube-system/kube-proxy-n5lbz" Sep 10 00:09:27.668388 kubelet[2450]: I0910 00:09:27.667673 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2ckk\" (UniqueName: \"kubernetes.io/projected/19f070d2-35fc-4fdd-b857-8ff46f15717d-kube-api-access-q2ckk\") pod \"kube-proxy-n5lbz\" (UID: \"19f070d2-35fc-4fdd-b857-8ff46f15717d\") " pod="kube-system/kube-proxy-n5lbz" Sep 10 00:09:27.668388 kubelet[2450]: I0910 00:09:27.667715 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-hostproc\") pod \"cilium-bztq2\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") " pod="kube-system/cilium-bztq2" Sep 10 00:09:27.668388 kubelet[2450]: I0910 00:09:27.667742 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-lib-modules\") pod \"cilium-bztq2\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") " pod="kube-system/cilium-bztq2" Sep 10 00:09:27.668388 kubelet[2450]: I0910 00:09:27.667768 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-cilium-config-path\") pod \"cilium-bztq2\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") " pod="kube-system/cilium-bztq2" Sep 10 
00:09:27.668388 kubelet[2450]: I0910 00:09:27.667795 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-etc-cni-netd\") pod \"cilium-bztq2\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") " pod="kube-system/cilium-bztq2" Sep 10 00:09:27.668388 kubelet[2450]: I0910 00:09:27.667809 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/19f070d2-35fc-4fdd-b857-8ff46f15717d-kube-proxy\") pod \"kube-proxy-n5lbz\" (UID: \"19f070d2-35fc-4fdd-b857-8ff46f15717d\") " pod="kube-system/kube-proxy-n5lbz" Sep 10 00:09:27.668525 kubelet[2450]: I0910 00:09:27.667823 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-cni-path\") pod \"cilium-bztq2\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") " pod="kube-system/cilium-bztq2" Sep 10 00:09:27.668525 kubelet[2450]: I0910 00:09:27.667841 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-hubble-tls\") pod \"cilium-bztq2\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") " pod="kube-system/cilium-bztq2" Sep 10 00:09:27.668525 kubelet[2450]: I0910 00:09:27.667857 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-cilium-cgroup\") pod \"cilium-bztq2\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") " pod="kube-system/cilium-bztq2" Sep 10 00:09:27.668525 kubelet[2450]: I0910 00:09:27.667872 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-host-proc-sys-net\") pod \"cilium-bztq2\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") " pod="kube-system/cilium-bztq2" Sep 10 00:09:27.668525 kubelet[2450]: I0910 00:09:27.667887 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19f070d2-35fc-4fdd-b857-8ff46f15717d-lib-modules\") pod \"kube-proxy-n5lbz\" (UID: \"19f070d2-35fc-4fdd-b857-8ff46f15717d\") " pod="kube-system/kube-proxy-n5lbz" Sep 10 00:09:27.668525 kubelet[2450]: I0910 00:09:27.667906 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-cilium-run\") pod \"cilium-bztq2\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") " pod="kube-system/cilium-bztq2" Sep 10 00:09:27.668643 kubelet[2450]: I0910 00:09:27.667920 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-xtables-lock\") pod \"cilium-bztq2\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") " pod="kube-system/cilium-bztq2" Sep 10 00:09:27.741720 systemd[1]: Created slice kubepods-besteffort-pod64c2349a_5daa_47dd_b860_8667a549ed85.slice - libcontainer container kubepods-besteffort-pod64c2349a_5daa_47dd_b860_8667a549ed85.slice. 
Sep 10 00:09:27.768635 kubelet[2450]: I0910 00:09:27.768221 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/64c2349a-5daa-47dd-b860-8667a549ed85-cilium-config-path\") pod \"cilium-operator-5d85765b45-pwmb7\" (UID: \"64c2349a-5daa-47dd-b860-8667a549ed85\") " pod="kube-system/cilium-operator-5d85765b45-pwmb7" Sep 10 00:09:27.768635 kubelet[2450]: I0910 00:09:27.768282 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l88s9\" (UniqueName: \"kubernetes.io/projected/64c2349a-5daa-47dd-b860-8667a549ed85-kube-api-access-l88s9\") pod \"cilium-operator-5d85765b45-pwmb7\" (UID: \"64c2349a-5daa-47dd-b860-8667a549ed85\") " pod="kube-system/cilium-operator-5d85765b45-pwmb7" Sep 10 00:09:27.911671 kubelet[2450]: E0910 00:09:27.910816 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:27.912240 containerd[1439]: time="2025-09-10T00:09:27.912094542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n5lbz,Uid:19f070d2-35fc-4fdd-b857-8ff46f15717d,Namespace:kube-system,Attempt:0,}" Sep 10 00:09:27.916951 kubelet[2450]: E0910 00:09:27.916866 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:27.918554 containerd[1439]: time="2025-09-10T00:09:27.918485462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bztq2,Uid:ed374fb3-8398-4da3-9ad3-df1de07b0c9d,Namespace:kube-system,Attempt:0,}" Sep 10 00:09:27.937172 containerd[1439]: time="2025-09-10T00:09:27.936760862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:09:27.937172 containerd[1439]: time="2025-09-10T00:09:27.936818667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:09:27.937172 containerd[1439]: time="2025-09-10T00:09:27.936833988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:09:27.937172 containerd[1439]: time="2025-09-10T00:09:27.937021324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:09:27.954073 containerd[1439]: time="2025-09-10T00:09:27.953867119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:09:27.954189 containerd[1439]: time="2025-09-10T00:09:27.953971488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:09:27.954189 containerd[1439]: time="2025-09-10T00:09:27.954138983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:09:27.954431 containerd[1439]: time="2025-09-10T00:09:27.954397086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:09:27.958261 systemd[1]: Started cri-containerd-e5edf2faffb2843c779d86d660c0d802bf2cb87660f19480e3cd96d2a908ebc2.scope - libcontainer container e5edf2faffb2843c779d86d660c0d802bf2cb87660f19480e3cd96d2a908ebc2. Sep 10 00:09:27.976240 systemd[1]: Started cri-containerd-dfb751c88fa87c20a43d699c04df8341f8074711b967533ea37bc9b8544bf81a.scope - libcontainer container dfb751c88fa87c20a43d699c04df8341f8074711b967533ea37bc9b8544bf81a. 
Sep 10 00:09:27.997765 containerd[1439]: time="2025-09-10T00:09:27.997636551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n5lbz,Uid:19f070d2-35fc-4fdd-b857-8ff46f15717d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5edf2faffb2843c779d86d660c0d802bf2cb87660f19480e3cd96d2a908ebc2\"" Sep 10 00:09:27.998713 kubelet[2450]: E0910 00:09:27.998682 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:28.001169 containerd[1439]: time="2025-09-10T00:09:28.001032528Z" level=info msg="CreateContainer within sandbox \"e5edf2faffb2843c779d86d660c0d802bf2cb87660f19480e3cd96d2a908ebc2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 10 00:09:28.002966 containerd[1439]: time="2025-09-10T00:09:28.002933206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bztq2,Uid:ed374fb3-8398-4da3-9ad3-df1de07b0c9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"dfb751c88fa87c20a43d699c04df8341f8074711b967533ea37bc9b8544bf81a\"" Sep 10 00:09:28.003486 kubelet[2450]: E0910 00:09:28.003463 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:28.004669 containerd[1439]: time="2025-09-10T00:09:28.004628586Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 10 00:09:28.022646 containerd[1439]: time="2025-09-10T00:09:28.022593193Z" level=info msg="CreateContainer within sandbox \"e5edf2faffb2843c779d86d660c0d802bf2cb87660f19480e3cd96d2a908ebc2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ca0e20410ae87fd75cd93626594b3ebdec9bda1666ce597b27d4e9c88364a562\"" Sep 10 00:09:28.023364 containerd[1439]: time="2025-09-10T00:09:28.023327694Z" 
level=info msg="StartContainer for \"ca0e20410ae87fd75cd93626594b3ebdec9bda1666ce597b27d4e9c88364a562\"" Sep 10 00:09:28.045182 kubelet[2450]: E0910 00:09:28.045145 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:28.046144 containerd[1439]: time="2025-09-10T00:09:28.046103220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pwmb7,Uid:64c2349a-5daa-47dd-b860-8667a549ed85,Namespace:kube-system,Attempt:0,}" Sep 10 00:09:28.053218 systemd[1]: Started cri-containerd-ca0e20410ae87fd75cd93626594b3ebdec9bda1666ce597b27d4e9c88364a562.scope - libcontainer container ca0e20410ae87fd75cd93626594b3ebdec9bda1666ce597b27d4e9c88364a562. Sep 10 00:09:28.084319 containerd[1439]: time="2025-09-10T00:09:28.084270500Z" level=info msg="StartContainer for \"ca0e20410ae87fd75cd93626594b3ebdec9bda1666ce597b27d4e9c88364a562\" returns successfully" Sep 10 00:09:28.084783 containerd[1439]: time="2025-09-10T00:09:28.084460675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:09:28.084904 containerd[1439]: time="2025-09-10T00:09:28.084524801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:09:28.084904 containerd[1439]: time="2025-09-10T00:09:28.084540162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:09:28.084904 containerd[1439]: time="2025-09-10T00:09:28.084619528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:09:28.104305 systemd[1]: Started cri-containerd-e3a646ca5f8591900dd0f9d184bcd79880d5011dcd03ea27a855d44e54acffe3.scope - libcontainer container e3a646ca5f8591900dd0f9d184bcd79880d5011dcd03ea27a855d44e54acffe3. Sep 10 00:09:28.139092 containerd[1439]: time="2025-09-10T00:09:28.139015152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pwmb7,Uid:64c2349a-5daa-47dd-b860-8667a549ed85,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3a646ca5f8591900dd0f9d184bcd79880d5011dcd03ea27a855d44e54acffe3\"" Sep 10 00:09:28.141075 kubelet[2450]: E0910 00:09:28.140532 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:28.672720 kubelet[2450]: E0910 00:09:28.672670 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:30.020848 kubelet[2450]: E0910 00:09:30.020482 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:30.045006 kubelet[2450]: I0910 00:09:30.044952 2450 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n5lbz" podStartSLOduration=3.044933949 podStartE2EDuration="3.044933949s" podCreationTimestamp="2025-09-10 00:09:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:09:28.688256222 +0000 UTC m=+7.122746460" watchObservedRunningTime="2025-09-10 00:09:30.044933949 +0000 UTC m=+8.479424187" Sep 10 00:09:30.676154 kubelet[2450]: E0910 00:09:30.676111 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:33.426326 kubelet[2450]: E0910 00:09:33.426230 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:35.524292 kubelet[2450]: E0910 00:09:35.524256 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:35.684692 kubelet[2450]: E0910 00:09:35.684661 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:37.135957 update_engine[1420]: I20250910 00:09:37.135879 1420 update_attempter.cc:509] Updating boot flags... Sep 10 00:09:37.159159 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2830) Sep 10 00:09:37.197081 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2834) Sep 10 00:09:42.230225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2940592561.mount: Deactivated successfully. 
Sep 10 00:09:43.513241 containerd[1439]: time="2025-09-10T00:09:43.513178773Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:09:43.513728 containerd[1439]: time="2025-09-10T00:09:43.513685873Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 10 00:09:43.514538 containerd[1439]: time="2025-09-10T00:09:43.514506465Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:09:43.516145 containerd[1439]: time="2025-09-10T00:09:43.516113807Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 15.511433537s" Sep 10 00:09:43.516205 containerd[1439]: time="2025-09-10T00:09:43.516150568Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 10 00:09:43.520198 containerd[1439]: time="2025-09-10T00:09:43.519987516Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 10 00:09:43.520981 containerd[1439]: time="2025-09-10T00:09:43.520797747Z" level=info msg="CreateContainer within sandbox \"dfb751c88fa87c20a43d699c04df8341f8074711b967533ea37bc9b8544bf81a\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 10 00:09:43.552717 containerd[1439]: time="2025-09-10T00:09:43.552671578Z" level=info msg="CreateContainer within sandbox \"dfb751c88fa87c20a43d699c04df8341f8074711b967533ea37bc9b8544bf81a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c223da056b61abf3a5e627776db8f469f1ae0e62c951ee8176c2e95243c73bdd\"" Sep 10 00:09:43.554100 containerd[1439]: time="2025-09-10T00:09:43.553316243Z" level=info msg="StartContainer for \"c223da056b61abf3a5e627776db8f469f1ae0e62c951ee8176c2e95243c73bdd\"" Sep 10 00:09:43.582255 systemd[1]: Started cri-containerd-c223da056b61abf3a5e627776db8f469f1ae0e62c951ee8176c2e95243c73bdd.scope - libcontainer container c223da056b61abf3a5e627776db8f469f1ae0e62c951ee8176c2e95243c73bdd. Sep 10 00:09:43.601583 containerd[1439]: time="2025-09-10T00:09:43.601544544Z" level=info msg="StartContainer for \"c223da056b61abf3a5e627776db8f469f1ae0e62c951ee8176c2e95243c73bdd\" returns successfully" Sep 10 00:09:43.615004 systemd[1]: cri-containerd-c223da056b61abf3a5e627776db8f469f1ae0e62c951ee8176c2e95243c73bdd.scope: Deactivated successfully. 
Sep 10 00:09:43.744610 kubelet[2450]: E0910 00:09:43.744530 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:43.804097 containerd[1439]: time="2025-09-10T00:09:43.801457020Z" level=info msg="shim disconnected" id=c223da056b61abf3a5e627776db8f469f1ae0e62c951ee8176c2e95243c73bdd namespace=k8s.io Sep 10 00:09:43.804097 containerd[1439]: time="2025-09-10T00:09:43.803999918Z" level=warning msg="cleaning up after shim disconnected" id=c223da056b61abf3a5e627776db8f469f1ae0e62c951ee8176c2e95243c73bdd namespace=k8s.io Sep 10 00:09:43.804097 containerd[1439]: time="2025-09-10T00:09:43.804012879Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:09:44.548586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c223da056b61abf3a5e627776db8f469f1ae0e62c951ee8176c2e95243c73bdd-rootfs.mount: Deactivated successfully. Sep 10 00:09:44.607403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1345058977.mount: Deactivated successfully. 
Sep 10 00:09:44.749448 kubelet[2450]: E0910 00:09:44.748261 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:44.751652 containerd[1439]: time="2025-09-10T00:09:44.751296812Z" level=info msg="CreateContainer within sandbox \"dfb751c88fa87c20a43d699c04df8341f8074711b967533ea37bc9b8544bf81a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 10 00:09:44.771088 containerd[1439]: time="2025-09-10T00:09:44.770897095Z" level=info msg="CreateContainer within sandbox \"dfb751c88fa87c20a43d699c04df8341f8074711b967533ea37bc9b8544bf81a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8ed75de5c34083ae4f14fdddd2bcb438522dfe8e270bfcfc6b5be37ed0b237e3\"" Sep 10 00:09:44.772558 containerd[1439]: time="2025-09-10T00:09:44.772427071Z" level=info msg="StartContainer for \"8ed75de5c34083ae4f14fdddd2bcb438522dfe8e270bfcfc6b5be37ed0b237e3\"" Sep 10 00:09:44.801400 systemd[1]: Started cri-containerd-8ed75de5c34083ae4f14fdddd2bcb438522dfe8e270bfcfc6b5be37ed0b237e3.scope - libcontainer container 8ed75de5c34083ae4f14fdddd2bcb438522dfe8e270bfcfc6b5be37ed0b237e3. Sep 10 00:09:44.833915 containerd[1439]: time="2025-09-10T00:09:44.833864659Z" level=info msg="StartContainer for \"8ed75de5c34083ae4f14fdddd2bcb438522dfe8e270bfcfc6b5be37ed0b237e3\" returns successfully" Sep 10 00:09:44.847647 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 10 00:09:44.847865 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 10 00:09:44.848178 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 10 00:09:44.855833 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 10 00:09:44.856308 systemd[1]: cri-containerd-8ed75de5c34083ae4f14fdddd2bcb438522dfe8e270bfcfc6b5be37ed0b237e3.scope: Deactivated successfully. 
Sep 10 00:09:44.872803 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 10 00:09:44.903941 containerd[1439]: time="2025-09-10T00:09:44.903789199Z" level=info msg="shim disconnected" id=8ed75de5c34083ae4f14fdddd2bcb438522dfe8e270bfcfc6b5be37ed0b237e3 namespace=k8s.io Sep 10 00:09:44.903941 containerd[1439]: time="2025-09-10T00:09:44.903923444Z" level=warning msg="cleaning up after shim disconnected" id=8ed75de5c34083ae4f14fdddd2bcb438522dfe8e270bfcfc6b5be37ed0b237e3 namespace=k8s.io Sep 10 00:09:44.903941 containerd[1439]: time="2025-09-10T00:09:44.903936205Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:09:45.067941 containerd[1439]: time="2025-09-10T00:09:45.067816947Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:09:45.068406 containerd[1439]: time="2025-09-10T00:09:45.068257123Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 10 00:09:45.069265 containerd[1439]: time="2025-09-10T00:09:45.069213156Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:09:45.070758 containerd[1439]: time="2025-09-10T00:09:45.070724210Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.550692692s" Sep 10 00:09:45.071001 containerd[1439]: 
time="2025-09-10T00:09:45.070884255Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 10 00:09:45.073275 containerd[1439]: time="2025-09-10T00:09:45.073148895Z" level=info msg="CreateContainer within sandbox \"e3a646ca5f8591900dd0f9d184bcd79880d5011dcd03ea27a855d44e54acffe3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 10 00:09:45.102176 containerd[1439]: time="2025-09-10T00:09:45.102114518Z" level=info msg="CreateContainer within sandbox \"e3a646ca5f8591900dd0f9d184bcd79880d5011dcd03ea27a855d44e54acffe3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0e879f768fc2f6df2d6e18f0c0de9267e1e26ab5822b08b47f8e706863fed9af\"" Sep 10 00:09:45.102664 containerd[1439]: time="2025-09-10T00:09:45.102639537Z" level=info msg="StartContainer for \"0e879f768fc2f6df2d6e18f0c0de9267e1e26ab5822b08b47f8e706863fed9af\"" Sep 10 00:09:45.128271 systemd[1]: Started cri-containerd-0e879f768fc2f6df2d6e18f0c0de9267e1e26ab5822b08b47f8e706863fed9af.scope - libcontainer container 0e879f768fc2f6df2d6e18f0c0de9267e1e26ab5822b08b47f8e706863fed9af. Sep 10 00:09:45.152646 containerd[1439]: time="2025-09-10T00:09:45.152604342Z" level=info msg="StartContainer for \"0e879f768fc2f6df2d6e18f0c0de9267e1e26ab5822b08b47f8e706863fed9af\" returns successfully" Sep 10 00:09:45.549417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ed75de5c34083ae4f14fdddd2bcb438522dfe8e270bfcfc6b5be37ed0b237e3-rootfs.mount: Deactivated successfully. 
Sep 10 00:09:45.751119 kubelet[2450]: E0910 00:09:45.751076 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:45.755777 kubelet[2450]: E0910 00:09:45.755599 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:45.757824 containerd[1439]: time="2025-09-10T00:09:45.757782676Z" level=info msg="CreateContainer within sandbox \"dfb751c88fa87c20a43d699c04df8341f8074711b967533ea37bc9b8544bf81a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 10 00:09:45.762980 kubelet[2450]: I0910 00:09:45.762823 2450 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-pwmb7" podStartSLOduration=1.833280068 podStartE2EDuration="18.762803294s" podCreationTimestamp="2025-09-10 00:09:27 +0000 UTC" firstStartedPulling="2025-09-10 00:09:28.142264341 +0000 UTC m=+6.576754539" lastFinishedPulling="2025-09-10 00:09:45.071787527 +0000 UTC m=+23.506277765" observedRunningTime="2025-09-10 00:09:45.762296996 +0000 UTC m=+24.196787234" watchObservedRunningTime="2025-09-10 00:09:45.762803294 +0000 UTC m=+24.197293492" Sep 10 00:09:45.790451 containerd[1439]: time="2025-09-10T00:09:45.790325106Z" level=info msg="CreateContainer within sandbox \"dfb751c88fa87c20a43d699c04df8341f8074711b967533ea37bc9b8544bf81a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"26db23b75f6f4f01cbe8614e511e9e1068c1916ab435e021d75b1a922f6e19bb\"" Sep 10 00:09:45.793013 containerd[1439]: time="2025-09-10T00:09:45.791326901Z" level=info msg="StartContainer for \"26db23b75f6f4f01cbe8614e511e9e1068c1916ab435e021d75b1a922f6e19bb\"" Sep 10 00:09:45.826265 systemd[1]: Started 
cri-containerd-26db23b75f6f4f01cbe8614e511e9e1068c1916ab435e021d75b1a922f6e19bb.scope - libcontainer container 26db23b75f6f4f01cbe8614e511e9e1068c1916ab435e021d75b1a922f6e19bb. Sep 10 00:09:45.854399 containerd[1439]: time="2025-09-10T00:09:45.854345367Z" level=info msg="StartContainer for \"26db23b75f6f4f01cbe8614e511e9e1068c1916ab435e021d75b1a922f6e19bb\" returns successfully" Sep 10 00:09:45.857134 systemd[1]: cri-containerd-26db23b75f6f4f01cbe8614e511e9e1068c1916ab435e021d75b1a922f6e19bb.scope: Deactivated successfully. Sep 10 00:09:45.927219 containerd[1439]: time="2025-09-10T00:09:45.927141538Z" level=info msg="shim disconnected" id=26db23b75f6f4f01cbe8614e511e9e1068c1916ab435e021d75b1a922f6e19bb namespace=k8s.io Sep 10 00:09:45.927219 containerd[1439]: time="2025-09-10T00:09:45.927207460Z" level=warning msg="cleaning up after shim disconnected" id=26db23b75f6f4f01cbe8614e511e9e1068c1916ab435e021d75b1a922f6e19bb namespace=k8s.io Sep 10 00:09:45.927219 containerd[1439]: time="2025-09-10T00:09:45.927216301Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:09:46.548474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26db23b75f6f4f01cbe8614e511e9e1068c1916ab435e021d75b1a922f6e19bb-rootfs.mount: Deactivated successfully. 
Sep 10 00:09:46.758075 kubelet[2450]: E0910 00:09:46.757988 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:46.758075 kubelet[2450]: E0910 00:09:46.757989 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:46.760948 containerd[1439]: time="2025-09-10T00:09:46.760908381Z" level=info msg="CreateContainer within sandbox \"dfb751c88fa87c20a43d699c04df8341f8074711b967533ea37bc9b8544bf81a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 10 00:09:46.776843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1410064770.mount: Deactivated successfully. Sep 10 00:09:46.777973 containerd[1439]: time="2025-09-10T00:09:46.777940117Z" level=info msg="CreateContainer within sandbox \"dfb751c88fa87c20a43d699c04df8341f8074711b967533ea37bc9b8544bf81a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"55d2bfa4c768a1259356579bdaec327d86118c5e9fda4eb2e3035134b0d03bb5\"" Sep 10 00:09:46.778427 containerd[1439]: time="2025-09-10T00:09:46.778403253Z" level=info msg="StartContainer for \"55d2bfa4c768a1259356579bdaec327d86118c5e9fda4eb2e3035134b0d03bb5\"" Sep 10 00:09:46.806208 systemd[1]: Started cri-containerd-55d2bfa4c768a1259356579bdaec327d86118c5e9fda4eb2e3035134b0d03bb5.scope - libcontainer container 55d2bfa4c768a1259356579bdaec327d86118c5e9fda4eb2e3035134b0d03bb5. Sep 10 00:09:46.825026 systemd[1]: cri-containerd-55d2bfa4c768a1259356579bdaec327d86118c5e9fda4eb2e3035134b0d03bb5.scope: Deactivated successfully. 
Sep 10 00:09:46.825911 containerd[1439]: time="2025-09-10T00:09:46.825872579Z" level=info msg="StartContainer for \"55d2bfa4c768a1259356579bdaec327d86118c5e9fda4eb2e3035134b0d03bb5\" returns successfully" Sep 10 00:09:46.851389 containerd[1439]: time="2025-09-10T00:09:46.851190956Z" level=info msg="shim disconnected" id=55d2bfa4c768a1259356579bdaec327d86118c5e9fda4eb2e3035134b0d03bb5 namespace=k8s.io Sep 10 00:09:46.851389 containerd[1439]: time="2025-09-10T00:09:46.851246037Z" level=warning msg="cleaning up after shim disconnected" id=55d2bfa4c768a1259356579bdaec327d86118c5e9fda4eb2e3035134b0d03bb5 namespace=k8s.io Sep 10 00:09:46.851389 containerd[1439]: time="2025-09-10T00:09:46.851254278Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:09:47.548539 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55d2bfa4c768a1259356579bdaec327d86118c5e9fda4eb2e3035134b0d03bb5-rootfs.mount: Deactivated successfully. Sep 10 00:09:47.762442 kubelet[2450]: E0910 00:09:47.761863 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:47.765294 containerd[1439]: time="2025-09-10T00:09:47.765250420Z" level=info msg="CreateContainer within sandbox \"dfb751c88fa87c20a43d699c04df8341f8074711b967533ea37bc9b8544bf81a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 10 00:09:47.779518 containerd[1439]: time="2025-09-10T00:09:47.779288076Z" level=info msg="CreateContainer within sandbox \"dfb751c88fa87c20a43d699c04df8341f8074711b967533ea37bc9b8544bf81a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cac778727c5b7366b38ec4e5421af84d32c554678d58ac8a4de588274523f290\"" Sep 10 00:09:47.780167 containerd[1439]: time="2025-09-10T00:09:47.780137463Z" level=info msg="StartContainer for \"cac778727c5b7366b38ec4e5421af84d32c554678d58ac8a4de588274523f290\"" Sep 10 00:09:47.812214 
systemd[1]: Started cri-containerd-cac778727c5b7366b38ec4e5421af84d32c554678d58ac8a4de588274523f290.scope - libcontainer container cac778727c5b7366b38ec4e5421af84d32c554678d58ac8a4de588274523f290. Sep 10 00:09:47.839378 containerd[1439]: time="2025-09-10T00:09:47.839328864Z" level=info msg="StartContainer for \"cac778727c5b7366b38ec4e5421af84d32c554678d58ac8a4de588274523f290\" returns successfully" Sep 10 00:09:48.009386 kubelet[2450]: I0910 00:09:48.009342 2450 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 10 00:09:48.054778 systemd[1]: Created slice kubepods-burstable-pod8df6e08d_2a69_48ae_b32b_a505b5abdffb.slice - libcontainer container kubepods-burstable-pod8df6e08d_2a69_48ae_b32b_a505b5abdffb.slice. Sep 10 00:09:48.060507 systemd[1]: Created slice kubepods-burstable-pod61fcedba_157e_48be_bbb9_2522c83c3936.slice - libcontainer container kubepods-burstable-pod61fcedba_157e_48be_bbb9_2522c83c3936.slice. Sep 10 00:09:48.136759 kubelet[2450]: I0910 00:09:48.136641 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61fcedba-157e-48be-bbb9-2522c83c3936-config-volume\") pod \"coredns-7c65d6cfc9-6flq6\" (UID: \"61fcedba-157e-48be-bbb9-2522c83c3936\") " pod="kube-system/coredns-7c65d6cfc9-6flq6" Sep 10 00:09:48.136759 kubelet[2450]: I0910 00:09:48.136689 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd5s5\" (UniqueName: \"kubernetes.io/projected/61fcedba-157e-48be-bbb9-2522c83c3936-kube-api-access-qd5s5\") pod \"coredns-7c65d6cfc9-6flq6\" (UID: \"61fcedba-157e-48be-bbb9-2522c83c3936\") " pod="kube-system/coredns-7c65d6cfc9-6flq6" Sep 10 00:09:48.136759 kubelet[2450]: I0910 00:09:48.136710 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/8df6e08d-2a69-48ae-b32b-a505b5abdffb-config-volume\") pod \"coredns-7c65d6cfc9-9jxm2\" (UID: \"8df6e08d-2a69-48ae-b32b-a505b5abdffb\") " pod="kube-system/coredns-7c65d6cfc9-9jxm2" Sep 10 00:09:48.136759 kubelet[2450]: I0910 00:09:48.136727 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x68hs\" (UniqueName: \"kubernetes.io/projected/8df6e08d-2a69-48ae-b32b-a505b5abdffb-kube-api-access-x68hs\") pod \"coredns-7c65d6cfc9-9jxm2\" (UID: \"8df6e08d-2a69-48ae-b32b-a505b5abdffb\") " pod="kube-system/coredns-7c65d6cfc9-9jxm2" Sep 10 00:09:48.358684 kubelet[2450]: E0910 00:09:48.358633 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:48.359672 containerd[1439]: time="2025-09-10T00:09:48.359628913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9jxm2,Uid:8df6e08d-2a69-48ae-b32b-a505b5abdffb,Namespace:kube-system,Attempt:0,}" Sep 10 00:09:48.363538 kubelet[2450]: E0910 00:09:48.363504 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:48.364006 containerd[1439]: time="2025-09-10T00:09:48.363976328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6flq6,Uid:61fcedba-157e-48be-bbb9-2522c83c3936,Namespace:kube-system,Attempt:0,}" Sep 10 00:09:48.766037 kubelet[2450]: E0910 00:09:48.766009 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:48.783283 kubelet[2450]: I0910 00:09:48.782824 2450 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bztq2" 
podStartSLOduration=6.26765153 podStartE2EDuration="21.782809648s" podCreationTimestamp="2025-09-10 00:09:27 +0000 UTC" firstStartedPulling="2025-09-10 00:09:28.00383052 +0000 UTC m=+6.438320758" lastFinishedPulling="2025-09-10 00:09:43.518988638 +0000 UTC m=+21.953478876" observedRunningTime="2025-09-10 00:09:48.782725485 +0000 UTC m=+27.217215723" watchObservedRunningTime="2025-09-10 00:09:48.782809648 +0000 UTC m=+27.217299886" Sep 10 00:09:49.767713 kubelet[2450]: E0910 00:09:49.767649 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:49.855645 systemd-networkd[1370]: cilium_host: Link UP Sep 10 00:09:49.855767 systemd-networkd[1370]: cilium_net: Link UP Sep 10 00:09:49.855771 systemd-networkd[1370]: cilium_net: Gained carrier Sep 10 00:09:49.855896 systemd-networkd[1370]: cilium_host: Gained carrier Sep 10 00:09:49.856030 systemd-networkd[1370]: cilium_host: Gained IPv6LL Sep 10 00:09:49.939924 systemd-networkd[1370]: cilium_vxlan: Link UP Sep 10 00:09:49.939932 systemd-networkd[1370]: cilium_vxlan: Gained carrier Sep 10 00:09:50.202072 kernel: NET: Registered PF_ALG protocol family Sep 10 00:09:50.405242 systemd-networkd[1370]: cilium_net: Gained IPv6LL Sep 10 00:09:50.770636 kubelet[2450]: E0910 00:09:50.770542 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:50.792442 systemd-networkd[1370]: lxc_health: Link UP Sep 10 00:09:50.809144 systemd-networkd[1370]: lxc_health: Gained carrier Sep 10 00:09:50.918527 systemd-networkd[1370]: lxcdbb41f27f7a6: Link UP Sep 10 00:09:50.929165 kernel: eth0: renamed from tmpeb392 Sep 10 00:09:50.947084 kernel: eth0: renamed from tmp33a1d Sep 10 00:09:50.954286 systemd-networkd[1370]: tmp33a1d: Configuring with 
/usr/lib/systemd/network/zz-default.network. Sep 10 00:09:50.954370 systemd-networkd[1370]: tmp33a1d: Cannot enable IPv6, ignoring: No such file or directory Sep 10 00:09:50.954400 systemd-networkd[1370]: tmp33a1d: Cannot configure IPv6 privacy extensions for interface, ignoring: No such file or directory Sep 10 00:09:50.954411 systemd-networkd[1370]: tmp33a1d: Cannot disable kernel IPv6 accept_ra for interface, ignoring: No such file or directory Sep 10 00:09:50.954420 systemd-networkd[1370]: tmp33a1d: Cannot set IPv6 proxy NDP, ignoring: No such file or directory Sep 10 00:09:50.954433 systemd-networkd[1370]: tmp33a1d: Cannot enable promote_secondaries for interface, ignoring: No such file or directory Sep 10 00:09:50.954945 systemd-networkd[1370]: lxcdbb41f27f7a6: Gained carrier Sep 10 00:09:50.955128 systemd-networkd[1370]: lxcd92a96922f1e: Link UP Sep 10 00:09:50.955352 systemd-networkd[1370]: lxcd92a96922f1e: Gained carrier Sep 10 00:09:51.813236 systemd-networkd[1370]: cilium_vxlan: Gained IPv6LL Sep 10 00:09:51.927158 kubelet[2450]: E0910 00:09:51.926739 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:52.453187 systemd-networkd[1370]: lxcd92a96922f1e: Gained IPv6LL Sep 10 00:09:52.581204 systemd-networkd[1370]: lxcdbb41f27f7a6: Gained IPv6LL Sep 10 00:09:52.773451 kubelet[2450]: E0910 00:09:52.773346 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:52.774181 systemd-networkd[1370]: lxc_health: Gained IPv6LL Sep 10 00:09:53.707646 systemd[1]: Started sshd@7-10.0.0.85:22-10.0.0.1:36792.service - OpenSSH per-connection server daemon (10.0.0.1:36792). 
Sep 10 00:09:53.744593 sshd[3698]: Accepted publickey for core from 10.0.0.1 port 36792 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:09:53.745729 sshd[3698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:09:53.749279 systemd-logind[1418]: New session 8 of user core. Sep 10 00:09:53.755163 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 10 00:09:53.774985 kubelet[2450]: E0910 00:09:53.774956 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:53.877341 sshd[3698]: pam_unix(sshd:session): session closed for user core Sep 10 00:09:53.881283 systemd[1]: sshd@7-10.0.0.85:22-10.0.0.1:36792.service: Deactivated successfully. Sep 10 00:09:53.882990 systemd[1]: session-8.scope: Deactivated successfully. Sep 10 00:09:53.883608 systemd-logind[1418]: Session 8 logged out. Waiting for processes to exit. Sep 10 00:09:53.884316 systemd-logind[1418]: Removed session 8. Sep 10 00:09:54.582024 containerd[1439]: time="2025-09-10T00:09:54.581878576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:09:54.582024 containerd[1439]: time="2025-09-10T00:09:54.581939817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:09:54.582024 containerd[1439]: time="2025-09-10T00:09:54.581955298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:09:54.582622 containerd[1439]: time="2025-09-10T00:09:54.582516832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:09:54.589916 containerd[1439]: time="2025-09-10T00:09:54.589816933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:09:54.590882 containerd[1439]: time="2025-09-10T00:09:54.590348866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:09:54.591388 containerd[1439]: time="2025-09-10T00:09:54.590415948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:09:54.591388 containerd[1439]: time="2025-09-10T00:09:54.590513110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:09:54.607225 systemd[1]: Started cri-containerd-eb3923083bbd73333f22af02bcaac50fabb3365a0bc1c8941f15ab7c0c47514d.scope - libcontainer container eb3923083bbd73333f22af02bcaac50fabb3365a0bc1c8941f15ab7c0c47514d. Sep 10 00:09:54.612217 systemd[1]: Started cri-containerd-33a1df9dd352ae3f08e3cf55d5dc37d147540a71c6badbd38153f90b80048b12.scope - libcontainer container 33a1df9dd352ae3f08e3cf55d5dc37d147540a71c6badbd38153f90b80048b12. 
Sep 10 00:09:54.620416 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:09:54.624186 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:09:54.636858 containerd[1439]: time="2025-09-10T00:09:54.636818221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9jxm2,Uid:8df6e08d-2a69-48ae-b32b-a505b5abdffb,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb3923083bbd73333f22af02bcaac50fabb3365a0bc1c8941f15ab7c0c47514d\"" Sep 10 00:09:54.638905 kubelet[2450]: E0910 00:09:54.638871 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:54.642723 containerd[1439]: time="2025-09-10T00:09:54.642650925Z" level=info msg="CreateContainer within sandbox \"eb3923083bbd73333f22af02bcaac50fabb3365a0bc1c8941f15ab7c0c47514d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 00:09:54.648518 containerd[1439]: time="2025-09-10T00:09:54.648487510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6flq6,Uid:61fcedba-157e-48be-bbb9-2522c83c3936,Namespace:kube-system,Attempt:0,} returns sandbox id \"33a1df9dd352ae3f08e3cf55d5dc37d147540a71c6badbd38153f90b80048b12\"" Sep 10 00:09:54.649113 kubelet[2450]: E0910 00:09:54.649093 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:09:54.650824 containerd[1439]: time="2025-09-10T00:09:54.650778047Z" level=info msg="CreateContainer within sandbox \"33a1df9dd352ae3f08e3cf55d5dc37d147540a71c6badbd38153f90b80048b12\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 00:09:54.664036 containerd[1439]: 
time="2025-09-10T00:09:54.663992176Z" level=info msg="CreateContainer within sandbox \"eb3923083bbd73333f22af02bcaac50fabb3365a0bc1c8941f15ab7c0c47514d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"396dbb733485163bc7b0f358df54d3153183ff80ee575e2ec7a58222761cdd45\"" Sep 10 00:09:54.665921 containerd[1439]: time="2025-09-10T00:09:54.665875422Z" level=info msg="StartContainer for \"396dbb733485163bc7b0f358df54d3153183ff80ee575e2ec7a58222761cdd45\"" Sep 10 00:09:54.667275 containerd[1439]: time="2025-09-10T00:09:54.667234296Z" level=info msg="CreateContainer within sandbox \"33a1df9dd352ae3f08e3cf55d5dc37d147540a71c6badbd38153f90b80048b12\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f01a9c7342ab5f28171f79f1e7e4dced1ff9b23b64848caf4446cfab2f7b2261\"" Sep 10 00:09:54.668783 containerd[1439]: time="2025-09-10T00:09:54.668751854Z" level=info msg="StartContainer for \"f01a9c7342ab5f28171f79f1e7e4dced1ff9b23b64848caf4446cfab2f7b2261\"" Sep 10 00:09:54.695216 systemd[1]: Started cri-containerd-f01a9c7342ab5f28171f79f1e7e4dced1ff9b23b64848caf4446cfab2f7b2261.scope - libcontainer container f01a9c7342ab5f28171f79f1e7e4dced1ff9b23b64848caf4446cfab2f7b2261. Sep 10 00:09:54.698157 systemd[1]: Started cri-containerd-396dbb733485163bc7b0f358df54d3153183ff80ee575e2ec7a58222761cdd45.scope - libcontainer container 396dbb733485163bc7b0f358df54d3153183ff80ee575e2ec7a58222761cdd45. 
Sep 10 00:09:54.722718 containerd[1439]: time="2025-09-10T00:09:54.722587631Z" level=info msg="StartContainer for \"396dbb733485163bc7b0f358df54d3153183ff80ee575e2ec7a58222761cdd45\" returns successfully"
Sep 10 00:09:54.722718 containerd[1439]: time="2025-09-10T00:09:54.722584111Z" level=info msg="StartContainer for \"f01a9c7342ab5f28171f79f1e7e4dced1ff9b23b64848caf4446cfab2f7b2261\" returns successfully"
Sep 10 00:09:54.779010 kubelet[2450]: E0910 00:09:54.778584 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:09:54.783454 kubelet[2450]: E0910 00:09:54.783425 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:09:54.793160 kubelet[2450]: I0910 00:09:54.793059 2450 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-6flq6" podStartSLOduration=27.793027981 podStartE2EDuration="27.793027981s" podCreationTimestamp="2025-09-10 00:09:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:09:54.791511343 +0000 UTC m=+33.226001581" watchObservedRunningTime="2025-09-10 00:09:54.793027981 +0000 UTC m=+33.227518179"
Sep 10 00:09:54.805293 kubelet[2450]: I0910 00:09:54.805229 2450 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-9jxm2" podStartSLOduration=27.805209083 podStartE2EDuration="27.805209083s" podCreationTimestamp="2025-09-10 00:09:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:09:54.804080975 +0000 UTC m=+33.238571213" watchObservedRunningTime="2025-09-10 00:09:54.805209083 +0000 UTC m=+33.239699321"
Sep 10 00:09:55.588068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3477425791.mount: Deactivated successfully.
Sep 10 00:09:55.785379 kubelet[2450]: E0910 00:09:55.785335 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:09:55.785889 kubelet[2450]: E0910 00:09:55.785860 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:09:56.787365 kubelet[2450]: E0910 00:09:56.786722 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:09:56.787365 kubelet[2450]: E0910 00:09:56.787298 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:09:58.890590 systemd[1]: Started sshd@8-10.0.0.85:22-10.0.0.1:36804.service - OpenSSH per-connection server daemon (10.0.0.1:36804).
Sep 10 00:09:58.928323 sshd[3895]: Accepted publickey for core from 10.0.0.1 port 36804 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:09:58.929746 sshd[3895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:09:58.934263 systemd-logind[1418]: New session 9 of user core.
Sep 10 00:09:58.956204 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 10 00:09:59.067967 sshd[3895]: pam_unix(sshd:session): session closed for user core
Sep 10 00:09:59.071383 systemd[1]: sshd@8-10.0.0.85:22-10.0.0.1:36804.service: Deactivated successfully.
Sep 10 00:09:59.073626 systemd[1]: session-9.scope: Deactivated successfully.
Sep 10 00:09:59.074211 systemd-logind[1418]: Session 9 logged out. Waiting for processes to exit.
Sep 10 00:09:59.074910 systemd-logind[1418]: Removed session 9.
Sep 10 00:10:04.082620 systemd[1]: Started sshd@9-10.0.0.85:22-10.0.0.1:44308.service - OpenSSH per-connection server daemon (10.0.0.1:44308).
Sep 10 00:10:04.115929 sshd[3914]: Accepted publickey for core from 10.0.0.1 port 44308 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:10:04.117259 sshd[3914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:10:04.120942 systemd-logind[1418]: New session 10 of user core.
Sep 10 00:10:04.132216 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 10 00:10:04.242954 sshd[3914]: pam_unix(sshd:session): session closed for user core
Sep 10 00:10:04.248506 systemd[1]: sshd@9-10.0.0.85:22-10.0.0.1:44308.service: Deactivated successfully.
Sep 10 00:10:04.252275 systemd[1]: session-10.scope: Deactivated successfully.
Sep 10 00:10:04.252843 systemd-logind[1418]: Session 10 logged out. Waiting for processes to exit.
Sep 10 00:10:04.253685 systemd-logind[1418]: Removed session 10.
Sep 10 00:10:09.257019 systemd[1]: Started sshd@10-10.0.0.85:22-10.0.0.1:44320.service - OpenSSH per-connection server daemon (10.0.0.1:44320).
Sep 10 00:10:09.292488 sshd[3930]: Accepted publickey for core from 10.0.0.1 port 44320 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:10:09.293866 sshd[3930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:10:09.297644 systemd-logind[1418]: New session 11 of user core.
Sep 10 00:10:09.305241 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 10 00:10:09.438326 sshd[3930]: pam_unix(sshd:session): session closed for user core
Sep 10 00:10:09.450967 systemd[1]: sshd@10-10.0.0.85:22-10.0.0.1:44320.service: Deactivated successfully.
Sep 10 00:10:09.452477 systemd[1]: session-11.scope: Deactivated successfully.
Sep 10 00:10:09.454632 systemd-logind[1418]: Session 11 logged out. Waiting for processes to exit.
Sep 10 00:10:09.455105 systemd[1]: Started sshd@11-10.0.0.85:22-10.0.0.1:44334.service - OpenSSH per-connection server daemon (10.0.0.1:44334).
Sep 10 00:10:09.456482 systemd-logind[1418]: Removed session 11.
Sep 10 00:10:09.489502 sshd[3945]: Accepted publickey for core from 10.0.0.1 port 44334 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:10:09.490968 sshd[3945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:10:09.494878 systemd-logind[1418]: New session 12 of user core.
Sep 10 00:10:09.505222 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 10 00:10:09.667224 sshd[3945]: pam_unix(sshd:session): session closed for user core
Sep 10 00:10:09.682401 systemd[1]: sshd@11-10.0.0.85:22-10.0.0.1:44334.service: Deactivated successfully.
Sep 10 00:10:09.687725 systemd[1]: session-12.scope: Deactivated successfully.
Sep 10 00:10:09.690576 systemd-logind[1418]: Session 12 logged out. Waiting for processes to exit.
Sep 10 00:10:09.698911 systemd[1]: Started sshd@12-10.0.0.85:22-10.0.0.1:44344.service - OpenSSH per-connection server daemon (10.0.0.1:44344).
Sep 10 00:10:09.704444 systemd-logind[1418]: Removed session 12.
Sep 10 00:10:09.740078 sshd[3957]: Accepted publickey for core from 10.0.0.1 port 44344 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:10:09.740965 sshd[3957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:10:09.745023 systemd-logind[1418]: New session 13 of user core.
Sep 10 00:10:09.757277 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 10 00:10:09.874877 sshd[3957]: pam_unix(sshd:session): session closed for user core
Sep 10 00:10:09.878445 systemd-logind[1418]: Session 13 logged out. Waiting for processes to exit.
Sep 10 00:10:09.878691 systemd[1]: sshd@12-10.0.0.85:22-10.0.0.1:44344.service: Deactivated successfully.
Sep 10 00:10:09.881475 systemd[1]: session-13.scope: Deactivated successfully.
Sep 10 00:10:09.882472 systemd-logind[1418]: Removed session 13.
Sep 10 00:10:14.889624 systemd[1]: Started sshd@13-10.0.0.85:22-10.0.0.1:37328.service - OpenSSH per-connection server daemon (10.0.0.1:37328).
Sep 10 00:10:14.922478 sshd[3971]: Accepted publickey for core from 10.0.0.1 port 37328 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:10:14.923780 sshd[3971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:10:14.927476 systemd-logind[1418]: New session 14 of user core.
Sep 10 00:10:14.941232 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 10 00:10:15.049946 sshd[3971]: pam_unix(sshd:session): session closed for user core
Sep 10 00:10:15.052988 systemd-logind[1418]: Session 14 logged out. Waiting for processes to exit.
Sep 10 00:10:15.053343 systemd[1]: sshd@13-10.0.0.85:22-10.0.0.1:37328.service: Deactivated successfully.
Sep 10 00:10:15.055005 systemd[1]: session-14.scope: Deactivated successfully.
Sep 10 00:10:15.056578 systemd-logind[1418]: Removed session 14.
Sep 10 00:10:20.060493 systemd[1]: Started sshd@14-10.0.0.85:22-10.0.0.1:57994.service - OpenSSH per-connection server daemon (10.0.0.1:57994).
Sep 10 00:10:20.094303 sshd[3985]: Accepted publickey for core from 10.0.0.1 port 57994 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:10:20.095503 sshd[3985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:10:20.099717 systemd-logind[1418]: New session 15 of user core.
Sep 10 00:10:20.107210 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 10 00:10:20.218671 sshd[3985]: pam_unix(sshd:session): session closed for user core
Sep 10 00:10:20.227496 systemd[1]: sshd@14-10.0.0.85:22-10.0.0.1:57994.service: Deactivated successfully.
Sep 10 00:10:20.228926 systemd[1]: session-15.scope: Deactivated successfully.
Sep 10 00:10:20.231421 systemd-logind[1418]: Session 15 logged out. Waiting for processes to exit.
Sep 10 00:10:20.237480 systemd[1]: Started sshd@15-10.0.0.85:22-10.0.0.1:58002.service - OpenSSH per-connection server daemon (10.0.0.1:58002).
Sep 10 00:10:20.238842 systemd-logind[1418]: Removed session 15.
Sep 10 00:10:20.267269 sshd[4000]: Accepted publickey for core from 10.0.0.1 port 58002 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:10:20.268425 sshd[4000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:10:20.272635 systemd-logind[1418]: New session 16 of user core.
Sep 10 00:10:20.279273 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 10 00:10:20.472692 sshd[4000]: pam_unix(sshd:session): session closed for user core
Sep 10 00:10:20.487848 systemd[1]: sshd@15-10.0.0.85:22-10.0.0.1:58002.service: Deactivated successfully.
Sep 10 00:10:20.489433 systemd[1]: session-16.scope: Deactivated successfully.
Sep 10 00:10:20.491203 systemd-logind[1418]: Session 16 logged out. Waiting for processes to exit.
Sep 10 00:10:20.492193 systemd[1]: Started sshd@16-10.0.0.85:22-10.0.0.1:58004.service - OpenSSH per-connection server daemon (10.0.0.1:58004).
Sep 10 00:10:20.493564 systemd-logind[1418]: Removed session 16.
Sep 10 00:10:20.532915 sshd[4012]: Accepted publickey for core from 10.0.0.1 port 58004 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:10:20.534233 sshd[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:10:20.538225 systemd-logind[1418]: New session 17 of user core.
Sep 10 00:10:20.548192 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 10 00:10:21.827618 sshd[4012]: pam_unix(sshd:session): session closed for user core
Sep 10 00:10:21.836999 systemd[1]: sshd@16-10.0.0.85:22-10.0.0.1:58004.service: Deactivated successfully.
Sep 10 00:10:21.840711 systemd[1]: session-17.scope: Deactivated successfully.
Sep 10 00:10:21.842216 systemd-logind[1418]: Session 17 logged out. Waiting for processes to exit.
Sep 10 00:10:21.852352 systemd[1]: Started sshd@17-10.0.0.85:22-10.0.0.1:58016.service - OpenSSH per-connection server daemon (10.0.0.1:58016).
Sep 10 00:10:21.853361 systemd-logind[1418]: Removed session 17.
Sep 10 00:10:21.884662 sshd[4041]: Accepted publickey for core from 10.0.0.1 port 58016 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:10:21.886185 sshd[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:10:21.889765 systemd-logind[1418]: New session 18 of user core.
Sep 10 00:10:21.896187 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 10 00:10:22.113691 sshd[4041]: pam_unix(sshd:session): session closed for user core
Sep 10 00:10:22.122790 systemd[1]: sshd@17-10.0.0.85:22-10.0.0.1:58016.service: Deactivated successfully.
Sep 10 00:10:22.124469 systemd[1]: session-18.scope: Deactivated successfully.
Sep 10 00:10:22.126910 systemd-logind[1418]: Session 18 logged out. Waiting for processes to exit.
Sep 10 00:10:22.135569 systemd[1]: Started sshd@18-10.0.0.85:22-10.0.0.1:58024.service - OpenSSH per-connection server daemon (10.0.0.1:58024).
Sep 10 00:10:22.136507 systemd-logind[1418]: Removed session 18.
Sep 10 00:10:22.165050 sshd[4054]: Accepted publickey for core from 10.0.0.1 port 58024 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:10:22.167066 sshd[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:10:22.170885 systemd-logind[1418]: New session 19 of user core.
Sep 10 00:10:22.181206 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 10 00:10:22.288624 sshd[4054]: pam_unix(sshd:session): session closed for user core
Sep 10 00:10:22.291238 systemd[1]: sshd@18-10.0.0.85:22-10.0.0.1:58024.service: Deactivated successfully.
Sep 10 00:10:22.292700 systemd[1]: session-19.scope: Deactivated successfully.
Sep 10 00:10:22.293979 systemd-logind[1418]: Session 19 logged out. Waiting for processes to exit.
Sep 10 00:10:22.294886 systemd-logind[1418]: Removed session 19.
Sep 10 00:10:27.299683 systemd[1]: Started sshd@19-10.0.0.85:22-10.0.0.1:58036.service - OpenSSH per-connection server daemon (10.0.0.1:58036).
Sep 10 00:10:27.332672 sshd[4072]: Accepted publickey for core from 10.0.0.1 port 58036 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:10:27.333835 sshd[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:10:27.337642 systemd-logind[1418]: New session 20 of user core.
Sep 10 00:10:27.347172 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 10 00:10:27.452272 sshd[4072]: pam_unix(sshd:session): session closed for user core
Sep 10 00:10:27.455348 systemd[1]: sshd@19-10.0.0.85:22-10.0.0.1:58036.service: Deactivated successfully.
Sep 10 00:10:27.458589 systemd[1]: session-20.scope: Deactivated successfully.
Sep 10 00:10:27.459429 systemd-logind[1418]: Session 20 logged out. Waiting for processes to exit.
Sep 10 00:10:27.460360 systemd-logind[1418]: Removed session 20.
Sep 10 00:10:32.462778 systemd[1]: Started sshd@20-10.0.0.85:22-10.0.0.1:56096.service - OpenSSH per-connection server daemon (10.0.0.1:56096).
Sep 10 00:10:32.495247 sshd[4088]: Accepted publickey for core from 10.0.0.1 port 56096 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:10:32.496499 sshd[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:10:32.499923 systemd-logind[1418]: New session 21 of user core.
Sep 10 00:10:32.509189 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 10 00:10:32.614893 sshd[4088]: pam_unix(sshd:session): session closed for user core
Sep 10 00:10:32.618349 systemd[1]: sshd@20-10.0.0.85:22-10.0.0.1:56096.service: Deactivated successfully.
Sep 10 00:10:32.620276 systemd[1]: session-21.scope: Deactivated successfully.
Sep 10 00:10:32.620920 systemd-logind[1418]: Session 21 logged out. Waiting for processes to exit.
Sep 10 00:10:32.621611 systemd-logind[1418]: Removed session 21.
Sep 10 00:10:37.625645 systemd[1]: Started sshd@21-10.0.0.85:22-10.0.0.1:56102.service - OpenSSH per-connection server daemon (10.0.0.1:56102).
Sep 10 00:10:37.659699 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 56102 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:10:37.660505 sshd[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:10:37.663727 systemd-logind[1418]: New session 22 of user core.
Sep 10 00:10:37.677189 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 10 00:10:37.785022 sshd[4102]: pam_unix(sshd:session): session closed for user core
Sep 10 00:10:37.798467 systemd[1]: sshd@21-10.0.0.85:22-10.0.0.1:56102.service: Deactivated successfully.
Sep 10 00:10:37.799994 systemd[1]: session-22.scope: Deactivated successfully.
Sep 10 00:10:37.803072 systemd-logind[1418]: Session 22 logged out. Waiting for processes to exit.
Sep 10 00:10:37.812296 systemd[1]: Started sshd@22-10.0.0.85:22-10.0.0.1:56116.service - OpenSSH per-connection server daemon (10.0.0.1:56116).
Sep 10 00:10:37.813206 systemd-logind[1418]: Removed session 22.
Sep 10 00:10:37.841750 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 56116 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:10:37.842964 sshd[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:10:37.846220 systemd-logind[1418]: New session 23 of user core.
Sep 10 00:10:37.859187 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 10 00:10:38.643118 kubelet[2450]: E0910 00:10:38.643074 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:39.790264 containerd[1439]: time="2025-09-10T00:10:39.790227277Z" level=info msg="StopContainer for \"0e879f768fc2f6df2d6e18f0c0de9267e1e26ab5822b08b47f8e706863fed9af\" with timeout 30 (s)"
Sep 10 00:10:39.791255 containerd[1439]: time="2025-09-10T00:10:39.791230636Z" level=info msg="Stop container \"0e879f768fc2f6df2d6e18f0c0de9267e1e26ab5822b08b47f8e706863fed9af\" with signal terminated"
Sep 10 00:10:39.800960 systemd[1]: cri-containerd-0e879f768fc2f6df2d6e18f0c0de9267e1e26ab5822b08b47f8e706863fed9af.scope: Deactivated successfully.
Sep 10 00:10:39.816900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e879f768fc2f6df2d6e18f0c0de9267e1e26ab5822b08b47f8e706863fed9af-rootfs.mount: Deactivated successfully.
Sep 10 00:10:39.827528 containerd[1439]: time="2025-09-10T00:10:39.827466786Z" level=info msg="StopContainer for \"cac778727c5b7366b38ec4e5421af84d32c554678d58ac8a4de588274523f290\" with timeout 2 (s)"
Sep 10 00:10:39.827950 containerd[1439]: time="2025-09-10T00:10:39.827930426Z" level=info msg="Stop container \"cac778727c5b7366b38ec4e5421af84d32c554678d58ac8a4de588274523f290\" with signal terminated"
Sep 10 00:10:39.829471 containerd[1439]: time="2025-09-10T00:10:39.829426984Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 10 00:10:39.830173 containerd[1439]: time="2025-09-10T00:10:39.830129863Z" level=info msg="shim disconnected" id=0e879f768fc2f6df2d6e18f0c0de9267e1e26ab5822b08b47f8e706863fed9af namespace=k8s.io
Sep 10 00:10:39.830229 containerd[1439]: time="2025-09-10T00:10:39.830174543Z" level=warning msg="cleaning up after shim disconnected" id=0e879f768fc2f6df2d6e18f0c0de9267e1e26ab5822b08b47f8e706863fed9af namespace=k8s.io
Sep 10 00:10:39.830229 containerd[1439]: time="2025-09-10T00:10:39.830184143Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:10:39.833137 systemd-networkd[1370]: lxc_health: Link DOWN
Sep 10 00:10:39.833143 systemd-networkd[1370]: lxc_health: Lost carrier
Sep 10 00:10:39.865684 systemd[1]: cri-containerd-cac778727c5b7366b38ec4e5421af84d32c554678d58ac8a4de588274523f290.scope: Deactivated successfully.
Sep 10 00:10:39.865959 systemd[1]: cri-containerd-cac778727c5b7366b38ec4e5421af84d32c554678d58ac8a4de588274523f290.scope: Consumed 6.353s CPU time.
Sep 10 00:10:39.874438 containerd[1439]: time="2025-09-10T00:10:39.874397403Z" level=info msg="StopContainer for \"0e879f768fc2f6df2d6e18f0c0de9267e1e26ab5822b08b47f8e706863fed9af\" returns successfully"
Sep 10 00:10:39.874976 containerd[1439]: time="2025-09-10T00:10:39.874951802Z" level=info msg="StopPodSandbox for \"e3a646ca5f8591900dd0f9d184bcd79880d5011dcd03ea27a855d44e54acffe3\""
Sep 10 00:10:39.875056 containerd[1439]: time="2025-09-10T00:10:39.874987682Z" level=info msg="Container to stop \"0e879f768fc2f6df2d6e18f0c0de9267e1e26ab5822b08b47f8e706863fed9af\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:10:39.876858 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e3a646ca5f8591900dd0f9d184bcd79880d5011dcd03ea27a855d44e54acffe3-shm.mount: Deactivated successfully.
Sep 10 00:10:39.882078 systemd[1]: cri-containerd-e3a646ca5f8591900dd0f9d184bcd79880d5011dcd03ea27a855d44e54acffe3.scope: Deactivated successfully.
Sep 10 00:10:39.893842 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cac778727c5b7366b38ec4e5421af84d32c554678d58ac8a4de588274523f290-rootfs.mount: Deactivated successfully.
Sep 10 00:10:39.899634 containerd[1439]: time="2025-09-10T00:10:39.899583689Z" level=info msg="shim disconnected" id=cac778727c5b7366b38ec4e5421af84d32c554678d58ac8a4de588274523f290 namespace=k8s.io
Sep 10 00:10:39.899634 containerd[1439]: time="2025-09-10T00:10:39.899632689Z" level=warning msg="cleaning up after shim disconnected" id=cac778727c5b7366b38ec4e5421af84d32c554678d58ac8a4de588274523f290 namespace=k8s.io
Sep 10 00:10:39.899807 containerd[1439]: time="2025-09-10T00:10:39.899643489Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:10:39.902284 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3a646ca5f8591900dd0f9d184bcd79880d5011dcd03ea27a855d44e54acffe3-rootfs.mount: Deactivated successfully.
Sep 10 00:10:39.906133 containerd[1439]: time="2025-09-10T00:10:39.906081480Z" level=info msg="shim disconnected" id=e3a646ca5f8591900dd0f9d184bcd79880d5011dcd03ea27a855d44e54acffe3 namespace=k8s.io
Sep 10 00:10:39.906133 containerd[1439]: time="2025-09-10T00:10:39.906130760Z" level=warning msg="cleaning up after shim disconnected" id=e3a646ca5f8591900dd0f9d184bcd79880d5011dcd03ea27a855d44e54acffe3 namespace=k8s.io
Sep 10 00:10:39.906298 containerd[1439]: time="2025-09-10T00:10:39.906139120Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:10:39.921611 containerd[1439]: time="2025-09-10T00:10:39.921510459Z" level=info msg="StopContainer for \"cac778727c5b7366b38ec4e5421af84d32c554678d58ac8a4de588274523f290\" returns successfully"
Sep 10 00:10:39.922057 containerd[1439]: time="2025-09-10T00:10:39.922024858Z" level=info msg="StopPodSandbox for \"dfb751c88fa87c20a43d699c04df8341f8074711b967533ea37bc9b8544bf81a\""
Sep 10 00:10:39.922111 containerd[1439]: time="2025-09-10T00:10:39.922072698Z" level=info msg="Container to stop \"55d2bfa4c768a1259356579bdaec327d86118c5e9fda4eb2e3035134b0d03bb5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:10:39.922111 containerd[1439]: time="2025-09-10T00:10:39.922085898Z" level=info msg="Container to stop \"cac778727c5b7366b38ec4e5421af84d32c554678d58ac8a4de588274523f290\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:10:39.922111 containerd[1439]: time="2025-09-10T00:10:39.922096938Z" level=info msg="Container to stop \"c223da056b61abf3a5e627776db8f469f1ae0e62c951ee8176c2e95243c73bdd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:10:39.922111 containerd[1439]: time="2025-09-10T00:10:39.922108298Z" level=info msg="Container to stop \"8ed75de5c34083ae4f14fdddd2bcb438522dfe8e270bfcfc6b5be37ed0b237e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:10:39.922222 containerd[1439]: time="2025-09-10T00:10:39.922118058Z" level=info msg="Container to stop \"26db23b75f6f4f01cbe8614e511e9e1068c1916ab435e021d75b1a922f6e19bb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:10:39.932219 containerd[1439]: time="2025-09-10T00:10:39.932095605Z" level=info msg="TearDown network for sandbox \"e3a646ca5f8591900dd0f9d184bcd79880d5011dcd03ea27a855d44e54acffe3\" successfully"
Sep 10 00:10:39.932219 containerd[1439]: time="2025-09-10T00:10:39.932119685Z" level=info msg="StopPodSandbox for \"e3a646ca5f8591900dd0f9d184bcd79880d5011dcd03ea27a855d44e54acffe3\" returns successfully"
Sep 10 00:10:39.935417 systemd[1]: cri-containerd-dfb751c88fa87c20a43d699c04df8341f8074711b967533ea37bc9b8544bf81a.scope: Deactivated successfully.
Sep 10 00:10:39.958299 containerd[1439]: time="2025-09-10T00:10:39.958232449Z" level=info msg="shim disconnected" id=dfb751c88fa87c20a43d699c04df8341f8074711b967533ea37bc9b8544bf81a namespace=k8s.io
Sep 10 00:10:39.958299 containerd[1439]: time="2025-09-10T00:10:39.958288769Z" level=warning msg="cleaning up after shim disconnected" id=dfb751c88fa87c20a43d699c04df8341f8074711b967533ea37bc9b8544bf81a namespace=k8s.io
Sep 10 00:10:39.958299 containerd[1439]: time="2025-09-10T00:10:39.958301489Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:10:39.968236 containerd[1439]: time="2025-09-10T00:10:39.968190236Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:10:39Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 10 00:10:39.969105 containerd[1439]: time="2025-09-10T00:10:39.969075314Z" level=info msg="TearDown network for sandbox \"dfb751c88fa87c20a43d699c04df8341f8074711b967533ea37bc9b8544bf81a\" successfully"
Sep 10 00:10:39.969158 containerd[1439]: time="2025-09-10T00:10:39.969107274Z" level=info msg="StopPodSandbox for \"dfb751c88fa87c20a43d699c04df8341f8074711b967533ea37bc9b8544bf81a\" returns successfully"
Sep 10 00:10:40.046890 kubelet[2450]: I0910 00:10:40.046783 2450 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-etc-cni-netd\") pod \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") "
Sep 10 00:10:40.046890 kubelet[2450]: I0910 00:10:40.046826 2450 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-cilium-config-path\") pod \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") "
Sep 10 00:10:40.046890 kubelet[2450]: I0910 00:10:40.046844 2450 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-hostproc\") pod \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") "
Sep 10 00:10:40.046890 kubelet[2450]: I0910 00:10:40.046867 2450 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-hubble-tls\") pod \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") "
Sep 10 00:10:40.046890 kubelet[2450]: I0910 00:10:40.046883 2450 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-host-proc-sys-net\") pod \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") "
Sep 10 00:10:40.047408 kubelet[2450]: I0910 00:10:40.046899 2450 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l88s9\" (UniqueName: \"kubernetes.io/projected/64c2349a-5daa-47dd-b860-8667a549ed85-kube-api-access-l88s9\") pod \"64c2349a-5daa-47dd-b860-8667a549ed85\" (UID: \"64c2349a-5daa-47dd-b860-8667a549ed85\") "
Sep 10 00:10:40.047408 kubelet[2450]: I0910 00:10:40.046917 2450 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nh5d5\" (UniqueName: \"kubernetes.io/projected/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-kube-api-access-nh5d5\") pod \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") "
Sep 10 00:10:40.047408 kubelet[2450]: I0910 00:10:40.046931 2450 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-host-proc-sys-kernel\") pod \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") "
Sep 10 00:10:40.047408 kubelet[2450]: I0910 00:10:40.046947 2450 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-clustermesh-secrets\") pod \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") "
Sep 10 00:10:40.047408 kubelet[2450]: I0910 00:10:40.046962 2450 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/64c2349a-5daa-47dd-b860-8667a549ed85-cilium-config-path\") pod \"64c2349a-5daa-47dd-b860-8667a549ed85\" (UID: \"64c2349a-5daa-47dd-b860-8667a549ed85\") "
Sep 10 00:10:40.047408 kubelet[2450]: I0910 00:10:40.047002 2450 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-cilium-run\") pod \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") "
Sep 10 00:10:40.047570 kubelet[2450]: I0910 00:10:40.047017 2450 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-cni-path\") pod \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") "
Sep 10 00:10:40.047570 kubelet[2450]: I0910 00:10:40.047032 2450 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-bpf-maps\") pod \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") "
Sep 10 00:10:40.047570 kubelet[2450]: I0910 00:10:40.047063 2450 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-lib-modules\") pod \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") "
Sep 10 00:10:40.047570 kubelet[2450]: I0910 00:10:40.047080 2450 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-cilium-cgroup\") pod \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") "
Sep 10 00:10:40.047570 kubelet[2450]: I0910 00:10:40.047094 2450 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-xtables-lock\") pod \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\" (UID: \"ed374fb3-8398-4da3-9ad3-df1de07b0c9d\") "
Sep 10 00:10:40.053007 kubelet[2450]: I0910 00:10:40.052575 2450 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-cni-path" (OuterVolumeSpecName: "cni-path") pod "ed374fb3-8398-4da3-9ad3-df1de07b0c9d" (UID: "ed374fb3-8398-4da3-9ad3-df1de07b0c9d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:10:40.053007 kubelet[2450]: I0910 00:10:40.052805 2450 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ed374fb3-8398-4da3-9ad3-df1de07b0c9d" (UID: "ed374fb3-8398-4da3-9ad3-df1de07b0c9d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:10:40.053007 kubelet[2450]: I0910 00:10:40.052842 2450 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ed374fb3-8398-4da3-9ad3-df1de07b0c9d" (UID: "ed374fb3-8398-4da3-9ad3-df1de07b0c9d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:10:40.053007 kubelet[2450]: I0910 00:10:40.052877 2450 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ed374fb3-8398-4da3-9ad3-df1de07b0c9d" (UID: "ed374fb3-8398-4da3-9ad3-df1de07b0c9d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:10:40.053007 kubelet[2450]: I0910 00:10:40.052905 2450 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ed374fb3-8398-4da3-9ad3-df1de07b0c9d" (UID: "ed374fb3-8398-4da3-9ad3-df1de07b0c9d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:10:40.053207 kubelet[2450]: I0910 00:10:40.052923 2450 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ed374fb3-8398-4da3-9ad3-df1de07b0c9d" (UID: "ed374fb3-8398-4da3-9ad3-df1de07b0c9d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:10:40.053207 kubelet[2450]: I0910 00:10:40.052939 2450 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ed374fb3-8398-4da3-9ad3-df1de07b0c9d" (UID: "ed374fb3-8398-4da3-9ad3-df1de07b0c9d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:10:40.055374 kubelet[2450]: I0910 00:10:40.055338 2450 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ed374fb3-8398-4da3-9ad3-df1de07b0c9d" (UID: "ed374fb3-8398-4da3-9ad3-df1de07b0c9d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:10:40.055374 kubelet[2450]: I0910 00:10:40.055383 2450 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ed374fb3-8398-4da3-9ad3-df1de07b0c9d" (UID: "ed374fb3-8398-4da3-9ad3-df1de07b0c9d"). InnerVolumeSpecName "host-proc-sys-net".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:10:40.056867 kubelet[2450]: I0910 00:10:40.056835 2450 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64c2349a-5daa-47dd-b860-8667a549ed85-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "64c2349a-5daa-47dd-b860-8667a549ed85" (UID: "64c2349a-5daa-47dd-b860-8667a549ed85"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 10 00:10:40.057572 kubelet[2450]: I0910 00:10:40.057537 2450 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ed374fb3-8398-4da3-9ad3-df1de07b0c9d" (UID: "ed374fb3-8398-4da3-9ad3-df1de07b0c9d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 00:10:40.057643 kubelet[2450]: I0910 00:10:40.057426 2450 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ed374fb3-8398-4da3-9ad3-df1de07b0c9d" (UID: "ed374fb3-8398-4da3-9ad3-df1de07b0c9d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 10 00:10:40.057674 kubelet[2450]: I0910 00:10:40.057657 2450 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-hostproc" (OuterVolumeSpecName: "hostproc") pod "ed374fb3-8398-4da3-9ad3-df1de07b0c9d" (UID: "ed374fb3-8398-4da3-9ad3-df1de07b0c9d"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:10:40.057774 kubelet[2450]: I0910 00:10:40.057737 2450 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64c2349a-5daa-47dd-b860-8667a549ed85-kube-api-access-l88s9" (OuterVolumeSpecName: "kube-api-access-l88s9") pod "64c2349a-5daa-47dd-b860-8667a549ed85" (UID: "64c2349a-5daa-47dd-b860-8667a549ed85"). InnerVolumeSpecName "kube-api-access-l88s9". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 00:10:40.059065 kubelet[2450]: I0910 00:10:40.059024 2450 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-kube-api-access-nh5d5" (OuterVolumeSpecName: "kube-api-access-nh5d5") pod "ed374fb3-8398-4da3-9ad3-df1de07b0c9d" (UID: "ed374fb3-8398-4da3-9ad3-df1de07b0c9d"). InnerVolumeSpecName "kube-api-access-nh5d5". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 00:10:40.059487 kubelet[2450]: I0910 00:10:40.059462 2450 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ed374fb3-8398-4da3-9ad3-df1de07b0c9d" (UID: "ed374fb3-8398-4da3-9ad3-df1de07b0c9d"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 10 00:10:40.147754 kubelet[2450]: I0910 00:10:40.147716 2450 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 10 00:10:40.147754 kubelet[2450]: I0910 00:10:40.147750 2450 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:10:40.147754 kubelet[2450]: I0910 00:10:40.147761 2450 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 10 00:10:40.147905 kubelet[2450]: I0910 00:10:40.147770 2450 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 10 00:10:40.147905 kubelet[2450]: I0910 00:10:40.147778 2450 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 10 00:10:40.147905 kubelet[2450]: I0910 00:10:40.147786 2450 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 10 00:10:40.147905 kubelet[2450]: I0910 00:10:40.147795 2450 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 10 00:10:40.147905 kubelet[2450]: I0910 00:10:40.147802 2450 reconciler_common.go:293] 
"Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:10:40.147905 kubelet[2450]: I0910 00:10:40.147811 2450 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 10 00:10:40.147905 kubelet[2450]: I0910 00:10:40.147818 2450 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 10 00:10:40.147905 kubelet[2450]: I0910 00:10:40.147826 2450 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 10 00:10:40.148104 kubelet[2450]: I0910 00:10:40.147834 2450 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l88s9\" (UniqueName: \"kubernetes.io/projected/64c2349a-5daa-47dd-b860-8667a549ed85-kube-api-access-l88s9\") on node \"localhost\" DevicePath \"\"" Sep 10 00:10:40.148104 kubelet[2450]: I0910 00:10:40.147842 2450 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nh5d5\" (UniqueName: \"kubernetes.io/projected/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-kube-api-access-nh5d5\") on node \"localhost\" DevicePath \"\"" Sep 10 00:10:40.148104 kubelet[2450]: I0910 00:10:40.147849 2450 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 10 00:10:40.148104 kubelet[2450]: I0910 00:10:40.147857 2450 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/ed374fb3-8398-4da3-9ad3-df1de07b0c9d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 10 00:10:40.148104 kubelet[2450]: I0910 00:10:40.147865 2450 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/64c2349a-5daa-47dd-b860-8667a549ed85-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:10:40.805404 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfb751c88fa87c20a43d699c04df8341f8074711b967533ea37bc9b8544bf81a-rootfs.mount: Deactivated successfully. Sep 10 00:10:40.805505 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dfb751c88fa87c20a43d699c04df8341f8074711b967533ea37bc9b8544bf81a-shm.mount: Deactivated successfully. Sep 10 00:10:40.805559 systemd[1]: var-lib-kubelet-pods-64c2349a\x2d5daa\x2d47dd\x2db860\x2d8667a549ed85-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl88s9.mount: Deactivated successfully. Sep 10 00:10:40.805612 systemd[1]: var-lib-kubelet-pods-ed374fb3\x2d8398\x2d4da3\x2d9ad3\x2ddf1de07b0c9d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnh5d5.mount: Deactivated successfully. Sep 10 00:10:40.805666 systemd[1]: var-lib-kubelet-pods-ed374fb3\x2d8398\x2d4da3\x2d9ad3\x2ddf1de07b0c9d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 10 00:10:40.805715 systemd[1]: var-lib-kubelet-pods-ed374fb3\x2d8398\x2d4da3\x2d9ad3\x2ddf1de07b0c9d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 10 00:10:40.864081 kubelet[2450]: I0910 00:10:40.863904 2450 scope.go:117] "RemoveContainer" containerID="0e879f768fc2f6df2d6e18f0c0de9267e1e26ab5822b08b47f8e706863fed9af" Sep 10 00:10:40.865322 containerd[1439]: time="2025-09-10T00:10:40.865245195Z" level=info msg="RemoveContainer for \"0e879f768fc2f6df2d6e18f0c0de9267e1e26ab5822b08b47f8e706863fed9af\"" Sep 10 00:10:40.868917 containerd[1439]: time="2025-09-10T00:10:40.868874951Z" level=info msg="RemoveContainer for \"0e879f768fc2f6df2d6e18f0c0de9267e1e26ab5822b08b47f8e706863fed9af\" returns successfully" Sep 10 00:10:40.869181 kubelet[2450]: I0910 00:10:40.869124 2450 scope.go:117] "RemoveContainer" containerID="0e879f768fc2f6df2d6e18f0c0de9267e1e26ab5822b08b47f8e706863fed9af" Sep 10 00:10:40.869358 containerd[1439]: time="2025-09-10T00:10:40.869288351Z" level=error msg="ContainerStatus for \"0e879f768fc2f6df2d6e18f0c0de9267e1e26ab5822b08b47f8e706863fed9af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0e879f768fc2f6df2d6e18f0c0de9267e1e26ab5822b08b47f8e706863fed9af\": not found" Sep 10 00:10:40.869513 kubelet[2450]: E0910 00:10:40.869473 2450 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0e879f768fc2f6df2d6e18f0c0de9267e1e26ab5822b08b47f8e706863fed9af\": not found" containerID="0e879f768fc2f6df2d6e18f0c0de9267e1e26ab5822b08b47f8e706863fed9af" Sep 10 00:10:40.869625 kubelet[2450]: I0910 00:10:40.869508 2450 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0e879f768fc2f6df2d6e18f0c0de9267e1e26ab5822b08b47f8e706863fed9af"} err="failed to get container status \"0e879f768fc2f6df2d6e18f0c0de9267e1e26ab5822b08b47f8e706863fed9af\": rpc error: code = NotFound desc = an error occurred when try to find container \"0e879f768fc2f6df2d6e18f0c0de9267e1e26ab5822b08b47f8e706863fed9af\": not found" Sep 10 00:10:40.869625 
kubelet[2450]: I0910 00:10:40.869578 2450 scope.go:117] "RemoveContainer" containerID="cac778727c5b7366b38ec4e5421af84d32c554678d58ac8a4de588274523f290" Sep 10 00:10:40.870263 systemd[1]: Removed slice kubepods-besteffort-pod64c2349a_5daa_47dd_b860_8667a549ed85.slice - libcontainer container kubepods-besteffort-pod64c2349a_5daa_47dd_b860_8667a549ed85.slice. Sep 10 00:10:40.875010 containerd[1439]: time="2025-09-10T00:10:40.874962385Z" level=info msg="RemoveContainer for \"cac778727c5b7366b38ec4e5421af84d32c554678d58ac8a4de588274523f290\"" Sep 10 00:10:40.876012 systemd[1]: Removed slice kubepods-burstable-poded374fb3_8398_4da3_9ad3_df1de07b0c9d.slice - libcontainer container kubepods-burstable-poded374fb3_8398_4da3_9ad3_df1de07b0c9d.slice. Sep 10 00:10:40.876151 systemd[1]: kubepods-burstable-poded374fb3_8398_4da3_9ad3_df1de07b0c9d.slice: Consumed 6.428s CPU time. Sep 10 00:10:40.888262 containerd[1439]: time="2025-09-10T00:10:40.888067972Z" level=info msg="RemoveContainer for \"cac778727c5b7366b38ec4e5421af84d32c554678d58ac8a4de588274523f290\" returns successfully" Sep 10 00:10:40.889696 containerd[1439]: time="2025-09-10T00:10:40.889607250Z" level=info msg="RemoveContainer for \"55d2bfa4c768a1259356579bdaec327d86118c5e9fda4eb2e3035134b0d03bb5\"" Sep 10 00:10:40.889991 kubelet[2450]: I0910 00:10:40.888315 2450 scope.go:117] "RemoveContainer" containerID="55d2bfa4c768a1259356579bdaec327d86118c5e9fda4eb2e3035134b0d03bb5" Sep 10 00:10:40.892413 containerd[1439]: time="2025-09-10T00:10:40.891837208Z" level=info msg="RemoveContainer for \"55d2bfa4c768a1259356579bdaec327d86118c5e9fda4eb2e3035134b0d03bb5\" returns successfully" Sep 10 00:10:40.892474 kubelet[2450]: I0910 00:10:40.891998 2450 scope.go:117] "RemoveContainer" containerID="26db23b75f6f4f01cbe8614e511e9e1068c1916ab435e021d75b1a922f6e19bb" Sep 10 00:10:40.893178 containerd[1439]: time="2025-09-10T00:10:40.893127207Z" level=info msg="RemoveContainer for 
\"26db23b75f6f4f01cbe8614e511e9e1068c1916ab435e021d75b1a922f6e19bb\"" Sep 10 00:10:40.895954 containerd[1439]: time="2025-09-10T00:10:40.895912884Z" level=info msg="RemoveContainer for \"26db23b75f6f4f01cbe8614e511e9e1068c1916ab435e021d75b1a922f6e19bb\" returns successfully" Sep 10 00:10:40.896112 kubelet[2450]: I0910 00:10:40.896081 2450 scope.go:117] "RemoveContainer" containerID="8ed75de5c34083ae4f14fdddd2bcb438522dfe8e270bfcfc6b5be37ed0b237e3" Sep 10 00:10:40.897023 containerd[1439]: time="2025-09-10T00:10:40.896996683Z" level=info msg="RemoveContainer for \"8ed75de5c34083ae4f14fdddd2bcb438522dfe8e270bfcfc6b5be37ed0b237e3\"" Sep 10 00:10:40.898981 containerd[1439]: time="2025-09-10T00:10:40.898956761Z" level=info msg="RemoveContainer for \"8ed75de5c34083ae4f14fdddd2bcb438522dfe8e270bfcfc6b5be37ed0b237e3\" returns successfully" Sep 10 00:10:40.899136 kubelet[2450]: I0910 00:10:40.899108 2450 scope.go:117] "RemoveContainer" containerID="c223da056b61abf3a5e627776db8f469f1ae0e62c951ee8176c2e95243c73bdd" Sep 10 00:10:40.900005 containerd[1439]: time="2025-09-10T00:10:40.899980040Z" level=info msg="RemoveContainer for \"c223da056b61abf3a5e627776db8f469f1ae0e62c951ee8176c2e95243c73bdd\"" Sep 10 00:10:40.919912 containerd[1439]: time="2025-09-10T00:10:40.919871540Z" level=info msg="RemoveContainer for \"c223da056b61abf3a5e627776db8f469f1ae0e62c951ee8176c2e95243c73bdd\" returns successfully" Sep 10 00:10:40.920336 kubelet[2450]: I0910 00:10:40.920310 2450 scope.go:117] "RemoveContainer" containerID="cac778727c5b7366b38ec4e5421af84d32c554678d58ac8a4de588274523f290" Sep 10 00:10:40.921078 containerd[1439]: time="2025-09-10T00:10:40.920615219Z" level=error msg="ContainerStatus for \"cac778727c5b7366b38ec4e5421af84d32c554678d58ac8a4de588274523f290\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cac778727c5b7366b38ec4e5421af84d32c554678d58ac8a4de588274523f290\": not found" Sep 10 00:10:40.921078 containerd[1439]: 
time="2025-09-10T00:10:40.920987899Z" level=error msg="ContainerStatus for \"55d2bfa4c768a1259356579bdaec327d86118c5e9fda4eb2e3035134b0d03bb5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"55d2bfa4c768a1259356579bdaec327d86118c5e9fda4eb2e3035134b0d03bb5\": not found" Sep 10 00:10:40.921192 kubelet[2450]: E0910 00:10:40.920753 2450 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cac778727c5b7366b38ec4e5421af84d32c554678d58ac8a4de588274523f290\": not found" containerID="cac778727c5b7366b38ec4e5421af84d32c554678d58ac8a4de588274523f290" Sep 10 00:10:40.921192 kubelet[2450]: I0910 00:10:40.920787 2450 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cac778727c5b7366b38ec4e5421af84d32c554678d58ac8a4de588274523f290"} err="failed to get container status \"cac778727c5b7366b38ec4e5421af84d32c554678d58ac8a4de588274523f290\": rpc error: code = NotFound desc = an error occurred when try to find container \"cac778727c5b7366b38ec4e5421af84d32c554678d58ac8a4de588274523f290\": not found" Sep 10 00:10:40.921192 kubelet[2450]: I0910 00:10:40.920809 2450 scope.go:117] "RemoveContainer" containerID="55d2bfa4c768a1259356579bdaec327d86118c5e9fda4eb2e3035134b0d03bb5" Sep 10 00:10:40.921192 kubelet[2450]: E0910 00:10:40.921085 2450 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"55d2bfa4c768a1259356579bdaec327d86118c5e9fda4eb2e3035134b0d03bb5\": not found" containerID="55d2bfa4c768a1259356579bdaec327d86118c5e9fda4eb2e3035134b0d03bb5" Sep 10 00:10:40.921192 kubelet[2450]: I0910 00:10:40.921106 2450 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"55d2bfa4c768a1259356579bdaec327d86118c5e9fda4eb2e3035134b0d03bb5"} err="failed to get container status 
\"55d2bfa4c768a1259356579bdaec327d86118c5e9fda4eb2e3035134b0d03bb5\": rpc error: code = NotFound desc = an error occurred when try to find container \"55d2bfa4c768a1259356579bdaec327d86118c5e9fda4eb2e3035134b0d03bb5\": not found" Sep 10 00:10:40.921192 kubelet[2450]: I0910 00:10:40.921122 2450 scope.go:117] "RemoveContainer" containerID="26db23b75f6f4f01cbe8614e511e9e1068c1916ab435e021d75b1a922f6e19bb" Sep 10 00:10:40.921529 containerd[1439]: time="2025-09-10T00:10:40.921443618Z" level=error msg="ContainerStatus for \"26db23b75f6f4f01cbe8614e511e9e1068c1916ab435e021d75b1a922f6e19bb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"26db23b75f6f4f01cbe8614e511e9e1068c1916ab435e021d75b1a922f6e19bb\": not found" Sep 10 00:10:40.921635 kubelet[2450]: E0910 00:10:40.921554 2450 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"26db23b75f6f4f01cbe8614e511e9e1068c1916ab435e021d75b1a922f6e19bb\": not found" containerID="26db23b75f6f4f01cbe8614e511e9e1068c1916ab435e021d75b1a922f6e19bb" Sep 10 00:10:40.921635 kubelet[2450]: I0910 00:10:40.921573 2450 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"26db23b75f6f4f01cbe8614e511e9e1068c1916ab435e021d75b1a922f6e19bb"} err="failed to get container status \"26db23b75f6f4f01cbe8614e511e9e1068c1916ab435e021d75b1a922f6e19bb\": rpc error: code = NotFound desc = an error occurred when try to find container \"26db23b75f6f4f01cbe8614e511e9e1068c1916ab435e021d75b1a922f6e19bb\": not found" Sep 10 00:10:40.921635 kubelet[2450]: I0910 00:10:40.921593 2450 scope.go:117] "RemoveContainer" containerID="8ed75de5c34083ae4f14fdddd2bcb438522dfe8e270bfcfc6b5be37ed0b237e3" Sep 10 00:10:40.921975 containerd[1439]: time="2025-09-10T00:10:40.921734338Z" level=error msg="ContainerStatus for \"8ed75de5c34083ae4f14fdddd2bcb438522dfe8e270bfcfc6b5be37ed0b237e3\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"8ed75de5c34083ae4f14fdddd2bcb438522dfe8e270bfcfc6b5be37ed0b237e3\": not found" Sep 10 00:10:40.922049 kubelet[2450]: E0910 00:10:40.922024 2450 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8ed75de5c34083ae4f14fdddd2bcb438522dfe8e270bfcfc6b5be37ed0b237e3\": not found" containerID="8ed75de5c34083ae4f14fdddd2bcb438522dfe8e270bfcfc6b5be37ed0b237e3" Sep 10 00:10:40.922079 kubelet[2450]: I0910 00:10:40.922057 2450 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8ed75de5c34083ae4f14fdddd2bcb438522dfe8e270bfcfc6b5be37ed0b237e3"} err="failed to get container status \"8ed75de5c34083ae4f14fdddd2bcb438522dfe8e270bfcfc6b5be37ed0b237e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"8ed75de5c34083ae4f14fdddd2bcb438522dfe8e270bfcfc6b5be37ed0b237e3\": not found" Sep 10 00:10:40.922079 kubelet[2450]: I0910 00:10:40.922071 2450 scope.go:117] "RemoveContainer" containerID="c223da056b61abf3a5e627776db8f469f1ae0e62c951ee8176c2e95243c73bdd" Sep 10 00:10:40.922320 containerd[1439]: time="2025-09-10T00:10:40.922248217Z" level=error msg="ContainerStatus for \"c223da056b61abf3a5e627776db8f469f1ae0e62c951ee8176c2e95243c73bdd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c223da056b61abf3a5e627776db8f469f1ae0e62c951ee8176c2e95243c73bdd\": not found" Sep 10 00:10:40.922365 kubelet[2450]: E0910 00:10:40.922346 2450 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c223da056b61abf3a5e627776db8f469f1ae0e62c951ee8176c2e95243c73bdd\": not found" containerID="c223da056b61abf3a5e627776db8f469f1ae0e62c951ee8176c2e95243c73bdd" Sep 10 00:10:40.922441 kubelet[2450]: I0910 00:10:40.922370 2450 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c223da056b61abf3a5e627776db8f469f1ae0e62c951ee8176c2e95243c73bdd"} err="failed to get container status \"c223da056b61abf3a5e627776db8f469f1ae0e62c951ee8176c2e95243c73bdd\": rpc error: code = NotFound desc = an error occurred when try to find container \"c223da056b61abf3a5e627776db8f469f1ae0e62c951ee8176c2e95243c73bdd\": not found" Sep 10 00:10:41.642551 kubelet[2450]: E0910 00:10:41.642448 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:10:41.644767 kubelet[2450]: I0910 00:10:41.644717 2450 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64c2349a-5daa-47dd-b860-8667a549ed85" path="/var/lib/kubelet/pods/64c2349a-5daa-47dd-b860-8667a549ed85/volumes" Sep 10 00:10:41.645353 kubelet[2450]: I0910 00:10:41.645193 2450 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed374fb3-8398-4da3-9ad3-df1de07b0c9d" path="/var/lib/kubelet/pods/ed374fb3-8398-4da3-9ad3-df1de07b0c9d/volumes" Sep 10 00:10:41.703934 kubelet[2450]: E0910 00:10:41.703882 2450 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 10 00:10:41.755910 sshd[4116]: pam_unix(sshd:session): session closed for user core Sep 10 00:10:41.765629 systemd[1]: sshd@22-10.0.0.85:22-10.0.0.1:56116.service: Deactivated successfully. Sep 10 00:10:41.768245 systemd[1]: session-23.scope: Deactivated successfully. Sep 10 00:10:41.768397 systemd[1]: session-23.scope: Consumed 1.278s CPU time. Sep 10 00:10:41.769494 systemd-logind[1418]: Session 23 logged out. Waiting for processes to exit. 
Sep 10 00:10:41.780308 systemd[1]: Started sshd@23-10.0.0.85:22-10.0.0.1:40142.service - OpenSSH per-connection server daemon (10.0.0.1:40142). Sep 10 00:10:41.781226 systemd-logind[1418]: Removed session 23. Sep 10 00:10:41.809508 sshd[4279]: Accepted publickey for core from 10.0.0.1 port 40142 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:10:41.810813 sshd[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:10:41.814765 systemd-logind[1418]: New session 24 of user core. Sep 10 00:10:41.823193 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 10 00:10:42.556184 sshd[4279]: pam_unix(sshd:session): session closed for user core Sep 10 00:10:42.568975 systemd[1]: sshd@23-10.0.0.85:22-10.0.0.1:40142.service: Deactivated successfully. Sep 10 00:10:42.573637 systemd[1]: session-24.scope: Deactivated successfully. Sep 10 00:10:42.575183 systemd-logind[1418]: Session 24 logged out. Waiting for processes to exit. Sep 10 00:10:42.577808 systemd-logind[1418]: Removed session 24. Sep 10 00:10:42.589320 systemd[1]: Started sshd@24-10.0.0.85:22-10.0.0.1:40146.service - OpenSSH per-connection server daemon (10.0.0.1:40146). 
Sep 10 00:10:42.597918 kubelet[2450]: E0910 00:10:42.597883 2450 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ed374fb3-8398-4da3-9ad3-df1de07b0c9d" containerName="mount-bpf-fs" Sep 10 00:10:42.597918 kubelet[2450]: E0910 00:10:42.597913 2450 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ed374fb3-8398-4da3-9ad3-df1de07b0c9d" containerName="clean-cilium-state" Sep 10 00:10:42.597918 kubelet[2450]: E0910 00:10:42.597920 2450 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="64c2349a-5daa-47dd-b860-8667a549ed85" containerName="cilium-operator" Sep 10 00:10:42.597918 kubelet[2450]: E0910 00:10:42.597926 2450 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ed374fb3-8398-4da3-9ad3-df1de07b0c9d" containerName="mount-cgroup" Sep 10 00:10:42.598107 kubelet[2450]: E0910 00:10:42.597948 2450 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ed374fb3-8398-4da3-9ad3-df1de07b0c9d" containerName="apply-sysctl-overwrites" Sep 10 00:10:42.598107 kubelet[2450]: E0910 00:10:42.597956 2450 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ed374fb3-8398-4da3-9ad3-df1de07b0c9d" containerName="cilium-agent" Sep 10 00:10:42.598107 kubelet[2450]: I0910 00:10:42.597983 2450 memory_manager.go:354] "RemoveStaleState removing state" podUID="64c2349a-5daa-47dd-b860-8667a549ed85" containerName="cilium-operator" Sep 10 00:10:42.598107 kubelet[2450]: I0910 00:10:42.597989 2450 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed374fb3-8398-4da3-9ad3-df1de07b0c9d" containerName="cilium-agent" Sep 10 00:10:42.604629 systemd[1]: Created slice kubepods-burstable-pod61983c79_f4b9_4ed3_bb86_d22f37bd9666.slice - libcontainer container kubepods-burstable-pod61983c79_f4b9_4ed3_bb86_d22f37bd9666.slice. 
Sep 10 00:10:42.638008 sshd[4292]: Accepted publickey for core from 10.0.0.1 port 40146 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:10:42.640280 sshd[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:10:42.642408 kubelet[2450]: E0910 00:10:42.642384 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:42.647781 systemd-logind[1418]: New session 25 of user core.
Sep 10 00:10:42.654251 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 10 00:10:42.665063 kubelet[2450]: I0910 00:10:42.664767 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/61983c79-f4b9-4ed3-bb86-d22f37bd9666-hubble-tls\") pod \"cilium-jhrwh\" (UID: \"61983c79-f4b9-4ed3-bb86-d22f37bd9666\") " pod="kube-system/cilium-jhrwh"
Sep 10 00:10:42.665063 kubelet[2450]: I0910 00:10:42.664811 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb4z7\" (UniqueName: \"kubernetes.io/projected/61983c79-f4b9-4ed3-bb86-d22f37bd9666-kube-api-access-fb4z7\") pod \"cilium-jhrwh\" (UID: \"61983c79-f4b9-4ed3-bb86-d22f37bd9666\") " pod="kube-system/cilium-jhrwh"
Sep 10 00:10:42.665063 kubelet[2450]: I0910 00:10:42.664837 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/61983c79-f4b9-4ed3-bb86-d22f37bd9666-host-proc-sys-kernel\") pod \"cilium-jhrwh\" (UID: \"61983c79-f4b9-4ed3-bb86-d22f37bd9666\") " pod="kube-system/cilium-jhrwh"
Sep 10 00:10:42.665063 kubelet[2450]: I0910 00:10:42.664852 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/61983c79-f4b9-4ed3-bb86-d22f37bd9666-bpf-maps\") pod \"cilium-jhrwh\" (UID: \"61983c79-f4b9-4ed3-bb86-d22f37bd9666\") " pod="kube-system/cilium-jhrwh"
Sep 10 00:10:42.665063 kubelet[2450]: I0910 00:10:42.664869 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/61983c79-f4b9-4ed3-bb86-d22f37bd9666-cni-path\") pod \"cilium-jhrwh\" (UID: \"61983c79-f4b9-4ed3-bb86-d22f37bd9666\") " pod="kube-system/cilium-jhrwh"
Sep 10 00:10:42.665063 kubelet[2450]: I0910 00:10:42.664884 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/61983c79-f4b9-4ed3-bb86-d22f37bd9666-host-proc-sys-net\") pod \"cilium-jhrwh\" (UID: \"61983c79-f4b9-4ed3-bb86-d22f37bd9666\") " pod="kube-system/cilium-jhrwh"
Sep 10 00:10:42.665465 kubelet[2450]: I0910 00:10:42.664898 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/61983c79-f4b9-4ed3-bb86-d22f37bd9666-cilium-cgroup\") pod \"cilium-jhrwh\" (UID: \"61983c79-f4b9-4ed3-bb86-d22f37bd9666\") " pod="kube-system/cilium-jhrwh"
Sep 10 00:10:42.665465 kubelet[2450]: I0910 00:10:42.664917 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/61983c79-f4b9-4ed3-bb86-d22f37bd9666-etc-cni-netd\") pod \"cilium-jhrwh\" (UID: \"61983c79-f4b9-4ed3-bb86-d22f37bd9666\") " pod="kube-system/cilium-jhrwh"
Sep 10 00:10:42.665465 kubelet[2450]: I0910 00:10:42.664931 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61983c79-f4b9-4ed3-bb86-d22f37bd9666-lib-modules\") pod \"cilium-jhrwh\" (UID: \"61983c79-f4b9-4ed3-bb86-d22f37bd9666\") " pod="kube-system/cilium-jhrwh"
Sep 10 00:10:42.665465 kubelet[2450]: I0910 00:10:42.664945 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61983c79-f4b9-4ed3-bb86-d22f37bd9666-cilium-config-path\") pod \"cilium-jhrwh\" (UID: \"61983c79-f4b9-4ed3-bb86-d22f37bd9666\") " pod="kube-system/cilium-jhrwh"
Sep 10 00:10:42.665465 kubelet[2450]: I0910 00:10:42.664961 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/61983c79-f4b9-4ed3-bb86-d22f37bd9666-cilium-ipsec-secrets\") pod \"cilium-jhrwh\" (UID: \"61983c79-f4b9-4ed3-bb86-d22f37bd9666\") " pod="kube-system/cilium-jhrwh"
Sep 10 00:10:42.665465 kubelet[2450]: I0910 00:10:42.665000 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/61983c79-f4b9-4ed3-bb86-d22f37bd9666-cilium-run\") pod \"cilium-jhrwh\" (UID: \"61983c79-f4b9-4ed3-bb86-d22f37bd9666\") " pod="kube-system/cilium-jhrwh"
Sep 10 00:10:42.665588 kubelet[2450]: I0910 00:10:42.665018 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/61983c79-f4b9-4ed3-bb86-d22f37bd9666-xtables-lock\") pod \"cilium-jhrwh\" (UID: \"61983c79-f4b9-4ed3-bb86-d22f37bd9666\") " pod="kube-system/cilium-jhrwh"
Sep 10 00:10:42.665588 kubelet[2450]: I0910 00:10:42.665034 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/61983c79-f4b9-4ed3-bb86-d22f37bd9666-hostproc\") pod \"cilium-jhrwh\" (UID: \"61983c79-f4b9-4ed3-bb86-d22f37bd9666\") " pod="kube-system/cilium-jhrwh"
Sep 10 00:10:42.665588 kubelet[2450]: I0910 00:10:42.665071 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/61983c79-f4b9-4ed3-bb86-d22f37bd9666-clustermesh-secrets\") pod \"cilium-jhrwh\" (UID: \"61983c79-f4b9-4ed3-bb86-d22f37bd9666\") " pod="kube-system/cilium-jhrwh"
Sep 10 00:10:42.707174 sshd[4292]: pam_unix(sshd:session): session closed for user core
Sep 10 00:10:42.722719 systemd[1]: sshd@24-10.0.0.85:22-10.0.0.1:40146.service: Deactivated successfully.
Sep 10 00:10:42.724296 systemd[1]: session-25.scope: Deactivated successfully.
Sep 10 00:10:42.725600 systemd-logind[1418]: Session 25 logged out. Waiting for processes to exit.
Sep 10 00:10:42.740391 systemd[1]: Started sshd@25-10.0.0.85:22-10.0.0.1:40154.service - OpenSSH per-connection server daemon (10.0.0.1:40154).
Sep 10 00:10:42.741317 systemd-logind[1418]: Removed session 25.
Sep 10 00:10:42.770454 sshd[4300]: Accepted publickey for core from 10.0.0.1 port 40154 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:10:42.773366 sshd[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:10:42.784902 systemd-logind[1418]: New session 26 of user core.
Sep 10 00:10:42.794195 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 10 00:10:42.910346 kubelet[2450]: E0910 00:10:42.909444 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:42.911002 containerd[1439]: time="2025-09-10T00:10:42.910919371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jhrwh,Uid:61983c79-f4b9-4ed3-bb86-d22f37bd9666,Namespace:kube-system,Attempt:0,}"
Sep 10 00:10:42.933122 containerd[1439]: time="2025-09-10T00:10:42.932845923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:10:42.933122 containerd[1439]: time="2025-09-10T00:10:42.932895723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:10:42.933122 containerd[1439]: time="2025-09-10T00:10:42.932907203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:10:42.933122 containerd[1439]: time="2025-09-10T00:10:42.932983363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:10:42.959221 systemd[1]: Started cri-containerd-199a49c290a0b7c0915fdc3a5f715bc2d337ea88a40a08ccb2fd647affde4c5e.scope - libcontainer container 199a49c290a0b7c0915fdc3a5f715bc2d337ea88a40a08ccb2fd647affde4c5e.
Sep 10 00:10:42.977118 containerd[1439]: time="2025-09-10T00:10:42.976935868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jhrwh,Uid:61983c79-f4b9-4ed3-bb86-d22f37bd9666,Namespace:kube-system,Attempt:0,} returns sandbox id \"199a49c290a0b7c0915fdc3a5f715bc2d337ea88a40a08ccb2fd647affde4c5e\""
Sep 10 00:10:42.977692 kubelet[2450]: E0910 00:10:42.977672 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:42.979635 containerd[1439]: time="2025-09-10T00:10:42.979472947Z" level=info msg="CreateContainer within sandbox \"199a49c290a0b7c0915fdc3a5f715bc2d337ea88a40a08ccb2fd647affde4c5e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 10 00:10:42.989768 containerd[1439]: time="2025-09-10T00:10:42.989710503Z" level=info msg="CreateContainer within sandbox \"199a49c290a0b7c0915fdc3a5f715bc2d337ea88a40a08ccb2fd647affde4c5e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7dba749a0e57c9abdc2af53183b86e02f5e661247cf139233aa03f2195b7cfa2\""
Sep 10 00:10:42.993583 containerd[1439]: time="2025-09-10T00:10:42.992159062Z" level=info msg="StartContainer for \"7dba749a0e57c9abdc2af53183b86e02f5e661247cf139233aa03f2195b7cfa2\""
Sep 10 00:10:43.014190 systemd[1]: Started cri-containerd-7dba749a0e57c9abdc2af53183b86e02f5e661247cf139233aa03f2195b7cfa2.scope - libcontainer container 7dba749a0e57c9abdc2af53183b86e02f5e661247cf139233aa03f2195b7cfa2.
Sep 10 00:10:43.036025 containerd[1439]: time="2025-09-10T00:10:43.035973738Z" level=info msg="StartContainer for \"7dba749a0e57c9abdc2af53183b86e02f5e661247cf139233aa03f2195b7cfa2\" returns successfully"
Sep 10 00:10:43.045305 systemd[1]: cri-containerd-7dba749a0e57c9abdc2af53183b86e02f5e661247cf139233aa03f2195b7cfa2.scope: Deactivated successfully.
Sep 10 00:10:43.087949 containerd[1439]: time="2025-09-10T00:10:43.087817215Z" level=info msg="shim disconnected" id=7dba749a0e57c9abdc2af53183b86e02f5e661247cf139233aa03f2195b7cfa2 namespace=k8s.io
Sep 10 00:10:43.087949 containerd[1439]: time="2025-09-10T00:10:43.087877815Z" level=warning msg="cleaning up after shim disconnected" id=7dba749a0e57c9abdc2af53183b86e02f5e661247cf139233aa03f2195b7cfa2 namespace=k8s.io
Sep 10 00:10:43.087949 containerd[1439]: time="2025-09-10T00:10:43.087886175Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:10:43.130191 kubelet[2450]: I0910 00:10:43.130113 2450 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-10T00:10:43Z","lastTransitionTime":"2025-09-10T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 10 00:10:43.769885 systemd[1]: run-containerd-runc-k8s.io-199a49c290a0b7c0915fdc3a5f715bc2d337ea88a40a08ccb2fd647affde4c5e-runc.Z5IXYh.mount: Deactivated successfully.
Sep 10 00:10:43.880113 kubelet[2450]: E0910 00:10:43.879939 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:43.883340 containerd[1439]: time="2025-09-10T00:10:43.883060340Z" level=info msg="CreateContainer within sandbox \"199a49c290a0b7c0915fdc3a5f715bc2d337ea88a40a08ccb2fd647affde4c5e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 10 00:10:43.897733 containerd[1439]: time="2025-09-10T00:10:43.897661859Z" level=info msg="CreateContainer within sandbox \"199a49c290a0b7c0915fdc3a5f715bc2d337ea88a40a08ccb2fd647affde4c5e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"211586e167b7e299071b3ca75128bdc62b5c2adbcfe63945ff87a6cfe80811e2\""
Sep 10 00:10:43.898334 containerd[1439]: time="2025-09-10T00:10:43.898206699Z" level=info msg="StartContainer for \"211586e167b7e299071b3ca75128bdc62b5c2adbcfe63945ff87a6cfe80811e2\""
Sep 10 00:10:43.926211 systemd[1]: Started cri-containerd-211586e167b7e299071b3ca75128bdc62b5c2adbcfe63945ff87a6cfe80811e2.scope - libcontainer container 211586e167b7e299071b3ca75128bdc62b5c2adbcfe63945ff87a6cfe80811e2.
Sep 10 00:10:43.946264 containerd[1439]: time="2025-09-10T00:10:43.946191537Z" level=info msg="StartContainer for \"211586e167b7e299071b3ca75128bdc62b5c2adbcfe63945ff87a6cfe80811e2\" returns successfully"
Sep 10 00:10:43.951869 systemd[1]: cri-containerd-211586e167b7e299071b3ca75128bdc62b5c2adbcfe63945ff87a6cfe80811e2.scope: Deactivated successfully.
Sep 10 00:10:43.967419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-211586e167b7e299071b3ca75128bdc62b5c2adbcfe63945ff87a6cfe80811e2-rootfs.mount: Deactivated successfully.
Sep 10 00:10:43.973772 containerd[1439]: time="2025-09-10T00:10:43.973706816Z" level=info msg="shim disconnected" id=211586e167b7e299071b3ca75128bdc62b5c2adbcfe63945ff87a6cfe80811e2 namespace=k8s.io
Sep 10 00:10:43.973772 containerd[1439]: time="2025-09-10T00:10:43.973769816Z" level=warning msg="cleaning up after shim disconnected" id=211586e167b7e299071b3ca75128bdc62b5c2adbcfe63945ff87a6cfe80811e2 namespace=k8s.io
Sep 10 00:10:43.973937 containerd[1439]: time="2025-09-10T00:10:43.973778175Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:10:44.882617 kubelet[2450]: E0910 00:10:44.882575 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:44.885449 containerd[1439]: time="2025-09-10T00:10:44.885311682Z" level=info msg="CreateContainer within sandbox \"199a49c290a0b7c0915fdc3a5f715bc2d337ea88a40a08ccb2fd647affde4c5e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 10 00:10:44.899677 containerd[1439]: time="2025-09-10T00:10:44.899635886Z" level=info msg="CreateContainer within sandbox \"199a49c290a0b7c0915fdc3a5f715bc2d337ea88a40a08ccb2fd647affde4c5e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e0799da862d0f16cffce046bafa32dd81e8ef66d7fc850de0ea31eebc092ef89\""
Sep 10 00:10:44.900345 containerd[1439]: time="2025-09-10T00:10:44.900322086Z" level=info msg="StartContainer for \"e0799da862d0f16cffce046bafa32dd81e8ef66d7fc850de0ea31eebc092ef89\""
Sep 10 00:10:44.935181 systemd[1]: Started cri-containerd-e0799da862d0f16cffce046bafa32dd81e8ef66d7fc850de0ea31eebc092ef89.scope - libcontainer container e0799da862d0f16cffce046bafa32dd81e8ef66d7fc850de0ea31eebc092ef89.
Sep 10 00:10:44.958125 containerd[1439]: time="2025-09-10T00:10:44.957612341Z" level=info msg="StartContainer for \"e0799da862d0f16cffce046bafa32dd81e8ef66d7fc850de0ea31eebc092ef89\" returns successfully"
Sep 10 00:10:44.960506 systemd[1]: cri-containerd-e0799da862d0f16cffce046bafa32dd81e8ef66d7fc850de0ea31eebc092ef89.scope: Deactivated successfully.
Sep 10 00:10:44.979899 containerd[1439]: time="2025-09-10T00:10:44.979719827Z" level=info msg="shim disconnected" id=e0799da862d0f16cffce046bafa32dd81e8ef66d7fc850de0ea31eebc092ef89 namespace=k8s.io
Sep 10 00:10:44.979899 containerd[1439]: time="2025-09-10T00:10:44.979775507Z" level=warning msg="cleaning up after shim disconnected" id=e0799da862d0f16cffce046bafa32dd81e8ef66d7fc850de0ea31eebc092ef89 namespace=k8s.io
Sep 10 00:10:44.979899 containerd[1439]: time="2025-09-10T00:10:44.979783707Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:10:45.885865 kubelet[2450]: E0910 00:10:45.885822 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:45.888468 containerd[1439]: time="2025-09-10T00:10:45.888426441Z" level=info msg="CreateContainer within sandbox \"199a49c290a0b7c0915fdc3a5f715bc2d337ea88a40a08ccb2fd647affde4c5e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 10 00:10:45.894349 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0799da862d0f16cffce046bafa32dd81e8ef66d7fc850de0ea31eebc092ef89-rootfs.mount: Deactivated successfully.
Sep 10 00:10:45.904350 containerd[1439]: time="2025-09-10T00:10:45.904303890Z" level=info msg="CreateContainer within sandbox \"199a49c290a0b7c0915fdc3a5f715bc2d337ea88a40a08ccb2fd647affde4c5e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"72aeb1027c29231200ff3ad472031b760e7b787c7472734978b452c9d9c480f6\""
Sep 10 00:10:45.904936 containerd[1439]: time="2025-09-10T00:10:45.904889650Z" level=info msg="StartContainer for \"72aeb1027c29231200ff3ad472031b760e7b787c7472734978b452c9d9c480f6\""
Sep 10 00:10:45.929190 systemd[1]: Started cri-containerd-72aeb1027c29231200ff3ad472031b760e7b787c7472734978b452c9d9c480f6.scope - libcontainer container 72aeb1027c29231200ff3ad472031b760e7b787c7472734978b452c9d9c480f6.
Sep 10 00:10:45.947443 systemd[1]: cri-containerd-72aeb1027c29231200ff3ad472031b760e7b787c7472734978b452c9d9c480f6.scope: Deactivated successfully.
Sep 10 00:10:45.948500 containerd[1439]: time="2025-09-10T00:10:45.948397074Z" level=info msg="StartContainer for \"72aeb1027c29231200ff3ad472031b760e7b787c7472734978b452c9d9c480f6\" returns successfully"
Sep 10 00:10:45.965564 containerd[1439]: time="2025-09-10T00:10:45.965384644Z" level=info msg="shim disconnected" id=72aeb1027c29231200ff3ad472031b760e7b787c7472734978b452c9d9c480f6 namespace=k8s.io
Sep 10 00:10:45.965564 containerd[1439]: time="2025-09-10T00:10:45.965430644Z" level=warning msg="cleaning up after shim disconnected" id=72aeb1027c29231200ff3ad472031b760e7b787c7472734978b452c9d9c480f6 namespace=k8s.io
Sep 10 00:10:45.965564 containerd[1439]: time="2025-09-10T00:10:45.965439644Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:10:46.643032 kubelet[2450]: E0910 00:10:46.642687 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:46.705405 kubelet[2450]: E0910 00:10:46.705321 2450 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 10 00:10:46.889984 kubelet[2450]: E0910 00:10:46.889912 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:46.893641 containerd[1439]: time="2025-09-10T00:10:46.893444688Z" level=info msg="CreateContainer within sandbox \"199a49c290a0b7c0915fdc3a5f715bc2d337ea88a40a08ccb2fd647affde4c5e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 10 00:10:46.894856 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72aeb1027c29231200ff3ad472031b760e7b787c7472734978b452c9d9c480f6-rootfs.mount: Deactivated successfully.
Sep 10 00:10:46.911783 containerd[1439]: time="2025-09-10T00:10:46.911687663Z" level=info msg="CreateContainer within sandbox \"199a49c290a0b7c0915fdc3a5f715bc2d337ea88a40a08ccb2fd647affde4c5e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3ec55b001597a6fe17fd23823d6de6f935b81402ff72ebb04787233b3253be40\""
Sep 10 00:10:46.912427 containerd[1439]: time="2025-09-10T00:10:46.912399184Z" level=info msg="StartContainer for \"3ec55b001597a6fe17fd23823d6de6f935b81402ff72ebb04787233b3253be40\""
Sep 10 00:10:46.946182 systemd[1]: Started cri-containerd-3ec55b001597a6fe17fd23823d6de6f935b81402ff72ebb04787233b3253be40.scope - libcontainer container 3ec55b001597a6fe17fd23823d6de6f935b81402ff72ebb04787233b3253be40.
Sep 10 00:10:46.970031 containerd[1439]: time="2025-09-10T00:10:46.969982992Z" level=info msg="StartContainer for \"3ec55b001597a6fe17fd23823d6de6f935b81402ff72ebb04787233b3253be40\" returns successfully"
Sep 10 00:10:47.228056 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 10 00:10:47.894899 systemd[1]: run-containerd-runc-k8s.io-3ec55b001597a6fe17fd23823d6de6f935b81402ff72ebb04787233b3253be40-runc.9ty5t0.mount: Deactivated successfully.
Sep 10 00:10:47.896915 kubelet[2450]: E0910 00:10:47.896868 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:48.910841 kubelet[2450]: E0910 00:10:48.910796 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:49.265139 kubelet[2450]: E0910 00:10:49.265099 2450 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:42122->127.0.0.1:34185: write tcp 127.0.0.1:42122->127.0.0.1:34185: write: broken pipe
Sep 10 00:10:49.998149 systemd-networkd[1370]: lxc_health: Link UP
Sep 10 00:10:50.008507 systemd-networkd[1370]: lxc_health: Gained carrier
Sep 10 00:10:50.916924 kubelet[2450]: E0910 00:10:50.916795 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:50.935472 kubelet[2450]: I0910 00:10:50.935411 2450 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jhrwh" podStartSLOduration=8.935395063 podStartE2EDuration="8.935395063s" podCreationTimestamp="2025-09-10 00:10:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:10:47.913487591 +0000 UTC m=+86.347977829" watchObservedRunningTime="2025-09-10 00:10:50.935395063 +0000 UTC m=+89.369885301"
Sep 10 00:10:51.904138 kubelet[2450]: E0910 00:10:51.904088 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:51.909393 systemd-networkd[1370]: lxc_health: Gained IPv6LL
Sep 10 00:10:52.906597 kubelet[2450]: E0910 00:10:52.906554 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:10:55.612952 kubelet[2450]: E0910 00:10:55.612911 2450 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:42150->127.0.0.1:34185: write tcp 127.0.0.1:42150->127.0.0.1:34185: write: broken pipe
Sep 10 00:10:55.615813 sshd[4300]: pam_unix(sshd:session): session closed for user core
Sep 10 00:10:55.618569 systemd[1]: sshd@25-10.0.0.85:22-10.0.0.1:40154.service: Deactivated successfully.
Sep 10 00:10:55.620471 systemd[1]: session-26.scope: Deactivated successfully.
Sep 10 00:10:55.623378 systemd-logind[1418]: Session 26 logged out. Waiting for processes to exit.
Sep 10 00:10:55.624653 systemd-logind[1418]: Removed session 26.