Sep 4 16:09:20.764559 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 4 16:09:20.764582 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Sep 4 14:32:27 -00 2025 Sep 4 16:09:20.764590 kernel: KASLR enabled Sep 4 16:09:20.764596 kernel: efi: EFI v2.7 by EDK II Sep 4 16:09:20.764601 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Sep 4 16:09:20.764607 kernel: random: crng init done Sep 4 16:09:20.764614 kernel: secureboot: Secure boot disabled Sep 4 16:09:20.764620 kernel: ACPI: Early table checksum verification disabled Sep 4 16:09:20.764627 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Sep 4 16:09:20.764633 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 4 16:09:20.764639 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 16:09:20.764645 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 16:09:20.764651 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 16:09:20.764657 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 16:09:20.764666 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 16:09:20.764672 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 16:09:20.764679 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 16:09:20.764685 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 16:09:20.764692 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 16:09:20.764698 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 4 16:09:20.764704 kernel: ACPI: Use ACPI SPCR as default console: No Sep 4 16:09:20.764710 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 4 16:09:20.764718 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Sep 4 16:09:20.764724 kernel: Zone ranges: Sep 4 16:09:20.764730 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 4 16:09:20.764736 kernel: DMA32 empty Sep 4 16:09:20.764743 kernel: Normal empty Sep 4 16:09:20.764749 kernel: Device empty Sep 4 16:09:20.764755 kernel: Movable zone start for each node Sep 4 16:09:20.764761 kernel: Early memory node ranges Sep 4 16:09:20.764767 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Sep 4 16:09:20.764774 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Sep 4 16:09:20.764780 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Sep 4 16:09:20.764786 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Sep 4 16:09:20.764793 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Sep 4 16:09:20.764800 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Sep 4 16:09:20.764806 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Sep 4 16:09:20.764812 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Sep 4 16:09:20.764818 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Sep 4 16:09:20.764825 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Sep 4 16:09:20.764835 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Sep 4 16:09:20.764841 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Sep 4 16:09:20.764848 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Sep 4 16:09:20.764855 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 4 16:09:20.764862 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 4 16:09:20.764868 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Sep 4 16:09:20.764875 kernel: psci: probing for conduit method from ACPI.
Sep 4 16:09:20.764882 kernel: psci: PSCIv1.1 detected in firmware. Sep 4 16:09:20.764890 kernel: psci: Using standard PSCI v0.2 function IDs Sep 4 16:09:20.764896 kernel: psci: Trusted OS migration not required Sep 4 16:09:20.764903 kernel: psci: SMC Calling Convention v1.1 Sep 4 16:09:20.764910 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 4 16:09:20.764917 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Sep 4 16:09:20.764924 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Sep 4 16:09:20.764930 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 4 16:09:20.764937 kernel: Detected PIPT I-cache on CPU0 Sep 4 16:09:20.764944 kernel: CPU features: detected: GIC system register CPU interface Sep 4 16:09:20.764950 kernel: CPU features: detected: Spectre-v4 Sep 4 16:09:20.764957 kernel: CPU features: detected: Spectre-BHB Sep 4 16:09:20.764965 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 4 16:09:20.764972 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 4 16:09:20.764978 kernel: CPU features: detected: ARM erratum 1418040 Sep 4 16:09:20.764985 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 4 16:09:20.764992 kernel: alternatives: applying boot alternatives Sep 4 16:09:20.764999 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fa24154aac6dc1a5d38cdc5f4cdc1aea124b2960632298191d9d7d9a2320138a Sep 4 16:09:20.765006 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Sep 4 16:09:20.765013 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 16:09:20.765020 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 16:09:20.765027 kernel: Fallback order for Node 0: 0 Sep 4 16:09:20.765035 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Sep 4 16:09:20.765041 kernel: Policy zone: DMA Sep 4 16:09:20.765048 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 16:09:20.765055 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Sep 4 16:09:20.765061 kernel: software IO TLB: area num 4. Sep 4 16:09:20.765068 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Sep 4 16:09:20.765075 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Sep 4 16:09:20.765082 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 4 16:09:20.765088 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 16:09:20.765096 kernel: rcu: RCU event tracing is enabled. Sep 4 16:09:20.765103 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 4 16:09:20.765111 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 16:09:20.765118 kernel: Tracing variant of Tasks RCU enabled. Sep 4 16:09:20.765125 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 4 16:09:20.765131 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 4 16:09:20.765138 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 4 16:09:20.765145 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Sep 4 16:09:20.765152 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 4 16:09:20.765158 kernel: GICv3: 256 SPIs implemented Sep 4 16:09:20.765165 kernel: GICv3: 0 Extended SPIs implemented Sep 4 16:09:20.765172 kernel: Root IRQ handler: gic_handle_irq Sep 4 16:09:20.765178 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 4 16:09:20.765186 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Sep 4 16:09:20.765193 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 4 16:09:20.765200 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 4 16:09:20.765207 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Sep 4 16:09:20.765214 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Sep 4 16:09:20.765221 kernel: GICv3: using LPI property table @0x0000000040130000 Sep 4 16:09:20.765247 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Sep 4 16:09:20.765255 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 16:09:20.765261 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 16:09:20.765268 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 4 16:09:20.765275 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 4 16:09:20.765284 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 4 16:09:20.765291 kernel: arm-pv: using stolen time PV Sep 4 16:09:20.765298 kernel: Console: colour dummy device 80x25 Sep 4 16:09:20.765306 kernel: ACPI: Core revision 20240827 Sep 4 16:09:20.765313 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 4 16:09:20.765320 kernel: pid_max: default: 32768 minimum: 301 Sep 4 16:09:20.765327 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 4 16:09:20.765334 kernel: landlock: Up and running. Sep 4 16:09:20.765343 kernel: SELinux: Initializing. Sep 4 16:09:20.765350 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 16:09:20.765357 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 16:09:20.765364 kernel: rcu: Hierarchical SRCU implementation. Sep 4 16:09:20.765371 kernel: rcu: Max phase no-delay instances is 400. Sep 4 16:09:20.765379 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 4 16:09:20.765386 kernel: Remapping and enabling EFI services. Sep 4 16:09:20.765394 kernel: smp: Bringing up secondary CPUs ... Sep 4 16:09:20.765406 kernel: Detected PIPT I-cache on CPU1 Sep 4 16:09:20.765413 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 4 16:09:20.765422 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Sep 4 16:09:20.765429 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 16:09:20.765436 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 4 16:09:20.765444 kernel: Detected PIPT I-cache on CPU2 Sep 4 16:09:20.765452 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 4 16:09:20.765460 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Sep 4 16:09:20.765468 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 16:09:20.765475 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 4 16:09:20.765483 kernel: Detected PIPT I-cache on CPU3 Sep 4 16:09:20.765490 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 4 16:09:20.765498 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 4 16:09:20.765506 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 16:09:20.765513 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 4 16:09:20.765521 kernel: smp: Brought up 1 node, 4 CPUs Sep 4 16:09:20.765528 kernel: SMP: Total of 4 processors activated. Sep 4 16:09:20.765535 kernel: CPU: All CPU(s) started at EL1 Sep 4 16:09:20.765543 kernel: CPU features: detected: 32-bit EL0 Support Sep 4 16:09:20.765550 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 4 16:09:20.765559 kernel: CPU features: detected: Common not Private translations Sep 4 16:09:20.765566 kernel: CPU features: detected: CRC32 instructions Sep 4 16:09:20.765573 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 4 16:09:20.765581 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 4 16:09:20.765588 kernel: CPU features: detected: LSE atomic instructions Sep 4 16:09:20.765596 kernel: CPU features: detected: Privileged Access Never Sep 4 16:09:20.765603 kernel: CPU features: detected: RAS Extension Support Sep 4 16:09:20.765611 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 4 16:09:20.765619 kernel: alternatives: applying system-wide alternatives Sep 4 16:09:20.765626 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Sep 4 16:09:20.765634 kernel: Memory: 2424352K/2572288K available (11136K kernel code, 2436K rwdata, 9060K rodata, 39104K init, 1038K bss, 125600K reserved, 16384K cma-reserved) Sep 4 16:09:20.765642 kernel: devtmpfs: initialized Sep 4 16:09:20.765649 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 16:09:20.765657 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 4 16:09:20.765664 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 4 16:09:20.765672 kernel: 0 pages in range for non-PLT usage
Sep 4 16:09:20.765680 kernel: 508528 pages in range for PLT usage Sep 4 16:09:20.765687 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 16:09:20.765694 kernel: SMBIOS 3.0.0 present. Sep 4 16:09:20.765702 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Sep 4 16:09:20.765709 kernel: DMI: Memory slots populated: 1/1 Sep 4 16:09:20.765716 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 16:09:20.765725 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 4 16:09:20.765733 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 4 16:09:20.765741 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 4 16:09:20.765748 kernel: audit: initializing netlink subsys (disabled) Sep 4 16:09:20.765756 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1 Sep 4 16:09:20.765763 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 16:09:20.765770 kernel: cpuidle: using governor menu Sep 4 16:09:20.765779 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 4 16:09:20.765787 kernel: ASID allocator initialised with 32768 entries Sep 4 16:09:20.765794 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 16:09:20.765801 kernel: Serial: AMBA PL011 UART driver Sep 4 16:09:20.765809 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 16:09:20.765816 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 16:09:20.765824 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 4 16:09:20.765832 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 4 16:09:20.765839 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 16:09:20.765847 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 16:09:20.765854 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 4 16:09:20.765862 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 4 16:09:20.765869 kernel: ACPI: Added _OSI(Module Device) Sep 4 16:09:20.765876 kernel: ACPI: Added _OSI(Processor Device) Sep 4 16:09:20.765883 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 16:09:20.765892 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 4 16:09:20.765899 kernel: ACPI: Interpreter enabled Sep 4 16:09:20.765907 kernel: ACPI: Using GIC for interrupt routing Sep 4 16:09:20.765914 kernel: ACPI: MCFG table detected, 1 entries Sep 4 16:09:20.765921 kernel: ACPI: CPU0 has been hot-added Sep 4 16:09:20.765928 kernel: ACPI: CPU1 has been hot-added Sep 4 16:09:20.765936 kernel: ACPI: CPU2 has been hot-added Sep 4 16:09:20.765943 kernel: ACPI: CPU3 has been hot-added Sep 4 16:09:20.765952 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 4 16:09:20.765959 kernel: printk: legacy console [ttyAMA0] enabled Sep 4 16:09:20.765967 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 4 16:09:20.766115 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 16:09:20.766201 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 4 16:09:20.766308 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 4 16:09:20.766393 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 4 16:09:20.766472 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 4 16:09:20.766481 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 4 16:09:20.766489 kernel: PCI host bridge to bus 0000:00 Sep 4 16:09:20.766570 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 4 16:09:20.766641 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 4 16:09:20.766712 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 4 16:09:20.766781 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 4 16:09:20.766872 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Sep 4 16:09:20.766961 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 4 16:09:20.767039 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Sep 4 16:09:20.767119 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Sep 4 16:09:20.767200 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Sep 4 16:09:20.767304 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Sep 4 16:09:20.767385 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Sep 4 16:09:20.767463 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Sep 4 16:09:20.767534 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 4 16:09:20.767606 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 4 16:09:20.767675 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 4 16:09:20.767685 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 4 16:09:20.767692 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 4 16:09:20.767700 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 4 16:09:20.767707 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 4 16:09:20.767716 kernel: iommu: Default domain type: Translated Sep 4 16:09:20.767724 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 4 16:09:20.767731 kernel: efivars: Registered efivars operations Sep 4 16:09:20.767739 kernel: vgaarb: loaded Sep 4 16:09:20.767746 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 4 16:09:20.767753 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 16:09:20.767761 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 16:09:20.767769 kernel: pnp: PnP ACPI init Sep 4 16:09:20.767859 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 4 16:09:20.767870 kernel: pnp: PnP ACPI: found 1 devices Sep 4 16:09:20.767878 kernel: NET: Registered PF_INET protocol family Sep 4 16:09:20.767885 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 16:09:20.767893 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 4 16:09:20.767901 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 16:09:20.767910 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 4 16:09:20.767917 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 4 16:09:20.767925 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 4 16:09:20.767932 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 16:09:20.767940 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 16:09:20.767947 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 16:09:20.767955 kernel: PCI: CLS 0 bytes, default 64 Sep 4 16:09:20.767963 kernel: kvm [1]: HYP mode not available Sep 4 16:09:20.767971 kernel: Initialise system trusted keyrings Sep 4 16:09:20.767978 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 4 16:09:20.767986 kernel: Key type asymmetric registered Sep 4 16:09:20.767993 kernel: Asymmetric key parser 'x509' registered Sep 4 16:09:20.768000 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 4 16:09:20.768008 kernel: io scheduler mq-deadline registered Sep 4 16:09:20.768016 kernel: io scheduler kyber registered Sep 4 16:09:20.768024 kernel: io scheduler bfq registered Sep 4 16:09:20.768031 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 4 16:09:20.768039 kernel: ACPI: button: Power Button [PWRB] Sep 4 16:09:20.768047 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 4 16:09:20.768124 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 4 16:09:20.768134 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 16:09:20.768143 kernel: thunder_xcv, ver 1.0 Sep 4 16:09:20.768150 kernel: thunder_bgx, ver 1.0 Sep 4 16:09:20.768158 kernel: nicpf, ver 1.0 Sep 4 16:09:20.768165 kernel: nicvf, ver 1.0 Sep 4 16:09:20.768270 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 4 16:09:20.768358 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-04T16:09:20 UTC (1757002160) Sep 4 16:09:20.768369 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 4 16:09:20.768379 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Sep 4 16:09:20.768387 kernel: watchdog: NMI not fully supported Sep 4 16:09:20.768394 kernel: watchdog: Hard watchdog permanently disabled Sep 4 16:09:20.768402 kernel: NET: Registered PF_INET6 protocol family Sep 4 16:09:20.768409 kernel: Segment Routing with IPv6 Sep 4 16:09:20.768416 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 16:09:20.768424 kernel: NET: Registered PF_PACKET protocol family Sep 4 16:09:20.768432 kernel: Key type dns_resolver registered Sep 4 16:09:20.768440 kernel: registered taskstats version 1 Sep 4 16:09:20.768447 kernel: Loading compiled-in X.509 certificates Sep 4 16:09:20.768455 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 5cbaeb2a956cf8364fe17c89324cc000891c1e4c' Sep 4 16:09:20.768462 kernel: Demotion targets for Node 0: null Sep 4 16:09:20.768470 kernel: Key type .fscrypt registered Sep 4 16:09:20.768477 kernel: Key type fscrypt-provisioning registered Sep 4 16:09:20.768485 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 4 16:09:20.768493 kernel: ima: Allocated hash algorithm: sha1 Sep 4 16:09:20.768500 kernel: ima: No architecture policies found Sep 4 16:09:20.768508 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 4 16:09:20.768515 kernel: clk: Disabling unused clocks Sep 4 16:09:20.768523 kernel: PM: genpd: Disabling unused power domains Sep 4 16:09:20.768530 kernel: Warning: unable to open an initial console. Sep 4 16:09:20.768538 kernel: Freeing unused kernel memory: 39104K Sep 4 16:09:20.768546 kernel: Run /init as init process Sep 4 16:09:20.768553 kernel: with arguments: Sep 4 16:09:20.768560 kernel: /init Sep 4 16:09:20.768568 kernel: with environment: Sep 4 16:09:20.768575 kernel: HOME=/ Sep 4 16:09:20.768582 kernel: TERM=linux Sep 4 16:09:20.768591 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 16:09:20.768599 systemd[1]: Successfully made /usr/ read-only.
Sep 4 16:09:20.768610 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 4 16:09:20.768618 systemd[1]: Detected virtualization kvm. Sep 4 16:09:20.768626 systemd[1]: Detected architecture arm64. Sep 4 16:09:20.768634 systemd[1]: Running in initrd. Sep 4 16:09:20.768643 systemd[1]: No hostname configured, using default hostname. Sep 4 16:09:20.768651 systemd[1]: Hostname set to . Sep 4 16:09:20.768659 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Sep 4 16:09:20.768667 systemd[1]: Queued start job for default target initrd.target. Sep 4 16:09:20.768675 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 16:09:20.768683 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 16:09:20.768693 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 16:09:20.768701 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 16:09:20.768709 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 16:09:20.768718 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 16:09:20.768727 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 16:09:20.768735 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 16:09:20.768744 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Sep 4 16:09:20.768752 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 16:09:20.768760 systemd[1]: Reached target paths.target - Path Units. Sep 4 16:09:20.768768 systemd[1]: Reached target slices.target - Slice Units. Sep 4 16:09:20.768776 systemd[1]: Reached target swap.target - Swaps. Sep 4 16:09:20.768784 systemd[1]: Reached target timers.target - Timer Units. Sep 4 16:09:20.768792 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 16:09:20.768802 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 16:09:20.768809 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 16:09:20.768817 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 4 16:09:20.768825 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 16:09:20.768833 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 16:09:20.768841 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 16:09:20.768851 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 16:09:20.768859 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 16:09:20.768867 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 16:09:20.768875 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 16:09:20.768883 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 4 16:09:20.768892 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 16:09:20.768899 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 16:09:20.768909 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Sep 4 16:09:20.768917 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 16:09:20.768925 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 16:09:20.768933 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 16:09:20.768942 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 16:09:20.768950 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 16:09:20.768973 systemd-journald[242]: Collecting audit messages is disabled. Sep 4 16:09:20.768993 systemd-journald[242]: Journal started Sep 4 16:09:20.769011 systemd-journald[242]: Runtime Journal (/run/log/journal/c8453111323f4081b1179f087f8e1364) is 6M, max 48.5M, 42.4M free. Sep 4 16:09:20.759505 systemd-modules-load[244]: Inserted module 'overlay' Sep 4 16:09:20.772243 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 16:09:20.772265 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 16:09:20.774925 systemd-modules-load[244]: Inserted module 'br_netfilter' Sep 4 16:09:20.776711 kernel: Bridge firewalling registered Sep 4 16:09:20.776728 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 16:09:20.777984 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 16:09:20.779874 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 16:09:20.783661 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 16:09:20.785189 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 16:09:20.786772 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Sep 4 16:09:20.798736 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 16:09:20.806043 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 16:09:20.807713 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 16:09:20.808187 systemd-tmpfiles[271]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 4 16:09:20.810586 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 16:09:20.815395 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 16:09:20.828354 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 16:09:20.830354 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 16:09:20.853489 systemd-resolved[282]: Positive Trust Anchors: Sep 4 16:09:20.853507 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 16:09:20.853510 systemd-resolved[282]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Sep 4 16:09:20.853541 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 16:09:20.858278 systemd-resolved[282]: Defaulting to hostname 'linux'. Sep 4 16:09:20.859043 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Sep 4 16:09:20.862320 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 16:09:20.865314 dracut-cmdline[293]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fa24154aac6dc1a5d38cdc5f4cdc1aea124b2960632298191d9d7d9a2320138a Sep 4 16:09:20.933249 kernel: SCSI subsystem initialized Sep 4 16:09:20.937262 kernel: Loading iSCSI transport class v2.0-870. Sep 4 16:09:20.945258 kernel: iscsi: registered transport (tcp) Sep 4 16:09:20.957329 kernel: iscsi: registered transport (qla4xxx) Sep 4 16:09:20.957362 kernel: QLogic iSCSI HBA Driver Sep 4 16:09:20.973883 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 16:09:20.998347 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 16:09:20.999565 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 16:09:21.042921 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 16:09:21.045009 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 16:09:21.103286 kernel: raid6: neonx8 gen() 15572 MB/s Sep 4 16:09:21.120274 kernel: raid6: neonx4 gen() 15591 MB/s Sep 4 16:09:21.137263 kernel: raid6: neonx2 gen() 12975 MB/s Sep 4 16:09:21.154262 kernel: raid6: neonx1 gen() 10435 MB/s Sep 4 16:09:21.171286 kernel: raid6: int64x8 gen() 6892 MB/s Sep 4 16:09:21.188283 kernel: raid6: int64x4 gen() 7340 MB/s Sep 4 16:09:21.205282 kernel: raid6: int64x2 gen() 6090 MB/s Sep 4 16:09:21.222476 kernel: raid6: int64x1 gen() 5047 MB/s Sep 4 16:09:21.222503 kernel: raid6: using algorithm neonx4 gen() 15591 MB/s Sep 4 16:09:21.240371 kernel: raid6: .... xor() 12342 MB/s, rmw enabled
Sep 4 16:09:21.240388 kernel: raid6: using neon recovery algorithm Sep 4 16:09:21.246664 kernel: xor: measuring software checksum speed Sep 4 16:09:21.246685 kernel: 8regs : 21630 MB/sec Sep 4 16:09:21.246695 kernel: 32regs : 21681 MB/sec Sep 4 16:09:21.247303 kernel: arm64_neon : 28070 MB/sec Sep 4 16:09:21.247316 kernel: xor: using function: arm64_neon (28070 MB/sec) Sep 4 16:09:21.299275 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 16:09:21.304917 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 16:09:21.307166 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 16:09:21.341444 systemd-udevd[503]: Using default interface naming scheme 'v257'. Sep 4 16:09:21.345428 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 16:09:21.347401 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 16:09:21.375977 dracut-pre-trigger[511]: rd.md=0: removing MD RAID activation Sep 4 16:09:21.396054 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 16:09:21.398023 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 16:09:21.446697 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 16:09:21.449677 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 16:09:21.497260 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Sep 4 16:09:21.497701 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 4 16:09:21.501922 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 16:09:21.501962 kernel: GPT:9289727 != 19775487 Sep 4 16:09:21.501973 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 16:09:21.507896 kernel: GPT:9289727 != 19775487
Sep 4 16:09:21.508078 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 16:09:21.508194 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 16:09:21.511839 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 16:09:21.511858 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 16:09:21.509844 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 16:09:21.513332 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 16:09:21.537925 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 4 16:09:21.540307 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 16:09:21.546282 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 16:09:21.558195 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 4 16:09:21.564023 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 4 16:09:21.565017 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 4 16:09:21.573263 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 16:09:21.574169 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 16:09:21.576005 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 16:09:21.577704 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 16:09:21.579810 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 16:09:21.581571 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 16:09:21.596269 disk-uuid[595]: Primary Header is updated.
Sep 4 16:09:21.596269 disk-uuid[595]: Secondary Entries is updated.
Sep 4 16:09:21.596269 disk-uuid[595]: Secondary Header is updated.
Sep 4 16:09:21.601259 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 16:09:21.601482 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 16:09:22.607491 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 16:09:22.608300 disk-uuid[598]: The operation has completed successfully.
Sep 4 16:09:22.637360 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 16:09:22.637472 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 16:09:22.654766 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 16:09:22.677932 sh[614]: Success
Sep 4 16:09:22.691122 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 16:09:22.691164 kernel: device-mapper: uevent: version 1.0.3
Sep 4 16:09:22.691179 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 4 16:09:22.698271 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 4 16:09:22.719645 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 16:09:22.722105 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 16:09:22.735163 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 16:09:22.739248 kernel: BTRFS: device fsid d6826f11-765e-43ab-9425-5cf9fd7ef603 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (626)
Sep 4 16:09:22.741456 kernel: BTRFS info (device dm-0): first mount of filesystem d6826f11-765e-43ab-9425-5cf9fd7ef603
Sep 4 16:09:22.741479 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 4 16:09:22.745340 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 16:09:22.745354 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 4 16:09:22.746213 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 16:09:22.747792 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 4 16:09:22.748756 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 16:09:22.749420 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 16:09:22.750721 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 16:09:22.773160 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (657)
Sep 4 16:09:22.773204 kernel: BTRFS info (device vda6): first mount of filesystem 7ad7f3a7-2940-40b1-9356-75c56294c96d
Sep 4 16:09:22.773215 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 16:09:22.776773 kernel: BTRFS info (device vda6): turning on async discard
Sep 4 16:09:22.776804 kernel: BTRFS info (device vda6): enabling free space tree
Sep 4 16:09:22.781270 kernel: BTRFS info (device vda6): last unmount of filesystem 7ad7f3a7-2940-40b1-9356-75c56294c96d
Sep 4 16:09:22.782126 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 16:09:22.784122 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 16:09:22.841598 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 16:09:22.844509 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 16:09:22.877913 ignition[704]: Ignition 2.22.0
Sep 4 16:09:22.877928 ignition[704]: Stage: fetch-offline
Sep 4 16:09:22.877957 ignition[704]: no configs at "/usr/lib/ignition/base.d"
Sep 4 16:09:22.877965 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 16:09:22.878036 ignition[704]: parsed url from cmdline: ""
Sep 4 16:09:22.878040 ignition[704]: no config URL provided
Sep 4 16:09:22.878044 ignition[704]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 16:09:22.878051 ignition[704]: no config at "/usr/lib/ignition/user.ign"
Sep 4 16:09:22.878067 ignition[704]: op(1): [started] loading QEMU firmware config module
Sep 4 16:09:22.878072 ignition[704]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 4 16:09:22.882561 ignition[704]: op(1): [finished] loading QEMU firmware config module
Sep 4 16:09:22.896163 systemd-networkd[803]: lo: Link UP
Sep 4 16:09:22.896176 systemd-networkd[803]: lo: Gained carrier
Sep 4 16:09:22.896861 systemd-networkd[803]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Sep 4 16:09:22.896864 systemd-networkd[803]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 16:09:22.896987 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 16:09:22.897660 systemd-networkd[803]: eth0: Link UP
Sep 4 16:09:22.897953 systemd-networkd[803]: eth0: Gained carrier
Sep 4 16:09:22.897961 systemd-networkd[803]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Sep 4 16:09:22.898332 systemd[1]: Reached target network.target - Network.
Sep 4 16:09:22.924269 systemd-networkd[803]: eth0: DHCPv4 address 10.0.0.140/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 16:09:22.933141 ignition[704]: parsing config with SHA512: 004c2dbceb8623c03b64991f6788358532a7511e0be25502f8c102753373731ef9f53751574d9808603a03daa36b12460b9f4a8327e7365f0e39d77669f456c3
Sep 4 16:09:22.938151 unknown[704]: fetched base config from "system"
Sep 4 16:09:22.938160 unknown[704]: fetched user config from "qemu"
Sep 4 16:09:22.938630 ignition[704]: fetch-offline: fetch-offline passed
Sep 4 16:09:22.938692 ignition[704]: Ignition finished successfully
Sep 4 16:09:22.940882 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 16:09:22.942304 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 4 16:09:22.943067 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 16:09:22.967914 ignition[813]: Ignition 2.22.0
Sep 4 16:09:22.967932 ignition[813]: Stage: kargs
Sep 4 16:09:22.968054 ignition[813]: no configs at "/usr/lib/ignition/base.d"
Sep 4 16:09:22.968063 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 16:09:22.968800 ignition[813]: kargs: kargs passed
Sep 4 16:09:22.971254 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 16:09:22.968839 ignition[813]: Ignition finished successfully
Sep 4 16:09:22.975359 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 16:09:23.002624 ignition[821]: Ignition 2.22.0
Sep 4 16:09:23.002641 ignition[821]: Stage: disks
Sep 4 16:09:23.002763 ignition[821]: no configs at "/usr/lib/ignition/base.d"
Sep 4 16:09:23.002771 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 16:09:23.003536 ignition[821]: disks: disks passed
Sep 4 16:09:23.005534 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 16:09:23.003575 ignition[821]: Ignition finished successfully
Sep 4 16:09:23.006514 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 16:09:23.009409 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 16:09:23.010350 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 16:09:23.011652 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 16:09:23.013046 systemd[1]: Reached target basic.target - Basic System.
Sep 4 16:09:23.015343 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 16:09:23.041829 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 4 16:09:23.046859 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 16:09:23.051012 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 16:09:23.106254 kernel: EXT4-fs (vda9): mounted filesystem 1afcf1f8-650a-49cc-971e-a57f02cf6533 r/w with ordered data mode. Quota mode: none.
Sep 4 16:09:23.106457 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 16:09:23.107435 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 16:09:23.109911 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 16:09:23.111346 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 16:09:23.112901 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 16:09:23.112939 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 16:09:23.112960 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 16:09:23.121561 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 16:09:23.123833 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 16:09:23.128384 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (839)
Sep 4 16:09:23.128409 kernel: BTRFS info (device vda6): first mount of filesystem 7ad7f3a7-2940-40b1-9356-75c56294c96d
Sep 4 16:09:23.128421 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 16:09:23.131599 kernel: BTRFS info (device vda6): turning on async discard
Sep 4 16:09:23.131642 kernel: BTRFS info (device vda6): enabling free space tree
Sep 4 16:09:23.131944 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 16:09:23.158490 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 16:09:23.162557 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory
Sep 4 16:09:23.166005 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 16:09:23.169363 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 16:09:23.231436 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 16:09:23.233387 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 16:09:23.234750 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 16:09:23.249250 kernel: BTRFS info (device vda6): last unmount of filesystem 7ad7f3a7-2940-40b1-9356-75c56294c96d
Sep 4 16:09:23.266497 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 16:09:23.280764 ignition[953]: INFO : Ignition 2.22.0
Sep 4 16:09:23.280764 ignition[953]: INFO : Stage: mount
Sep 4 16:09:23.282120 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 16:09:23.282120 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 16:09:23.282120 ignition[953]: INFO : mount: mount passed
Sep 4 16:09:23.282120 ignition[953]: INFO : Ignition finished successfully
Sep 4 16:09:23.283704 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 16:09:23.285979 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 16:09:23.873847 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 16:09:23.875304 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 16:09:23.898897 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (966)
Sep 4 16:09:23.898933 kernel: BTRFS info (device vda6): first mount of filesystem 7ad7f3a7-2940-40b1-9356-75c56294c96d
Sep 4 16:09:23.900028 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 16:09:23.902886 kernel: BTRFS info (device vda6): turning on async discard
Sep 4 16:09:23.902905 kernel: BTRFS info (device vda6): enabling free space tree
Sep 4 16:09:23.904113 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 16:09:23.936975 ignition[983]: INFO : Ignition 2.22.0
Sep 4 16:09:23.936975 ignition[983]: INFO : Stage: files
Sep 4 16:09:23.938363 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 16:09:23.938363 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 16:09:23.938363 ignition[983]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 16:09:23.941165 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 16:09:23.941165 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 16:09:23.941165 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 16:09:23.944433 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 16:09:23.944433 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 16:09:23.944433 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 4 16:09:23.944433 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 4 16:09:23.942066 unknown[983]: wrote ssh authorized keys file for user: core
Sep 4 16:09:23.988418 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 16:09:24.317297 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 4 16:09:24.317297 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 16:09:24.317297 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 4 16:09:24.517244 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 4 16:09:24.612536 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 16:09:24.614070 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 16:09:24.614070 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 16:09:24.614070 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 16:09:24.614070 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 16:09:24.614070 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 16:09:24.614070 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 16:09:24.614070 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 16:09:24.614070 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 16:09:24.624910 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 16:09:24.624910 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 16:09:24.624910 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 4 16:09:24.624910 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 4 16:09:24.624910 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 4 16:09:24.624910 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 4 16:09:24.954375 systemd-networkd[803]: eth0: Gained IPv6LL
Sep 4 16:09:25.222092 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 4 16:09:25.595863 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 4 16:09:25.595863 ignition[983]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 4 16:09:25.599258 ignition[983]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 16:09:25.599258 ignition[983]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 16:09:25.599258 ignition[983]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 4 16:09:25.599258 ignition[983]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 4 16:09:25.599258 ignition[983]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 16:09:25.599258 ignition[983]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 16:09:25.599258 ignition[983]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 4 16:09:25.599258 ignition[983]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 4 16:09:25.612241 ignition[983]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 16:09:25.617966 ignition[983]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 16:09:25.619187 ignition[983]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 4 16:09:25.619187 ignition[983]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 16:09:25.619187 ignition[983]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 16:09:25.619187 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 16:09:25.619187 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 16:09:25.619187 ignition[983]: INFO : files: files passed
Sep 4 16:09:25.619187 ignition[983]: INFO : Ignition finished successfully
Sep 4 16:09:25.620808 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 16:09:25.625410 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 16:09:25.636865 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 16:09:25.641542 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 16:09:25.641630 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 16:09:25.647260 initrd-setup-root-after-ignition[1012]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 4 16:09:25.652159 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 16:09:25.652159 initrd-setup-root-after-ignition[1014]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 16:09:25.654731 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 16:09:25.655174 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 16:09:25.657316 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 16:09:25.659462 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 16:09:25.693673 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 16:09:25.693777 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 16:09:25.695611 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 16:09:25.697010 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 16:09:25.698451 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 16:09:25.699103 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 16:09:25.746072 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 16:09:25.748075 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 16:09:25.766705 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 16:09:25.768459 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 16:09:25.770314 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 16:09:25.771047 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 16:09:25.771172 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 16:09:25.773122 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 16:09:25.774817 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 16:09:25.776151 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 16:09:25.777637 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 16:09:25.779186 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 16:09:25.781137 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 4 16:09:25.782841 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 16:09:25.784291 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 16:09:25.785861 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 16:09:25.787717 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 16:09:25.789120 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 16:09:25.790391 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 16:09:25.790524 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 16:09:25.792370 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 16:09:25.793953 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 16:09:25.795473 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 16:09:25.796306 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 16:09:25.798054 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 16:09:25.798158 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 16:09:25.800756 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 16:09:25.800868 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 16:09:25.802359 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 16:09:25.803744 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 16:09:25.803828 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 16:09:25.805350 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 16:09:25.806671 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 16:09:25.808135 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 16:09:25.808222 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 16:09:25.810074 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 16:09:25.810146 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 16:09:25.811366 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 16:09:25.811474 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 16:09:25.812833 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 16:09:25.812930 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 16:09:25.814863 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 16:09:25.816555 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 16:09:25.818656 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 16:09:25.822165 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 16:09:25.823876 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 16:09:25.823969 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 16:09:25.825675 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 16:09:25.825766 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 16:09:25.831780 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 16:09:25.831889 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 16:09:25.839775 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 16:09:25.853298 ignition[1038]: INFO : Ignition 2.22.0
Sep 4 16:09:25.853298 ignition[1038]: INFO : Stage: umount
Sep 4 16:09:25.853298 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 16:09:25.853298 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 16:09:25.858225 ignition[1038]: INFO : umount: umount passed
Sep 4 16:09:25.858225 ignition[1038]: INFO : Ignition finished successfully
Sep 4 16:09:25.856065 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 16:09:25.856154 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 16:09:25.857509 systemd[1]: Stopped target network.target - Network.
Sep 4 16:09:25.861032 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 16:09:25.861094 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 16:09:25.862269 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 16:09:25.862306 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 16:09:25.863619 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 16:09:25.863662 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 16:09:25.865056 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 16:09:25.865094 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 16:09:25.866723 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 16:09:25.871614 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 16:09:25.877336 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 16:09:25.877423 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 16:09:25.885454 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 16:09:25.886222 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 16:09:25.889250 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 4 16:09:25.890905 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 16:09:25.890935 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 16:09:25.894322 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 16:09:25.895157 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 16:09:25.895254 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 16:09:25.897579 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 16:09:25.897621 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 16:09:25.899088 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 16:09:25.899127 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 16:09:25.901138 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 16:09:25.911516 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 16:09:25.916058 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 16:09:25.918979 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 16:09:25.919070 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 16:09:25.920750 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 16:09:25.920831 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 16:09:25.922347 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 16:09:25.922397 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 16:09:25.923270 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 16:09:25.923302 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 16:09:25.924696 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 16:09:25.924741 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 16:09:25.927062 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 16:09:25.927108 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 16:09:25.929515 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 16:09:25.929562 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 16:09:25.932406 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 16:09:25.932455 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 16:09:25.934760 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 16:09:25.935673 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 4 16:09:25.935723 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 16:09:25.937294 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 16:09:25.937330 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 16:09:25.939299 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 4 16:09:25.939338 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 16:09:25.940843 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 16:09:25.940878 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 16:09:25.942432 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 16:09:25.942472 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 16:09:25.952138 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 16:09:25.953257 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 16:09:25.954571 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 16:09:25.956689 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 16:09:25.970774 systemd[1]: Switching root.
Sep 4 16:09:26.000411 systemd-journald[242]: Journal stopped
Sep 4 16:09:26.775962 systemd-journald[242]: Received SIGTERM from PID 1 (systemd).
Sep 4 16:09:26.776010 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 16:09:26.776029 kernel: SELinux: policy capability open_perms=1
Sep 4 16:09:26.776040 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 16:09:26.776050 kernel: SELinux: policy capability always_check_network=0
Sep 4 16:09:26.776061 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 16:09:26.776071 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 16:09:26.776081 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 16:09:26.776090 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 16:09:26.776102 kernel: SELinux: policy capability userspace_initial_context=0
Sep 4 16:09:26.776112 systemd[1]: Successfully loaded SELinux policy in 53.554ms.
Sep 4 16:09:26.776130 kernel: audit: type=1403 audit(1757002166.201:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 16:09:26.776145 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.024ms.
Sep 4 16:09:26.776160 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 16:09:26.776171 systemd[1]: Detected virtualization kvm.
Sep 4 16:09:26.776182 systemd[1]: Detected architecture arm64.
Sep 4 16:09:26.776192 systemd[1]: Detected first boot.
Sep 4 16:09:26.776203 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Sep 4 16:09:26.776225 kernel: NET: Registered PF_VSOCK protocol family
Sep 4 16:09:26.776265 zram_generator::config[1082]: No configuration found.
Sep 4 16:09:26.776278 systemd[1]: Populated /etc with preset unit settings.
Sep 4 16:09:26.776289 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 16:09:26.776299 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 16:09:26.776310 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 16:09:26.776321 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 16:09:26.776332 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 16:09:26.776344 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 16:09:26.776355 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 16:09:26.776366 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 16:09:26.776378 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 16:09:26.776388 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 16:09:26.776399 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 16:09:26.776409 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 16:09:26.776422 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 16:09:26.776433 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 16:09:26.776444 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 16:09:26.776456 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 16:09:26.776466 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 16:09:26.776482 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 4 16:09:26.776492 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 16:09:26.776504 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 16:09:26.776514 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 16:09:26.776525 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 16:09:26.776535 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 16:09:26.776546 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 16:09:26.776556 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 16:09:26.776570 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 16:09:26.776580 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 16:09:26.776592 systemd[1]: Reached target swap.target - Swaps.
Sep 4 16:09:26.776603 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 16:09:26.776613 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 16:09:26.776624 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 4 16:09:26.776634 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 16:09:26.776647 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 16:09:26.776658 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 16:09:26.776669 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 16:09:26.776680 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 16:09:26.776690 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 16:09:26.776701 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 16:09:26.776711 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 16:09:26.776723 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 16:09:26.776734 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 16:09:26.776744 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 16:09:26.776755 systemd[1]: Reached target machines.target - Containers.
Sep 4 16:09:26.776766 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 16:09:26.776776 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 16:09:26.776787 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 16:09:26.776799 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 16:09:26.776809 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 16:09:26.776820 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 16:09:26.776831 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 16:09:26.776841 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 16:09:26.776852 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 16:09:26.776863 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 16:09:26.776875 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 16:09:26.776886 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 16:09:26.776897 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 16:09:26.776907 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 16:09:26.776918 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 16:09:26.776928 kernel: loop: module loaded
Sep 4 16:09:26.776940 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 16:09:26.776951 kernel: fuse: init (API version 7.41)
Sep 4 16:09:26.776961 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 16:09:26.776972 kernel: ACPI: bus type drm_connector registered
Sep 4 16:09:26.776982 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 16:09:26.776993 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 16:09:26.777004 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 4 16:09:26.777015 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 16:09:26.777028 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 16:09:26.777039 systemd[1]: Stopped verity-setup.service.
Sep 4 16:09:26.777049 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 16:09:26.777059 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 16:09:26.777071 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 16:09:26.777082 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 16:09:26.777110 systemd-journald[1161]: Collecting audit messages is disabled.
Sep 4 16:09:26.777130 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 16:09:26.777142 systemd-journald[1161]: Journal started
Sep 4 16:09:26.777164 systemd-journald[1161]: Runtime Journal (/run/log/journal/c8453111323f4081b1179f087f8e1364) is 6M, max 48.5M, 42.4M free.
Sep 4 16:09:26.566148 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 16:09:26.584047 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 4 16:09:26.584462 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 16:09:26.779786 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 16:09:26.780724 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 16:09:26.783273 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 16:09:26.784397 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 16:09:26.785544 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 16:09:26.785715 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 16:09:26.786852 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 16:09:26.787000 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 16:09:26.788145 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 16:09:26.788329 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 16:09:26.789363 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 16:09:26.789509 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 16:09:26.790667 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 16:09:26.790817 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 16:09:26.791879 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 16:09:26.792042 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 16:09:26.793205 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 16:09:26.794474 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 16:09:26.796453 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 16:09:26.798093 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 4 16:09:26.810008 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 16:09:26.811334 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Sep 4 16:09:26.813172 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 16:09:26.814990 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 16:09:26.815929 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 16:09:26.815955 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 16:09:26.817596 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 4 16:09:26.818796 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 16:09:26.825346 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 16:09:26.827066 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 16:09:26.828135 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 16:09:26.829177 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 16:09:26.830262 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 16:09:26.835262 systemd-journald[1161]: Time spent on flushing to /var/log/journal/c8453111323f4081b1179f087f8e1364 is 20.167ms for 883 entries.
Sep 4 16:09:26.835262 systemd-journald[1161]: System Journal (/var/log/journal/c8453111323f4081b1179f087f8e1364) is 8M, max 195.6M, 187.6M free.
Sep 4 16:09:26.874388 systemd-journald[1161]: Received client request to flush runtime journal.
Sep 4 16:09:26.874437 kernel: loop0: detected capacity change from 0 to 100608
Sep 4 16:09:26.874455 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 16:09:26.832352 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 16:09:26.834414 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 16:09:26.837069 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 16:09:26.843290 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 16:09:26.844851 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 16:09:26.847392 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 16:09:26.850281 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 16:09:26.852976 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 16:09:26.857397 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 4 16:09:26.861892 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
Sep 4 16:09:26.861902 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
Sep 4 16:09:26.866295 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 16:09:26.869514 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 16:09:26.872375 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 16:09:26.875638 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 16:09:26.885251 kernel: loop1: detected capacity change from 0 to 207008
Sep 4 16:09:26.886569 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 4 16:09:26.902556 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 16:09:26.905022 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 16:09:26.906864 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 16:09:26.912267 kernel: loop2: detected capacity change from 0 to 119320
Sep 4 16:09:26.918766 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 16:09:26.922654 systemd-tmpfiles[1225]: ACLs are not supported, ignoring.
Sep 4 16:09:26.922894 systemd-tmpfiles[1225]: ACLs are not supported, ignoring.
Sep 4 16:09:26.934470 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 16:09:26.938250 kernel: loop3: detected capacity change from 0 to 100608
Sep 4 16:09:26.946249 kernel: loop4: detected capacity change from 0 to 207008
Sep 4 16:09:26.953253 kernel: loop5: detected capacity change from 0 to 119320
Sep 4 16:09:26.956612 (sd-merge)[1230]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Sep 4 16:09:26.959356 (sd-merge)[1230]: Merged extensions into '/usr'.
Sep 4 16:09:26.962548 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 16:09:26.962563 systemd[1]: Reloading...
Sep 4 16:09:27.018358 zram_generator::config[1263]: No configuration found.
Sep 4 16:09:27.030848 systemd-resolved[1224]: Positive Trust Anchors:
Sep 4 16:09:27.030868 systemd-resolved[1224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 16:09:27.030871 systemd-resolved[1224]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Sep 4 16:09:27.030902 systemd-resolved[1224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 16:09:27.038370 systemd-resolved[1224]: Defaulting to hostname 'linux'.
Sep 4 16:09:27.151661 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 16:09:27.151742 systemd[1]: Reloading finished in 188 ms.
Sep 4 16:09:27.190908 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 16:09:27.192186 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 16:09:27.195241 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 16:09:27.197905 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 16:09:27.224392 systemd[1]: Starting ensure-sysext.service...
Sep 4 16:09:27.225997 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 16:09:27.234795 systemd[1]: Reload requested from client PID 1297 ('systemctl') (unit ensure-sysext.service)...
Sep 4 16:09:27.234811 systemd[1]: Reloading...
Sep 4 16:09:27.241676 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 4 16:09:27.241968 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 4 16:09:27.242306 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 16:09:27.242588 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 16:09:27.243295 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 16:09:27.243591 systemd-tmpfiles[1298]: ACLs are not supported, ignoring.
Sep 4 16:09:27.243698 systemd-tmpfiles[1298]: ACLs are not supported, ignoring.
Sep 4 16:09:27.247312 systemd-tmpfiles[1298]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 16:09:27.247405 systemd-tmpfiles[1298]: Skipping /boot
Sep 4 16:09:27.253032 systemd-tmpfiles[1298]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 16:09:27.253052 systemd-tmpfiles[1298]: Skipping /boot
Sep 4 16:09:27.284272 zram_generator::config[1331]: No configuration found.
Sep 4 16:09:27.408972 systemd[1]: Reloading finished in 173 ms.
Sep 4 16:09:27.428088 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 16:09:27.441632 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 16:09:27.448501 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 16:09:27.450406 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 16:09:27.460726 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 16:09:27.464108 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 16:09:27.467541 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 16:09:27.469742 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 16:09:27.474203 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 16:09:27.479493 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 16:09:27.481870 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 16:09:27.486473 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 16:09:27.487558 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 16:09:27.487703 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 16:09:27.488840 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 16:09:27.489002 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 16:09:27.496544 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 16:09:27.498898 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 16:09:27.500042 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 16:09:27.500195 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 16:09:27.502363 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 16:09:27.505644 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 16:09:27.514154 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 16:09:27.515982 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 16:09:27.516546 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 16:09:27.519529 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 16:09:27.519740 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 16:09:27.519937 systemd-udevd[1374]: Using default interface naming scheme 'v257'.
Sep 4 16:09:27.521726 systemd[1]: Finished ensure-sysext.service.
Sep 4 16:09:27.522964 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 16:09:27.523119 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 16:09:27.529658 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 16:09:27.530880 augenrules[1401]: No rules
Sep 4 16:09:27.531031 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 16:09:27.533522 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 16:09:27.533565 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 16:09:27.533602 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 16:09:27.533645 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 16:09:27.536271 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 4 16:09:27.538338 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 16:09:27.538737 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 16:09:27.538973 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 16:09:27.540645 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 16:09:27.547650 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 16:09:27.557026 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 16:09:27.558136 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 16:09:27.583327 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 4 16:09:27.679965 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 16:09:27.683213 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 16:09:27.713067 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 16:09:27.715463 systemd-networkd[1425]: lo: Link UP
Sep 4 16:09:27.715471 systemd-networkd[1425]: lo: Gained carrier
Sep 4 16:09:27.716260 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 16:09:27.716608 systemd-networkd[1425]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Sep 4 16:09:27.716612 systemd-networkd[1425]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 16:09:27.717183 systemd-networkd[1425]: eth0: Link UP
Sep 4 16:09:27.717531 systemd[1]: Reached target network.target - Network.
Sep 4 16:09:27.717536 systemd-networkd[1425]: eth0: Gained carrier
Sep 4 16:09:27.717553 systemd-networkd[1425]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Sep 4 16:09:27.720432 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 4 16:09:27.722934 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 16:09:27.724611 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 4 16:09:27.726558 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 16:09:27.729299 systemd-networkd[1425]: eth0: DHCPv4 address 10.0.0.140/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 16:09:27.729806 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Sep 4 16:09:27.730685 systemd-timesyncd[1407]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 4 16:09:27.730742 systemd-timesyncd[1407]: Initial clock synchronization to Thu 2025-09-04 16:09:27.746780 UTC.
Sep 4 16:09:27.755151 ldconfig[1366]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 16:09:27.755448 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 4 16:09:27.761032 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 16:09:27.764086 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 16:09:27.783496 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 16:09:27.791413 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 16:09:27.825368 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 16:09:27.827473 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 16:09:27.828367 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 16:09:27.829291 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 16:09:27.830360 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 16:09:27.831216 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 16:09:27.832137 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 16:09:27.833157 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 16:09:27.833186 systemd[1]: Reached target paths.target - Path Units.
Sep 4 16:09:27.834089 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 16:09:27.835550 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 16:09:27.837500 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 16:09:27.839934 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 4 16:09:27.841121 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 4 16:09:27.842179 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 4 16:09:27.845971 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 16:09:27.847166 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 4 16:09:27.848812 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 16:09:27.849758 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 16:09:27.850530 systemd[1]: Reached target basic.target - Basic System.
Sep 4 16:09:27.851270 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 16:09:27.851300 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 16:09:27.852139 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 16:09:27.853955 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 16:09:27.855628 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 16:09:27.857372 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 16:09:27.858989 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 16:09:27.859984 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 16:09:27.860842 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 16:09:27.864635 jq[1486]: false
Sep 4 16:09:27.864320 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 16:09:27.865844 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 16:09:27.867712 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 16:09:27.871557 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 16:09:27.872516 extend-filesystems[1487]: Found /dev/vda6
Sep 4 16:09:27.872747 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 4 16:09:27.873140 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 16:09:27.874738 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 16:09:27.877196 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 16:09:27.882986 extend-filesystems[1487]: Found /dev/vda9
Sep 4 16:09:27.880824 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 16:09:27.882296 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 16:09:27.882472 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 16:09:27.890500 extend-filesystems[1487]: Checking size of /dev/vda9
Sep 4 16:09:27.884591 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 16:09:27.884765 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 16:09:27.886108 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 16:09:27.888352 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 16:09:27.899342 tar[1508]: linux-arm64/LICENSE
Sep 4 16:09:27.900391 tar[1508]: linux-arm64/helm
Sep 4 16:09:27.900297 (ntainerd)[1513]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 16:09:27.904763 jq[1502]: true
Sep 4 16:09:27.907918 update_engine[1498]: I20250904 16:09:27.907703 1498 main.cc:92] Flatcar Update Engine starting
Sep 4 16:09:27.908588 extend-filesystems[1487]: Resized partition /dev/vda9
Sep 4 16:09:27.911868 extend-filesystems[1530]: resize2fs 1.47.2 (1-Jan-2025)
Sep 4 16:09:27.918263 jq[1525]: true
Sep 4 16:09:27.920335 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 4 16:09:27.928023 dbus-daemon[1484]: [system] SELinux support is enabled
Sep 4 16:09:27.928254 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 16:09:27.935454 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 16:09:27.936110 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 16:09:27.937749 update_engine[1498]: I20250904 16:09:27.937633 1498 update_check_scheduler.cc:74] Next update check in 2m14s
Sep 4 16:09:27.938378 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 16:09:27.938495 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 16:09:27.941153 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 16:09:27.948706 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 16:09:27.955171 systemd-logind[1497]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 4 16:09:27.962563 systemd-logind[1497]: New seat seat0.
Sep 4 16:09:27.963824 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 16:09:27.969275 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 4 16:09:27.988039 extend-filesystems[1530]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 4 16:09:27.988039 extend-filesystems[1530]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 4 16:09:27.988039 extend-filesystems[1530]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 4 16:09:28.003071 extend-filesystems[1487]: Resized filesystem in /dev/vda9
Sep 4 16:09:28.003800 bash[1548]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 16:09:27.989453 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 16:09:27.989658 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 16:09:27.994909 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 16:09:28.014482 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 4 16:09:28.046170 locksmithd[1535]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 16:09:28.095511 containerd[1513]: time="2025-09-04T16:09:28Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 4 16:09:28.096695 containerd[1513]: time="2025-09-04T16:09:28.096641718Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 4 16:09:28.109449 containerd[1513]: time="2025-09-04T16:09:28.109412799Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.369µs"
Sep 4 16:09:28.110271 containerd[1513]: time="2025-09-04T16:09:28.109514862Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 4 16:09:28.110271 containerd[1513]: time="2025-09-04T16:09:28.109537164Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 4 16:09:28.110271 containerd[1513]: time="2025-09-04T16:09:28.109671940Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 4 16:09:28.110271 containerd[1513]: time="2025-09-04T16:09:28.109687116Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 4 16:09:28.110271 containerd[1513]: time="2025-09-04T16:09:28.109709658Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 4 16:09:28.110271 containerd[1513]: time="2025-09-04T16:09:28.109756185Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 4 16:09:28.110271 containerd[1513]: time="2025-09-04T16:09:28.109766396Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 4 16:09:28.110271 containerd[1513]: time="2025-09-04T16:09:28.109935006Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 4 16:09:28.110271 containerd[1513]: time="2025-09-04T16:09:28.109946978Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 4 16:09:28.110271 containerd[1513]: time="2025-09-04T16:09:28.109958149Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 4 16:09:28.110271 containerd[1513]: time="2025-09-04T16:09:28.109966157Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 4 16:09:28.110271 containerd[1513]: time="2025-09-04T16:09:28.110033825Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 4 16:09:28.110820 containerd[1513]: time="2025-09-04T16:09:28.110210283Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 4 16:09:28.110820 containerd[1513]: time="2025-09-04T16:09:28.110268782Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 4 16:09:28.110820 containerd[1513]: time="2025-09-04T16:09:28.110280274Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 4 16:09:28.110820 containerd[1513]: time="2025-09-04T16:09:28.110309503Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 4 16:09:28.110820 containerd[1513]: time="2025-09-04T16:09:28.110504380Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 4 16:09:28.110820 containerd[1513]: time="2025-09-04T16:09:28.110561998Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 16:09:28.113432 containerd[1513]: time="2025-09-04T16:09:28.113374355Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 4 16:09:28.113432 containerd[1513]: time="2025-09-04T16:09:28.113424846Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 4 16:09:28.113507 containerd[1513]: time="2025-09-04T16:09:28.113438740Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 4 16:09:28.113507 containerd[1513]: time="2025-09-04T16:09:28.113449751Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 4 16:09:28.113507 containerd[1513]: time="2025-09-04T16:09:28.113467168Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 4 16:09:28.113507 containerd[1513]: time="2025-09-04T16:09:28.113478700Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 4 16:09:28.113507 containerd[1513]: time="2025-09-04T16:09:28.113493034Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 4 16:09:28.113507 containerd[1513]: time="2025-09-04T16:09:28.113504286Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 4 16:09:28.113507 containerd[1513]: time="2025-09-04T16:09:28.113516218Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 4 16:09:28.113507 containerd[1513]: time="2025-09-04T16:09:28.113526428Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 4 16:09:28.113507 containerd[1513]: time="2025-09-04T16:09:28.113534796Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 4 16:09:28.113507 containerd[1513]: time="2025-09-04T16:09:28.113546408Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 4 16:09:28.113804 containerd[1513]: time="2025-09-04T16:09:28.113653236Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 4 16:09:28.113804 containerd[1513]: time="2025-09-04T16:09:28.113672856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 4 16:09:28.113804 containerd[1513]: time="2025-09-04T16:09:28.113687150Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 4 16:09:28.113804 containerd[1513]: time="2025-09-04T16:09:28.113696840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 4 16:09:28.113804 containerd[1513]: time="2025-09-04T16:09:28.113706089Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 4 16:09:28.113804 containerd[1513]: time="2025-09-04T16:09:28.113715859Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 4 16:09:28.113804 containerd[1513]: time="2025-09-04T16:09:28.113726109Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 4 16:09:28.113804 containerd[1513]: time="2025-09-04T16:09:28.113735599Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 4 16:09:28.113804 containerd[1513]: time="2025-09-04T16:09:28.113746250Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 4 16:09:28.113804 containerd[1513]: time="2025-09-04T16:09:28.113756420Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 4 16:09:28.113804 containerd[1513]: time="2025-09-04T16:09:28.113766470Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 4 16:09:28.114083 containerd[1513]: time="2025-09-04T16:09:28.113942327Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 4 16:09:28.114083 containerd[1513]: time="2025-09-04T16:09:28.113956942Z" level=info msg="Start snapshots syncer"
Sep 4 16:09:28.114083 containerd[1513]: time="2025-09-04T16:09:28.113986732Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 4 16:09:28.114389 containerd[1513]: time="2025-09-04T16:09:28.114185573Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 4 16:09:28.114389 containerd[1513]: time="2025-09-04T16:09:28.114249237Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 4 16:09:28.114493 containerd[1513]: time="2025-09-04T16:09:28.114343772Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 4 16:09:28.114493 containerd[1513]: time="2025-09-04T16:09:28.114443113Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 4 16:09:28.114493 containerd[1513]: time="2025-09-04T16:09:28.114466536Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 4 16:09:28.114493 containerd[1513]: time="2025-09-04T16:09:28.114476827Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 4 16:09:28.114493 containerd[1513]: time="2025-09-04T16:09:28.114487918Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 4 16:09:28.114576 containerd[1513]: time="2025-09-04T16:09:28.114499529Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 4 16:09:28.114576 containerd[1513]: time="2025-09-04T16:09:28.114514184Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 4 16:09:28.114576 containerd[1513]: time="2025-09-04T16:09:28.114527518Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 4 16:09:28.114576 containerd[1513]: time="2025-09-04T16:09:28.114549460Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 4 16:09:28.114576 containerd[1513]: time="2025-09-04T16:09:28.114562593Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 4 16:09:28.114576 containerd[1513]: time="2025-09-04T16:09:28.114575686Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 4 16:09:28.114677 containerd[1513]: time="2025-09-04T16:09:28.114605556Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 4 16:09:28.114677 containerd[1513]: time="2025-09-04T16:09:28.114617689Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 4 16:09:28.114677 containerd[1513]: time="2025-09-04T16:09:28.114625296Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 4 16:09:28.114677 containerd[1513]: time="2025-09-04T16:09:28.114634265Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 4 16:09:28.114677 containerd[1513]: time="2025-09-04T16:09:28.114641633Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 4 16:09:28.114677 containerd[1513]: time="2025-09-04T16:09:28.114650802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 4 16:09:28.114677 containerd[1513]: time="2025-09-04T16:09:28.114660572Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 4 16:09:28.114790 containerd[1513]: time="2025-09-04T16:09:28.114735568Z" level=info msg="runtime interface created"
Sep 4 16:09:28.114790 containerd[1513]: time="2025-09-04T16:09:28.114740653Z" level=info msg="created NRI interface"
Sep 4 16:09:28.114790 containerd[1513]: time="2025-09-04T16:09:28.114748621Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 4 16:09:28.114790 containerd[1513]: time="2025-09-04T16:09:28.114759392Z" level=info msg="Connect containerd service"
Sep 4 16:09:28.114790 containerd[1513]: time="2025-09-04T16:09:28.114786899Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 16:09:28.115454 containerd[1513]: time="2025-09-04T16:09:28.115416614Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 16:09:28.181405 containerd[1513]: time="2025-09-04T16:09:28.181256660Z" level=info msg="Start subscribing containerd event"
Sep 4 16:09:28.181405 containerd[1513]: time="2025-09-04T16:09:28.181343067Z" level=info msg="Start recovering state"
Sep 4 16:09:28.181548 containerd[1513]: time="2025-09-04T16:09:28.181446852Z" level=info msg="Start event monitor"
Sep 4 16:09:28.181548 containerd[1513]: time="2025-09-04T16:09:28.181468754Z" level=info msg="Start cni network conf syncer for default"
Sep 4 16:09:28.181548 containerd[1513]: time="2025-09-04T16:09:28.181481167Z" level=info msg="Start streaming server"
Sep 4 16:09:28.181548 containerd[1513]: time="2025-09-04T16:09:28.181489015Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 4 16:09:28.181548 containerd[1513]: time="2025-09-04T16:09:28.181502068Z" level=info msg="runtime interface starting up..."
Sep 4 16:09:28.181548 containerd[1513]: time="2025-09-04T16:09:28.181511237Z" level=info msg="starting plugins..."
Sep 4 16:09:28.181548 containerd[1513]: time="2025-09-04T16:09:28.181525972Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 4 16:09:28.182089 containerd[1513]: time="2025-09-04T16:09:28.182036086Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 4 16:09:28.182258 containerd[1513]: time="2025-09-04T16:09:28.182181153Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 4 16:09:28.183038 containerd[1513]: time="2025-09-04T16:09:28.183014393Z" level=info msg="containerd successfully booted in 0.087860s"
Sep 4 16:09:28.183057 systemd[1]: Started containerd.service - containerd container runtime.
Sep 4 16:09:28.271355 tar[1508]: linux-arm64/README.md
Sep 4 16:09:28.288533 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 16:09:28.521153 sshd_keygen[1507]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 16:09:28.539836 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 16:09:28.542273 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 16:09:28.571568 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 16:09:28.571819 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 16:09:28.574207 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 16:09:28.599434 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 16:09:28.601802 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 16:09:28.603953 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 4 16:09:28.605048 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 16:09:29.626411 systemd-networkd[1425]: eth0: Gained IPv6LL
Sep 4 16:09:29.628671 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 16:09:29.630109 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 16:09:29.633609 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 4 16:09:29.635750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 16:09:29.637589 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 16:09:29.661003 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 4 16:09:29.661186 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 4 16:09:29.662700 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 16:09:29.664914 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 16:09:30.184164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 16:09:30.185648 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 16:09:30.187559 (kubelet)[1622]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 16:09:30.188109 systemd[1]: Startup finished in 2.028s (kernel) + 5.595s (initrd) + 4.041s (userspace) = 11.664s.
Sep 4 16:09:30.521784 kubelet[1622]: E0904 16:09:30.521674 1622 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 16:09:30.523815 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 16:09:30.523947 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 16:09:30.527441 systemd[1]: kubelet.service: Consumed 736ms CPU time, 256.2M memory peak.
Sep 4 16:09:33.867634 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 4 16:09:33.868633 systemd[1]: Started sshd@0-10.0.0.140:22-10.0.0.1:35344.service - OpenSSH per-connection server daemon (10.0.0.1:35344).
Sep 4 16:09:33.943292 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 35344 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU
Sep 4 16:09:33.944871 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:09:33.950455 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 16:09:33.951394 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 16:09:33.956224 systemd-logind[1497]: New session 1 of user core.
Sep 4 16:09:33.972671 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 16:09:33.975959 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 16:09:33.990958 (systemd)[1640]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 16:09:33.992911 systemd-logind[1497]: New session c1 of user core.
Sep 4 16:09:34.088282 systemd[1640]: Queued start job for default target default.target.
Sep 4 16:09:34.105005 systemd[1640]: Created slice app.slice - User Application Slice.
Sep 4 16:09:34.105034 systemd[1640]: Reached target paths.target - Paths.
Sep 4 16:09:34.105067 systemd[1640]: Reached target timers.target - Timers.
Sep 4 16:09:34.106109 systemd[1640]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 16:09:34.114305 systemd[1640]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 16:09:34.114362 systemd[1640]: Reached target sockets.target - Sockets.
Sep 4 16:09:34.114408 systemd[1640]: Reached target basic.target - Basic System.
Sep 4 16:09:34.114436 systemd[1640]: Reached target default.target - Main User Target.
Sep 4 16:09:34.114467 systemd[1640]: Startup finished in 116ms.
Sep 4 16:09:34.114596 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 16:09:34.115678 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 16:09:34.181180 systemd[1]: Started sshd@1-10.0.0.140:22-10.0.0.1:35356.service - OpenSSH per-connection server daemon (10.0.0.1:35356).
Sep 4 16:09:34.244184 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 35356 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU
Sep 4 16:09:34.245323 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:09:34.249662 systemd-logind[1497]: New session 2 of user core.
Sep 4 16:09:34.257368 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 16:09:34.309658 sshd[1654]: Connection closed by 10.0.0.1 port 35356
Sep 4 16:09:34.310094 sshd-session[1651]: pam_unix(sshd:session): session closed for user core
Sep 4 16:09:34.325149 systemd[1]: sshd@1-10.0.0.140:22-10.0.0.1:35356.service: Deactivated successfully.
Sep 4 16:09:34.326628 systemd[1]: session-2.scope: Deactivated successfully.
Sep 4 16:09:34.328042 systemd-logind[1497]: Session 2 logged out. Waiting for processes to exit.
Sep 4 16:09:34.329163 systemd[1]: Started sshd@2-10.0.0.140:22-10.0.0.1:35368.service - OpenSSH per-connection server daemon (10.0.0.1:35368).
Sep 4 16:09:34.329963 systemd-logind[1497]: Removed session 2.
Sep 4 16:09:34.386350 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 35368 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU
Sep 4 16:09:34.387358 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:09:34.391510 systemd-logind[1497]: New session 3 of user core.
Sep 4 16:09:34.399354 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 4 16:09:34.447039 sshd[1663]: Connection closed by 10.0.0.1 port 35368
Sep 4 16:09:34.447199 sshd-session[1660]: pam_unix(sshd:session): session closed for user core
Sep 4 16:09:34.456926 systemd[1]: sshd@2-10.0.0.140:22-10.0.0.1:35368.service: Deactivated successfully.
Sep 4 16:09:34.458276 systemd[1]: session-3.scope: Deactivated successfully.
Sep 4 16:09:34.458952 systemd-logind[1497]: Session 3 logged out. Waiting for processes to exit.
Sep 4 16:09:34.460927 systemd[1]: Started sshd@3-10.0.0.140:22-10.0.0.1:35382.service - OpenSSH per-connection server daemon (10.0.0.1:35382).
Sep 4 16:09:34.462292 systemd-logind[1497]: Removed session 3.
Sep 4 16:09:34.512109 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 35382 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU
Sep 4 16:09:34.513077 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:09:34.516521 systemd-logind[1497]: New session 4 of user core.
Sep 4 16:09:34.537414 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 16:09:34.588272 sshd[1672]: Connection closed by 10.0.0.1 port 35382
Sep 4 16:09:34.588523 sshd-session[1669]: pam_unix(sshd:session): session closed for user core
Sep 4 16:09:34.601852 systemd[1]: sshd@3-10.0.0.140:22-10.0.0.1:35382.service: Deactivated successfully.
Sep 4 16:09:34.604368 systemd[1]: session-4.scope: Deactivated successfully.
Sep 4 16:09:34.604923 systemd-logind[1497]: Session 4 logged out. Waiting for processes to exit.
Sep 4 16:09:34.607453 systemd[1]: Started sshd@4-10.0.0.140:22-10.0.0.1:35386.service - OpenSSH per-connection server daemon (10.0.0.1:35386).
Sep 4 16:09:34.608369 systemd-logind[1497]: Removed session 4.
Sep 4 16:09:34.665080 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 35386 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU
Sep 4 16:09:34.666094 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:09:34.670281 systemd-logind[1497]: New session 5 of user core.
Sep 4 16:09:34.681352 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 16:09:34.737260 sudo[1682]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 16:09:34.737512 sudo[1682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 16:09:34.752041 sudo[1682]: pam_unix(sudo:session): session closed for user root
Sep 4 16:09:34.753517 sshd[1681]: Connection closed by 10.0.0.1 port 35386
Sep 4 16:09:34.754443 sshd-session[1678]: pam_unix(sshd:session): session closed for user core
Sep 4 16:09:34.767984 systemd[1]: sshd@4-10.0.0.140:22-10.0.0.1:35386.service: Deactivated successfully.
Sep 4 16:09:34.770403 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 16:09:34.770991 systemd-logind[1497]: Session 5 logged out. Waiting for processes to exit.
Sep 4 16:09:34.773108 systemd[1]: Started sshd@5-10.0.0.140:22-10.0.0.1:35400.service - OpenSSH per-connection server daemon (10.0.0.1:35400).
Sep 4 16:09:34.773563 systemd-logind[1497]: Removed session 5.
Sep 4 16:09:34.835557 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 35400 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU
Sep 4 16:09:34.836625 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:09:34.839928 systemd-logind[1497]: New session 6 of user core.
Sep 4 16:09:34.849356 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 16:09:34.901757 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 16:09:34.902026 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 16:09:35.028368 sudo[1693]: pam_unix(sudo:session): session closed for user root
Sep 4 16:09:35.034396 sudo[1692]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 4 16:09:35.034646 sudo[1692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 16:09:35.042513 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 16:09:35.079966 augenrules[1715]: No rules
Sep 4 16:09:35.080938 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 16:09:35.081126 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 16:09:35.081898 sudo[1692]: pam_unix(sudo:session): session closed for user root
Sep 4 16:09:35.083302 sshd[1691]: Connection closed by 10.0.0.1 port 35400
Sep 4 16:09:35.083616 sshd-session[1688]: pam_unix(sshd:session): session closed for user core
Sep 4 16:09:35.094922 systemd[1]: sshd@5-10.0.0.140:22-10.0.0.1:35400.service: Deactivated successfully.
Sep 4 16:09:35.097342 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 16:09:35.097896 systemd-logind[1497]: Session 6 logged out. Waiting for processes to exit.
Sep 4 16:09:35.099869 systemd[1]: Started sshd@6-10.0.0.140:22-10.0.0.1:35416.service - OpenSSH per-connection server daemon (10.0.0.1:35416).
Sep 4 16:09:35.100335 systemd-logind[1497]: Removed session 6.
Sep 4 16:09:35.153478 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 35416 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU
Sep 4 16:09:35.154471 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:09:35.157785 systemd-logind[1497]: New session 7 of user core.
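The sudo entries above record each privileged command in a fixed `user : PWD=... ; USER=... ; COMMAND=...` shape. A minimal sketch (not part of the log; the regex is an assumption based only on the lines shown here) for pulling those commands out of a journal dump:

```python
import re

# Matches journald sudo lines like the one copied below from the log.
SUDO_RE = re.compile(r"sudo\[\d+\]:\s+(\w+) : PWD=(\S+) ; USER=(\w+) ; COMMAND=(.+)$")

line = ("Sep 4 16:09:34.901757 sudo[1693]: core : PWD=/home/core ; USER=root ; "
        "COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules "
        "/etc/audit/rules.d/99-default.rules")

m = SUDO_RE.search(line)
user, pwd, target, command = m.groups()
print(user, target, command.split()[0])  # core root /usr/sbin/rm
```

This is only a convenience for auditing the session transcript; `journalctl _COMM=sudo` would surface the same lines directly.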
Sep 4 16:09:35.170420 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 16:09:35.219687 sudo[1728]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 16:09:35.219925 sudo[1728]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 16:09:35.481735 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 16:09:35.499547 (dockerd)[1749]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 16:09:35.691628 dockerd[1749]: time="2025-09-04T16:09:35.691566044Z" level=info msg="Starting up"
Sep 4 16:09:35.692320 dockerd[1749]: time="2025-09-04T16:09:35.692302237Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 4 16:09:35.701624 dockerd[1749]: time="2025-09-04T16:09:35.701593604Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 4 16:09:35.734662 dockerd[1749]: time="2025-09-04T16:09:35.734580868Z" level=info msg="Loading containers: start."
Sep 4 16:09:35.745256 kernel: Initializing XFRM netlink socket
Sep 4 16:09:35.924056 systemd-networkd[1425]: docker0: Link UP
Sep 4 16:09:35.927125 dockerd[1749]: time="2025-09-04T16:09:35.927093021Z" level=info msg="Loading containers: done."
Sep 4 16:09:35.939931 dockerd[1749]: time="2025-09-04T16:09:35.939886717Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 16:09:35.940044 dockerd[1749]: time="2025-09-04T16:09:35.939959284Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 4 16:09:35.940044 dockerd[1749]: time="2025-09-04T16:09:35.940029128Z" level=info msg="Initializing buildkit"
Sep 4 16:09:35.961860 dockerd[1749]: time="2025-09-04T16:09:35.961824526Z" level=info msg="Completed buildkit initialization"
Sep 4 16:09:35.966262 dockerd[1749]: time="2025-09-04T16:09:35.966235038Z" level=info msg="Daemon has completed initialization"
Sep 4 16:09:35.966384 dockerd[1749]: time="2025-09-04T16:09:35.966276265Z" level=info msg="API listen on /run/docker.sock"
Sep 4 16:09:35.966494 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 16:09:36.477550 containerd[1513]: time="2025-09-04T16:09:36.477502112Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\""
Sep 4 16:09:36.713798 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1244218223-merged.mount: Deactivated successfully.
Sep 4 16:09:37.065611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2446024649.mount: Deactivated successfully.
Sep 4 16:09:38.263826 containerd[1513]: time="2025-09-04T16:09:38.263772755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:09:38.264670 containerd[1513]: time="2025-09-04T16:09:38.264346618Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328359"
Sep 4 16:09:38.265091 containerd[1513]: time="2025-09-04T16:09:38.265066799Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:09:38.267582 containerd[1513]: time="2025-09-04T16:09:38.267536426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:09:38.268479 containerd[1513]: time="2025-09-04T16:09:38.268445307Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 1.790897769s"
Sep 4 16:09:38.268534 containerd[1513]: time="2025-09-04T16:09:38.268480446Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\""
Sep 4 16:09:38.269141 containerd[1513]: time="2025-09-04T16:09:38.269118264Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\""
Sep 4 16:09:39.840016 containerd[1513]: time="2025-09-04T16:09:39.839222028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:09:39.841166 containerd[1513]: time="2025-09-04T16:09:39.841145222Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528554"
Sep 4 16:09:39.842225 containerd[1513]: time="2025-09-04T16:09:39.842202747Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:09:39.844920 containerd[1513]: time="2025-09-04T16:09:39.844884358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:09:39.845822 containerd[1513]: time="2025-09-04T16:09:39.845784524Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 1.576637846s"
Sep 4 16:09:39.845912 containerd[1513]: time="2025-09-04T16:09:39.845897540Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\""
Sep 4 16:09:39.846693 containerd[1513]: time="2025-09-04T16:09:39.846661920Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\""
Sep 4 16:09:40.774414 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 16:09:40.775760 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 16:09:40.909328 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 16:09:40.912775 (kubelet)[2034]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 16:09:40.941958 kubelet[2034]: E0904 16:09:40.941913 2034 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 16:09:40.945010 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 16:09:40.945265 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 16:09:40.947318 systemd[1]: kubelet.service: Consumed 133ms CPU time, 107.2M memory peak.
Sep 4 16:09:41.348174 containerd[1513]: time="2025-09-04T16:09:41.347324202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:09:41.348174 containerd[1513]: time="2025-09-04T16:09:41.348143119Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483529"
Sep 4 16:09:41.348748 containerd[1513]: time="2025-09-04T16:09:41.348714288Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:09:41.351277 containerd[1513]: time="2025-09-04T16:09:41.351070435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:09:41.352081 containerd[1513]: time="2025-09-04T16:09:41.351962584Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 1.505101607s"
Sep 4 16:09:41.352081 containerd[1513]: time="2025-09-04T16:09:41.351996839Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\""
Sep 4 16:09:41.352430 containerd[1513]: time="2025-09-04T16:09:41.352388370Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\""
Sep 4 16:09:42.259694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3251594221.mount: Deactivated successfully.
Sep 4 16:09:42.614338 containerd[1513]: time="2025-09-04T16:09:42.614181189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:09:42.615149 containerd[1513]: time="2025-09-04T16:09:42.614979756Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376726"
Sep 4 16:09:42.615960 containerd[1513]: time="2025-09-04T16:09:42.615920300Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:09:42.617655 containerd[1513]: time="2025-09-04T16:09:42.617615473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:09:42.618271 containerd[1513]: time="2025-09-04T16:09:42.618210317Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 1.265782409s"
Sep 4 16:09:42.618362 containerd[1513]: time="2025-09-04T16:09:42.618345132Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\""
Sep 4 16:09:42.618862 containerd[1513]: time="2025-09-04T16:09:42.618839614Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 4 16:09:43.101023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4049211030.mount: Deactivated successfully.
Sep 4 16:09:43.850850 containerd[1513]: time="2025-09-04T16:09:43.850797157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:09:43.852185 containerd[1513]: time="2025-09-04T16:09:43.852141153Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 4 16:09:43.852930 containerd[1513]: time="2025-09-04T16:09:43.852893761Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:09:43.889167 containerd[1513]: time="2025-09-04T16:09:43.889107682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:09:43.890255 containerd[1513]: time="2025-09-04T16:09:43.890204663Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.271333317s"
Sep 4 16:09:43.890310 containerd[1513]: time="2025-09-04T16:09:43.890260884Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 4 16:09:43.890774 containerd[1513]: time="2025-09-04T16:09:43.890754874Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 4 16:09:44.311739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3625923671.mount: Deactivated successfully.
Sep 4 16:09:44.317408 containerd[1513]: time="2025-09-04T16:09:44.317374952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 16:09:44.318032 containerd[1513]: time="2025-09-04T16:09:44.317778537Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 4 16:09:44.318711 containerd[1513]: time="2025-09-04T16:09:44.318676699Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 16:09:44.320533 containerd[1513]: time="2025-09-04T16:09:44.320494753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 16:09:44.321501 containerd[1513]: time="2025-09-04T16:09:44.321449856Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 430.667812ms"
Sep 4 16:09:44.321501 containerd[1513]: time="2025-09-04T16:09:44.321482468Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 4 16:09:44.322082 containerd[1513]: time="2025-09-04T16:09:44.322060836Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 4 16:09:44.837841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount797801619.mount: Deactivated successfully.
Sep 4 16:09:47.409185 containerd[1513]: time="2025-09-04T16:09:47.408701956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:09:47.409556 containerd[1513]: time="2025-09-04T16:09:47.409233033Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167"
Sep 4 16:09:47.410216 containerd[1513]: time="2025-09-04T16:09:47.410192837Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:09:47.413417 containerd[1513]: time="2025-09-04T16:09:47.413382302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:09:47.415009 containerd[1513]: time="2025-09-04T16:09:47.414979495Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.092889289s"
Sep 4 16:09:47.415109 containerd[1513]: time="2025-09-04T16:09:47.415092928Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Sep 4 16:09:51.195614 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 4 16:09:51.197351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 16:09:51.333847 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 16:09:51.337350 (kubelet)[2197]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 16:09:51.371554 kubelet[2197]: E0904 16:09:51.371517 2197 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 16:09:51.373960 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 16:09:51.374165 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 16:09:51.374539 systemd[1]: kubelet.service: Consumed 128ms CPU time, 107.5M memory peak.
Sep 4 16:09:52.903084 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 16:09:52.903242 systemd[1]: kubelet.service: Consumed 128ms CPU time, 107.5M memory peak.
Sep 4 16:09:52.905324 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 16:09:52.925199 systemd[1]: Reload requested from client PID 2212 ('systemctl') (unit session-7.scope)...
Sep 4 16:09:52.925217 systemd[1]: Reloading...
Sep 4 16:09:52.996345 zram_generator::config[2262]: No configuration found.
Sep 4 16:09:53.170468 systemd[1]: Reloading finished in 244 ms.
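Each of the containerd "Pulled image ... in \<duration\>" messages above reports its pull time in either seconds or milliseconds. A small sketch (assumed helper, not part of the log) to normalize those durations for, say, totalling the control-plane image pull time:

```python
import re

def pull_seconds(msg: str):
    """Extract the 'in <duration>' suffix of a containerd Pulled-image
    message and return it in seconds; None if no duration is present."""
    m = re.search(r'in ([0-9.]+)(ms|s)', msg)
    if not m:
        return None
    value, unit = float(m.group(1)), m.group(2)
    return value / 1000 if unit == "ms" else value

a = pull_seconds('size "267933" in 430.667812ms')   # pause:3.10, from the log
b = pull_seconds('size "67941650" in 3.092889289s') # etcd:3.5.16-0, from the log
print(round(a, 6), round(b, 6))
```

With the values in this log, the etcd pull (3.09 s) dominates; pause, at under half a second, is the fastest.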
Sep 4 16:09:53.206936 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 16:09:53.208909 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 16:09:53.210384 systemd[1]: kubelet.service: Deactivated successfully.
Sep 4 16:09:53.212255 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 16:09:53.212292 systemd[1]: kubelet.service: Consumed 88ms CPU time, 95.1M memory peak.
Sep 4 16:09:53.213604 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 16:09:53.334129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 16:09:53.352522 (kubelet)[2303]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 16:09:53.382864 kubelet[2303]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 16:09:53.382864 kubelet[2303]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 4 16:09:53.382864 kubelet[2303]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 16:09:53.383197 kubelet[2303]: I0904 16:09:53.382861 2303 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 16:09:53.982510 kubelet[2303]: I0904 16:09:53.982481 2303 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 4 16:09:53.983262 kubelet[2303]: I0904 16:09:53.983068 2303 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 16:09:53.983365 kubelet[2303]: I0904 16:09:53.983345 2303 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 4 16:09:54.003508 kubelet[2303]: E0904 16:09:54.003461 2303 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.140:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError"
Sep 4 16:09:54.004386 kubelet[2303]: I0904 16:09:54.004277 2303 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 16:09:54.009409 kubelet[2303]: I0904 16:09:54.009378 2303 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 4 16:09:54.011892 kubelet[2303]: I0904 16:09:54.011868 2303 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 16:09:54.012499 kubelet[2303]: I0904 16:09:54.012466 2303 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 16:09:54.012651 kubelet[2303]: I0904 16:09:54.012496 2303 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 4 16:09:54.012731 kubelet[2303]: I0904 16:09:54.012717 2303 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 16:09:54.012731 kubelet[2303]: I0904 16:09:54.012726 2303 container_manager_linux.go:304] "Creating device plugin manager"
Sep 4 16:09:54.012913 kubelet[2303]: I0904 16:09:54.012890 2303 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 16:09:54.015301 kubelet[2303]: I0904 16:09:54.015278 2303 kubelet.go:446] "Attempting to sync node with API server"
Sep 4 16:09:54.015301 kubelet[2303]: I0904 16:09:54.015300 2303 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 16:09:54.015352 kubelet[2303]: I0904 16:09:54.015323 2303 kubelet.go:352] "Adding apiserver pod source"
Sep 4 16:09:54.015352 kubelet[2303]: I0904 16:09:54.015332 2303 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 16:09:54.019247 kubelet[2303]: W0904 16:09:54.018344 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused
Sep 4 16:09:54.019247 kubelet[2303]: E0904 16:09:54.018404 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError"
Sep 4 16:09:54.019247 kubelet[2303]: W0904 16:09:54.018593 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused
Sep 4 16:09:54.019247 kubelet[2303]: E0904 16:09:54.018633 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError"
Sep 4 16:09:54.019247 kubelet[2303]: I0904 16:09:54.018792 2303 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 4 16:09:54.019426 kubelet[2303]: I0904 16:09:54.019399 2303 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 4 16:09:54.019535 kubelet[2303]: W0904 16:09:54.019519 2303 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 4 16:09:54.020468 kubelet[2303]: I0904 16:09:54.020444 2303 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 4 16:09:54.020529 kubelet[2303]: I0904 16:09:54.020485 2303 server.go:1287] "Started kubelet"
Sep 4 16:09:54.021772 kubelet[2303]: I0904 16:09:54.021739 2303 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 16:09:54.022831 kubelet[2303]: I0904 16:09:54.022776 2303 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 16:09:54.023088 kubelet[2303]: I0904 16:09:54.023012 2303 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 16:09:54.024441 kubelet[2303]: I0904 16:09:54.024417 2303 server.go:479] "Adding debug handlers to kubelet server"
Sep 4 16:09:54.024982 kubelet[2303]: I0904 16:09:54.024924 2303 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 16:09:54.025522 kubelet[2303]: E0904 16:09:54.025308 2303 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.140:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.140:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1862202fbc2bebe9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-04 16:09:54.020461545 +0000 UTC m=+0.665044028,LastTimestamp:2025-09-04 16:09:54.020461545 +0000 UTC m=+0.665044028,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 4 16:09:54.026141 kubelet[2303]: I0904 16:09:54.026116 2303 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 4 16:09:54.026247 kubelet[2303]: I0904 16:09:54.026216 2303 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 4 16:09:54.026409 kubelet[2303]: I0904 16:09:54.026386 2303 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 4 16:09:54.026460 kubelet[2303]: I0904 16:09:54.026447 2303 reconciler.go:26] "Reconciler: start to sync state"
Sep 4 16:09:54.026748 kubelet[2303]: W0904 16:09:54.026711 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused
Sep 4 16:09:54.026785 kubelet[2303]: E0904 16:09:54.026754 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError"
Sep 4 16:09:54.026873 kubelet[2303]: E0904 16:09:54.026195 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 16:09:54.027530 kubelet[2303]: E0904 16:09:54.027491 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="200ms"
Sep 4 16:09:54.027655 kubelet[2303]: I0904 16:09:54.027634 2303 factory.go:221] Registration of the systemd container factory successfully
Sep 4 16:09:54.027727 kubelet[2303]: I0904 16:09:54.027701 2303 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 16:09:54.028201 kubelet[2303]: E0904 16:09:54.028176 2303 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 16:09:54.028425 kubelet[2303]: I0904 16:09:54.028410 2303 factory.go:221] Registration of the containerd container factory successfully
Sep 4 16:09:54.038325 kubelet[2303]: I0904 16:09:54.038284 2303 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 16:09:54.039430 kubelet[2303]: I0904 16:09:54.039207 2303 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 16:09:54.039430 kubelet[2303]: I0904 16:09:54.039242 2303 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 4 16:09:54.039430 kubelet[2303]: I0904 16:09:54.039271 2303 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 4 16:09:54.039430 kubelet[2303]: I0904 16:09:54.039281 2303 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 16:09:54.039430 kubelet[2303]: E0904 16:09:54.039334 2303 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 16:09:54.043963 kubelet[2303]: W0904 16:09:54.043685 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Sep 4 16:09:54.043963 kubelet[2303]: E0904 16:09:54.043736 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Sep 4 16:09:54.044488 kubelet[2303]: I0904 16:09:54.044470 2303 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 16:09:54.044488 kubelet[2303]: I0904 16:09:54.044487 2303 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 16:09:54.044563 kubelet[2303]: I0904 16:09:54.044503 2303 state_mem.go:36] "Initialized new in-memory state store" Sep 4 16:09:54.126987 kubelet[2303]: E0904 16:09:54.126935 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:09:54.126987 kubelet[2303]: I0904 16:09:54.126978 2303 policy_none.go:49] "None policy: Start" Sep 4 16:09:54.127118 kubelet[2303]: I0904 16:09:54.127000 2303 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 16:09:54.127118 kubelet[2303]: I0904 16:09:54.127013 2303 state_mem.go:35] "Initializing new in-memory state store" Sep 4 16:09:54.132334 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Sep 4 16:09:54.140051 kubelet[2303]: E0904 16:09:54.140015 2303 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 16:09:54.147724 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 16:09:54.150582 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 16:09:54.158972 kubelet[2303]: I0904 16:09:54.158894 2303 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 16:09:54.159436 kubelet[2303]: I0904 16:09:54.159052 2303 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 16:09:54.159436 kubelet[2303]: I0904 16:09:54.159068 2303 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 16:09:54.159436 kubelet[2303]: I0904 16:09:54.159334 2303 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 16:09:54.160136 kubelet[2303]: E0904 16:09:54.160112 2303 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 4 16:09:54.160196 kubelet[2303]: E0904 16:09:54.160145 2303 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 4 16:09:54.228433 kubelet[2303]: E0904 16:09:54.228404 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="400ms" Sep 4 16:09:54.260331 kubelet[2303]: I0904 16:09:54.260226 2303 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 16:09:54.260653 kubelet[2303]: E0904 16:09:54.260606 2303 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Sep 4 16:09:54.348459 systemd[1]: Created slice kubepods-burstable-pod9aeb6385ab524403bf1228ce9dbd790b.slice - libcontainer container kubepods-burstable-pod9aeb6385ab524403bf1228ce9dbd790b.slice. Sep 4 16:09:54.360914 kubelet[2303]: E0904 16:09:54.360884 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 16:09:54.361618 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. Sep 4 16:09:54.363549 kubelet[2303]: E0904 16:09:54.363529 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 16:09:54.365225 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. 
Sep 4 16:09:54.367769 kubelet[2303]: E0904 16:09:54.367750 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 16:09:54.428208 kubelet[2303]: I0904 16:09:54.428155 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:09:54.428538 kubelet[2303]: I0904 16:09:54.428250 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:09:54.428538 kubelet[2303]: I0904 16:09:54.428288 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:09:54.428538 kubelet[2303]: I0904 16:09:54.428316 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 4 16:09:54.428538 kubelet[2303]: I0904 16:09:54.428342 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/9aeb6385ab524403bf1228ce9dbd790b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9aeb6385ab524403bf1228ce9dbd790b\") " pod="kube-system/kube-apiserver-localhost" Sep 4 16:09:54.428538 kubelet[2303]: I0904 16:09:54.428360 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9aeb6385ab524403bf1228ce9dbd790b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9aeb6385ab524403bf1228ce9dbd790b\") " pod="kube-system/kube-apiserver-localhost" Sep 4 16:09:54.428642 kubelet[2303]: I0904 16:09:54.428377 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:09:54.428642 kubelet[2303]: I0904 16:09:54.428392 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9aeb6385ab524403bf1228ce9dbd790b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9aeb6385ab524403bf1228ce9dbd790b\") " pod="kube-system/kube-apiserver-localhost" Sep 4 16:09:54.428642 kubelet[2303]: I0904 16:09:54.428406 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:09:54.462316 kubelet[2303]: I0904 16:09:54.462288 2303 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 16:09:54.462652 kubelet[2303]: E0904 16:09:54.462629 
2303 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Sep 4 16:09:54.629170 kubelet[2303]: E0904 16:09:54.629064 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="800ms" Sep 4 16:09:54.661462 kubelet[2303]: E0904 16:09:54.661406 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:09:54.662106 containerd[1513]: time="2025-09-04T16:09:54.661954951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9aeb6385ab524403bf1228ce9dbd790b,Namespace:kube-system,Attempt:0,}" Sep 4 16:09:54.667253 kubelet[2303]: E0904 16:09:54.667181 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:09:54.667812 containerd[1513]: time="2025-09-04T16:09:54.667775328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}" Sep 4 16:09:54.668201 kubelet[2303]: E0904 16:09:54.668013 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:09:54.668388 containerd[1513]: time="2025-09-04T16:09:54.668343635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}" Sep 4 16:09:54.684680 containerd[1513]: 
time="2025-09-04T16:09:54.684643988Z" level=info msg="connecting to shim 4edcc996b01bfed3c5fb741b6299d0310504a791e2d929ca6db626e8043b0e23" address="unix:///run/containerd/s/4054070f62f1a6b2dbe146563fe1ae4b7bf95a7efc30975a63fc15377ea50743" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:09:54.693512 containerd[1513]: time="2025-09-04T16:09:54.693468251Z" level=info msg="connecting to shim 65c8dec558ef66ebd470ad6d3eee8a638c09c444636e714bfb897b38f6f8d314" address="unix:///run/containerd/s/b07cca4e0fdfb1df62d9a4ebe33df89fa580d8e4e52b2c33695719d3878b6611" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:09:54.697092 containerd[1513]: time="2025-09-04T16:09:54.697054127Z" level=info msg="connecting to shim f671d54bafe52723a8639a8d880986d3701a35553d0a1c35b12f8d174f055098" address="unix:///run/containerd/s/d1e1fd098d78045d8ad9378b758b7f6918694529e6e2553af5ec62756f0cc6e7" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:09:54.714372 systemd[1]: Started cri-containerd-4edcc996b01bfed3c5fb741b6299d0310504a791e2d929ca6db626e8043b0e23.scope - libcontainer container 4edcc996b01bfed3c5fb741b6299d0310504a791e2d929ca6db626e8043b0e23. Sep 4 16:09:54.717298 systemd[1]: Started cri-containerd-65c8dec558ef66ebd470ad6d3eee8a638c09c444636e714bfb897b38f6f8d314.scope - libcontainer container 65c8dec558ef66ebd470ad6d3eee8a638c09c444636e714bfb897b38f6f8d314. Sep 4 16:09:54.732373 systemd[1]: Started cri-containerd-f671d54bafe52723a8639a8d880986d3701a35553d0a1c35b12f8d174f055098.scope - libcontainer container f671d54bafe52723a8639a8d880986d3701a35553d0a1c35b12f8d174f055098. 
Sep 4 16:09:54.753833 containerd[1513]: time="2025-09-04T16:09:54.753786741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9aeb6385ab524403bf1228ce9dbd790b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4edcc996b01bfed3c5fb741b6299d0310504a791e2d929ca6db626e8043b0e23\"" Sep 4 16:09:54.754631 kubelet[2303]: E0904 16:09:54.754607 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:09:54.757995 containerd[1513]: time="2025-09-04T16:09:54.757965769Z" level=info msg="CreateContainer within sandbox \"4edcc996b01bfed3c5fb741b6299d0310504a791e2d929ca6db626e8043b0e23\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 16:09:54.764692 containerd[1513]: time="2025-09-04T16:09:54.764649829Z" level=info msg="Container fed19e06370f47884d652cf3e1714c839cc0b4ef506251cee509b807326c0e2e: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:09:54.768506 containerd[1513]: time="2025-09-04T16:09:54.768475790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"65c8dec558ef66ebd470ad6d3eee8a638c09c444636e714bfb897b38f6f8d314\"" Sep 4 16:09:54.769089 kubelet[2303]: E0904 16:09:54.769069 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:09:54.770453 containerd[1513]: time="2025-09-04T16:09:54.770427118Z" level=info msg="CreateContainer within sandbox \"65c8dec558ef66ebd470ad6d3eee8a638c09c444636e714bfb897b38f6f8d314\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 16:09:54.773274 containerd[1513]: time="2025-09-04T16:09:54.772782002Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"f671d54bafe52723a8639a8d880986d3701a35553d0a1c35b12f8d174f055098\"" Sep 4 16:09:54.773983 containerd[1513]: time="2025-09-04T16:09:54.773954863Z" level=info msg="CreateContainer within sandbox \"4edcc996b01bfed3c5fb741b6299d0310504a791e2d929ca6db626e8043b0e23\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fed19e06370f47884d652cf3e1714c839cc0b4ef506251cee509b807326c0e2e\"" Sep 4 16:09:54.774453 containerd[1513]: time="2025-09-04T16:09:54.774427712Z" level=info msg="StartContainer for \"fed19e06370f47884d652cf3e1714c839cc0b4ef506251cee509b807326c0e2e\"" Sep 4 16:09:54.775980 containerd[1513]: time="2025-09-04T16:09:54.775943958Z" level=info msg="connecting to shim fed19e06370f47884d652cf3e1714c839cc0b4ef506251cee509b807326c0e2e" address="unix:///run/containerd/s/4054070f62f1a6b2dbe146563fe1ae4b7bf95a7efc30975a63fc15377ea50743" protocol=ttrpc version=3 Sep 4 16:09:54.776671 kubelet[2303]: E0904 16:09:54.776635 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:09:54.778952 containerd[1513]: time="2025-09-04T16:09:54.778907437Z" level=info msg="CreateContainer within sandbox \"f671d54bafe52723a8639a8d880986d3701a35553d0a1c35b12f8d174f055098\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 16:09:54.779134 containerd[1513]: time="2025-09-04T16:09:54.779110155Z" level=info msg="Container bae0686ec6a9cf93cd96979b108de6257724f362460c3287c5105ed68beb0097: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:09:54.791836 containerd[1513]: time="2025-09-04T16:09:54.791748737Z" level=info msg="CreateContainer within sandbox \"65c8dec558ef66ebd470ad6d3eee8a638c09c444636e714bfb897b38f6f8d314\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bae0686ec6a9cf93cd96979b108de6257724f362460c3287c5105ed68beb0097\"" Sep 4 16:09:54.792267 containerd[1513]: time="2025-09-04T16:09:54.792219906Z" level=info msg="StartContainer for \"bae0686ec6a9cf93cd96979b108de6257724f362460c3287c5105ed68beb0097\"" Sep 4 16:09:54.792697 containerd[1513]: time="2025-09-04T16:09:54.792671511Z" level=info msg="Container 1249c497801fae86dcca7c0efda23a15285baba55566b783f1308f62360148f5: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:09:54.793942 containerd[1513]: time="2025-09-04T16:09:54.793512990Z" level=info msg="connecting to shim bae0686ec6a9cf93cd96979b108de6257724f362460c3287c5105ed68beb0097" address="unix:///run/containerd/s/b07cca4e0fdfb1df62d9a4ebe33df89fa580d8e4e52b2c33695719d3878b6611" protocol=ttrpc version=3 Sep 4 16:09:54.800362 containerd[1513]: time="2025-09-04T16:09:54.800324634Z" level=info msg="CreateContainer within sandbox \"f671d54bafe52723a8639a8d880986d3701a35553d0a1c35b12f8d174f055098\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1249c497801fae86dcca7c0efda23a15285baba55566b783f1308f62360148f5\"" Sep 4 16:09:54.800919 containerd[1513]: time="2025-09-04T16:09:54.800891941Z" level=info msg="StartContainer for \"1249c497801fae86dcca7c0efda23a15285baba55566b783f1308f62360148f5\"" Sep 4 16:09:54.802353 containerd[1513]: time="2025-09-04T16:09:54.802330132Z" level=info msg="connecting to shim 1249c497801fae86dcca7c0efda23a15285baba55566b783f1308f62360148f5" address="unix:///run/containerd/s/d1e1fd098d78045d8ad9378b758b7f6918694529e6e2553af5ec62756f0cc6e7" protocol=ttrpc version=3 Sep 4 16:09:54.803424 systemd[1]: Started cri-containerd-fed19e06370f47884d652cf3e1714c839cc0b4ef506251cee509b807326c0e2e.scope - libcontainer container fed19e06370f47884d652cf3e1714c839cc0b4ef506251cee509b807326c0e2e. 
Sep 4 16:09:54.813419 systemd[1]: Started cri-containerd-bae0686ec6a9cf93cd96979b108de6257724f362460c3287c5105ed68beb0097.scope - libcontainer container bae0686ec6a9cf93cd96979b108de6257724f362460c3287c5105ed68beb0097. Sep 4 16:09:54.824410 systemd[1]: Started cri-containerd-1249c497801fae86dcca7c0efda23a15285baba55566b783f1308f62360148f5.scope - libcontainer container 1249c497801fae86dcca7c0efda23a15285baba55566b783f1308f62360148f5. Sep 4 16:09:54.854454 containerd[1513]: time="2025-09-04T16:09:54.854320613Z" level=info msg="StartContainer for \"fed19e06370f47884d652cf3e1714c839cc0b4ef506251cee509b807326c0e2e\" returns successfully" Sep 4 16:09:54.864469 containerd[1513]: time="2025-09-04T16:09:54.864431319Z" level=info msg="StartContainer for \"bae0686ec6a9cf93cd96979b108de6257724f362460c3287c5105ed68beb0097\" returns successfully" Sep 4 16:09:54.865543 kubelet[2303]: I0904 16:09:54.865508 2303 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 16:09:54.865938 kubelet[2303]: E0904 16:09:54.865880 2303 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Sep 4 16:09:54.871128 containerd[1513]: time="2025-09-04T16:09:54.871100056Z" level=info msg="StartContainer for \"1249c497801fae86dcca7c0efda23a15285baba55566b783f1308f62360148f5\" returns successfully" Sep 4 16:09:55.050364 kubelet[2303]: E0904 16:09:55.049641 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 16:09:55.050364 kubelet[2303]: E0904 16:09:55.049756 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:09:55.052389 kubelet[2303]: E0904 16:09:55.052371 2303 kubelet.go:3190] "No need to create a 
mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 16:09:55.052494 kubelet[2303]: E0904 16:09:55.052471 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:09:55.054711 kubelet[2303]: E0904 16:09:55.054545 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 16:09:55.054711 kubelet[2303]: E0904 16:09:55.054641 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:09:55.667217 kubelet[2303]: I0904 16:09:55.667181 2303 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 16:09:56.057146 kubelet[2303]: E0904 16:09:56.056778 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 16:09:56.057146 kubelet[2303]: E0904 16:09:56.056901 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:09:56.057146 kubelet[2303]: E0904 16:09:56.056949 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 16:09:56.057146 kubelet[2303]: E0904 16:09:56.057048 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:09:56.428097 kubelet[2303]: E0904 16:09:56.427750 2303 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes 
\"localhost\" not found" node="localhost" Sep 4 16:09:56.523261 kubelet[2303]: I0904 16:09:56.522741 2303 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 4 16:09:56.523261 kubelet[2303]: E0904 16:09:56.522777 2303 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 4 16:09:56.527132 kubelet[2303]: I0904 16:09:56.527089 2303 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 16:09:56.582429 kubelet[2303]: E0904 16:09:56.582387 2303 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 4 16:09:56.582429 kubelet[2303]: I0904 16:09:56.582416 2303 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 4 16:09:56.584865 kubelet[2303]: E0904 16:09:56.584839 2303 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 4 16:09:56.584865 kubelet[2303]: I0904 16:09:56.584864 2303 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 16:09:56.587725 kubelet[2303]: E0904 16:09:56.587694 2303 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 4 16:09:57.019483 kubelet[2303]: I0904 16:09:57.019448 2303 apiserver.go:52] "Watching apiserver" Sep 4 16:09:57.026850 kubelet[2303]: I0904 16:09:57.026810 2303 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 16:09:57.187724 kubelet[2303]: 
I0904 16:09:57.186401 2303 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 16:09:57.188298 kubelet[2303]: E0904 16:09:57.188254 2303 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 4 16:09:57.188428 kubelet[2303]: E0904 16:09:57.188400 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:09:58.109392 kubelet[2303]: I0904 16:09:58.109324 2303 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 16:09:58.113725 kubelet[2303]: E0904 16:09:58.113704 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:09:58.271512 systemd[1]: Reload requested from client PID 2581 ('systemctl') (unit session-7.scope)... Sep 4 16:09:58.271526 systemd[1]: Reloading... Sep 4 16:09:58.327754 zram_generator::config[2625]: No configuration found. Sep 4 16:09:58.491061 systemd[1]: Reloading finished in 219 ms. Sep 4 16:09:58.510578 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 16:09:58.518553 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 16:09:58.518776 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 16:09:58.518819 systemd[1]: kubelet.service: Consumed 1.005s CPU time, 127.8M memory peak. Sep 4 16:09:58.520794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 16:09:58.657662 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 16:09:58.661869 (kubelet)[2667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 16:09:58.704772 kubelet[2667]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 16:09:58.704772 kubelet[2667]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 16:09:58.704772 kubelet[2667]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 16:09:58.705083 kubelet[2667]: I0904 16:09:58.704818 2667 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 16:09:58.710909 kubelet[2667]: I0904 16:09:58.710864 2667 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 4 16:09:58.710909 kubelet[2667]: I0904 16:09:58.710891 2667 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 16:09:58.711144 kubelet[2667]: I0904 16:09:58.711106 2667 server.go:954] "Client rotation is on, will bootstrap in background" Sep 4 16:09:58.712291 kubelet[2667]: I0904 16:09:58.712275 2667 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 4 16:09:58.714368 kubelet[2667]: I0904 16:09:58.714348 2667 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 16:09:58.718144 kubelet[2667]: I0904 16:09:58.718113 2667 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 4 16:09:58.720659 kubelet[2667]: I0904 16:09:58.720640 2667 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 16:09:58.720818 kubelet[2667]: I0904 16:09:58.720798 2667 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 16:09:58.720960 kubelet[2667]: I0904 16:09:58.720819 2667 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPol
icyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 16:09:58.721036 kubelet[2667]: I0904 16:09:58.720969 2667 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 16:09:58.721036 kubelet[2667]: I0904 16:09:58.720978 2667 container_manager_linux.go:304] "Creating device plugin manager" Sep 4 16:09:58.721036 kubelet[2667]: I0904 16:09:58.721015 2667 state_mem.go:36] "Initialized new in-memory state store" Sep 4 16:09:58.721143 kubelet[2667]: I0904 16:09:58.721133 2667 kubelet.go:446] "Attempting to sync node with API server" Sep 4 16:09:58.721168 kubelet[2667]: I0904 16:09:58.721147 2667 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 16:09:58.721168 kubelet[2667]: I0904 16:09:58.721165 2667 kubelet.go:352] "Adding apiserver pod source" Sep 4 16:09:58.721220 kubelet[2667]: I0904 16:09:58.721174 2667 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 16:09:58.722192 kubelet[2667]: I0904 16:09:58.722154 2667 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 4 16:09:58.722760 kubelet[2667]: I0904 16:09:58.722742 2667 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 16:09:58.723249 kubelet[2667]: I0904 16:09:58.723172 2667 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 16:09:58.723249 kubelet[2667]: I0904 16:09:58.723209 2667 server.go:1287] "Started kubelet" Sep 4 16:09:58.725533 kubelet[2667]: I0904 16:09:58.725495 2667 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 16:09:58.725652 kubelet[2667]: 
I0904 16:09:58.725632 2667 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 16:09:58.725980 kubelet[2667]: I0904 16:09:58.725958 2667 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 16:09:58.727169 kubelet[2667]: I0904 16:09:58.727141 2667 server.go:479] "Adding debug handlers to kubelet server" Sep 4 16:09:58.727495 kubelet[2667]: I0904 16:09:58.727479 2667 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 16:09:58.731432 kubelet[2667]: I0904 16:09:58.731402 2667 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 16:09:58.732249 kubelet[2667]: I0904 16:09:58.731731 2667 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 16:09:58.732249 kubelet[2667]: I0904 16:09:58.731997 2667 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 16:09:58.732249 kubelet[2667]: I0904 16:09:58.732097 2667 reconciler.go:26] "Reconciler: start to sync state" Sep 4 16:09:58.732249 kubelet[2667]: E0904 16:09:58.732122 2667 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:09:58.737363 kubelet[2667]: E0904 16:09:58.737334 2667 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 16:09:58.742073 kubelet[2667]: I0904 16:09:58.741989 2667 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 16:09:58.742976 kubelet[2667]: I0904 16:09:58.742946 2667 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 16:09:58.742976 kubelet[2667]: I0904 16:09:58.742969 2667 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 4 16:09:58.743070 kubelet[2667]: I0904 16:09:58.742985 2667 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 4 16:09:58.743070 kubelet[2667]: I0904 16:09:58.742991 2667 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 16:09:58.743070 kubelet[2667]: E0904 16:09:58.743034 2667 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 16:09:58.748186 kubelet[2667]: I0904 16:09:58.748143 2667 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 16:09:58.751945 kubelet[2667]: I0904 16:09:58.751739 2667 factory.go:221] Registration of the containerd container factory successfully Sep 4 16:09:58.751945 kubelet[2667]: I0904 16:09:58.751758 2667 factory.go:221] Registration of the systemd container factory successfully Sep 4 16:09:58.777577 kubelet[2667]: I0904 16:09:58.777554 2667 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 16:09:58.777577 kubelet[2667]: I0904 16:09:58.777574 2667 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 16:09:58.777676 kubelet[2667]: I0904 16:09:58.777591 2667 state_mem.go:36] "Initialized new in-memory state store" Sep 4 16:09:58.777736 kubelet[2667]: I0904 16:09:58.777721 2667 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 16:09:58.777760 kubelet[2667]: I0904 16:09:58.777735 2667 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 16:09:58.777760 kubelet[2667]: I0904 16:09:58.777752 2667 policy_none.go:49] "None policy: Start" Sep 4 16:09:58.777760 kubelet[2667]: I0904 16:09:58.777760 2667 
memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 16:09:58.777816 kubelet[2667]: I0904 16:09:58.777768 2667 state_mem.go:35] "Initializing new in-memory state store" Sep 4 16:09:58.777867 kubelet[2667]: I0904 16:09:58.777857 2667 state_mem.go:75] "Updated machine memory state" Sep 4 16:09:58.781997 kubelet[2667]: I0904 16:09:58.781701 2667 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 16:09:58.782152 kubelet[2667]: I0904 16:09:58.782136 2667 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 16:09:58.782180 kubelet[2667]: I0904 16:09:58.782150 2667 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 16:09:58.782541 kubelet[2667]: I0904 16:09:58.782515 2667 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 16:09:58.783590 kubelet[2667]: E0904 16:09:58.783558 2667 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 4 16:09:58.843731 kubelet[2667]: I0904 16:09:58.843693 2667 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 16:09:58.843731 kubelet[2667]: I0904 16:09:58.843731 2667 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 16:09:58.844060 kubelet[2667]: I0904 16:09:58.843851 2667 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 4 16:09:58.849384 kubelet[2667]: E0904 16:09:58.849358 2667 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 4 16:09:58.887838 kubelet[2667]: I0904 16:09:58.887814 2667 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 16:09:58.893811 kubelet[2667]: I0904 16:09:58.893790 2667 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 4 16:09:58.893886 kubelet[2667]: I0904 16:09:58.893855 2667 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 4 16:09:58.934296 kubelet[2667]: I0904 16:09:58.934181 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:09:58.934296 kubelet[2667]: I0904 16:09:58.934225 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:09:58.934296 
kubelet[2667]: I0904 16:09:58.934259 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:09:58.934296 kubelet[2667]: I0904 16:09:58.934276 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:09:58.934416 kubelet[2667]: I0904 16:09:58.934311 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:09:58.934416 kubelet[2667]: I0904 16:09:58.934370 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9aeb6385ab524403bf1228ce9dbd790b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9aeb6385ab524403bf1228ce9dbd790b\") " pod="kube-system/kube-apiserver-localhost" Sep 4 16:09:58.934416 kubelet[2667]: I0904 16:09:58.934388 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 4 16:09:58.934490 
kubelet[2667]: I0904 16:09:58.934427 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9aeb6385ab524403bf1228ce9dbd790b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9aeb6385ab524403bf1228ce9dbd790b\") " pod="kube-system/kube-apiserver-localhost" Sep 4 16:09:58.934490 kubelet[2667]: I0904 16:09:58.934445 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9aeb6385ab524403bf1228ce9dbd790b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9aeb6385ab524403bf1228ce9dbd790b\") " pod="kube-system/kube-apiserver-localhost" Sep 4 16:09:59.149607 kubelet[2667]: E0904 16:09:59.149465 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:09:59.149607 kubelet[2667]: E0904 16:09:59.149522 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:09:59.149727 kubelet[2667]: E0904 16:09:59.149655 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:09:59.272558 sudo[2705]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 16:09:59.272799 sudo[2705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 4 16:09:59.583348 sudo[2705]: pam_unix(sudo:session): session closed for user root Sep 4 16:09:59.721911 kubelet[2667]: I0904 16:09:59.721692 2667 apiserver.go:52] "Watching apiserver" Sep 4 16:09:59.732281 kubelet[2667]: I0904 16:09:59.732220 2667 
desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 16:09:59.764436 kubelet[2667]: I0904 16:09:59.763794 2667 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 16:09:59.764593 kubelet[2667]: E0904 16:09:59.763965 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:09:59.766019 kubelet[2667]: I0904 16:09:59.763994 2667 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 16:09:59.770307 kubelet[2667]: E0904 16:09:59.770059 2667 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 4 16:09:59.770307 kubelet[2667]: E0904 16:09:59.770223 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:09:59.773295 kubelet[2667]: E0904 16:09:59.773263 2667 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 16:09:59.773409 kubelet[2667]: E0904 16:09:59.773389 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:09:59.785700 kubelet[2667]: I0904 16:09:59.785656 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.785642159 podStartE2EDuration="1.785642159s" podCreationTimestamp="2025-09-04 16:09:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 16:09:59.785515822 
+0000 UTC m=+1.119848700" watchObservedRunningTime="2025-09-04 16:09:59.785642159 +0000 UTC m=+1.119975037" Sep 4 16:09:59.800246 kubelet[2667]: I0904 16:09:59.800153 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.800136578 podStartE2EDuration="1.800136578s" podCreationTimestamp="2025-09-04 16:09:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 16:09:59.792978521 +0000 UTC m=+1.127311399" watchObservedRunningTime="2025-09-04 16:09:59.800136578 +0000 UTC m=+1.134469456" Sep 4 16:09:59.800363 kubelet[2667]: I0904 16:09:59.800265 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.800259955 podStartE2EDuration="1.800259955s" podCreationTimestamp="2025-09-04 16:09:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 16:09:59.799224694 +0000 UTC m=+1.133557572" watchObservedRunningTime="2025-09-04 16:09:59.800259955 +0000 UTC m=+1.134592873" Sep 4 16:10:00.765829 kubelet[2667]: E0904 16:10:00.765796 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:00.766125 sudo[1728]: pam_unix(sudo:session): session closed for user root Sep 4 16:10:00.766978 kubelet[2667]: E0904 16:10:00.766916 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:00.768822 sshd[1727]: Connection closed by 10.0.0.1 port 35416 Sep 4 16:10:00.769266 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Sep 4 16:10:00.773887 systemd[1]: 
sshd@6-10.0.0.140:22-10.0.0.1:35416.service: Deactivated successfully. Sep 4 16:10:00.775813 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 16:10:00.776036 systemd[1]: session-7.scope: Consumed 7.022s CPU time, 260.3M memory peak. Sep 4 16:10:00.777635 systemd-logind[1497]: Session 7 logged out. Waiting for processes to exit. Sep 4 16:10:00.778845 systemd-logind[1497]: Removed session 7. Sep 4 16:10:01.767434 kubelet[2667]: E0904 16:10:01.767399 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:02.375258 kubelet[2667]: I0904 16:10:02.375089 2667 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 16:10:02.375565 containerd[1513]: time="2025-09-04T16:10:02.375514226Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 16:10:02.375832 kubelet[2667]: I0904 16:10:02.375699 2667 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 16:10:02.978918 kubelet[2667]: E0904 16:10:02.978881 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:03.028940 systemd[1]: Created slice kubepods-besteffort-pod548507b2_a998_47b6_8c96_9969cd39e406.slice - libcontainer container kubepods-besteffort-pod548507b2_a998_47b6_8c96_9969cd39e406.slice. Sep 4 16:10:03.052200 systemd[1]: Created slice kubepods-burstable-pod2010a036_eac1_45ed_a4f5_e949ffe4d1d4.slice - libcontainer container kubepods-burstable-pod2010a036_eac1_45ed_a4f5_e949ffe4d1d4.slice. 
Sep 4 16:10:03.060472 kubelet[2667]: I0904 16:10:03.060434 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-clustermesh-secrets\") pod \"cilium-prgr9\" (UID: \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " pod="kube-system/cilium-prgr9" Sep 4 16:10:03.060472 kubelet[2667]: I0904 16:10:03.060474 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-lib-modules\") pod \"cilium-prgr9\" (UID: \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " pod="kube-system/cilium-prgr9" Sep 4 16:10:03.060626 kubelet[2667]: I0904 16:10:03.060491 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-host-proc-sys-net\") pod \"cilium-prgr9\" (UID: \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " pod="kube-system/cilium-prgr9" Sep 4 16:10:03.060626 kubelet[2667]: I0904 16:10:03.060507 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/548507b2-a998-47b6-8c96-9969cd39e406-lib-modules\") pod \"kube-proxy-qvqbm\" (UID: \"548507b2-a998-47b6-8c96-9969cd39e406\") " pod="kube-system/kube-proxy-qvqbm" Sep 4 16:10:03.060626 kubelet[2667]: I0904 16:10:03.060522 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-cilium-cgroup\") pod \"cilium-prgr9\" (UID: \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " pod="kube-system/cilium-prgr9" Sep 4 16:10:03.060626 kubelet[2667]: I0904 16:10:03.060536 2667 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-xtables-lock\") pod \"cilium-prgr9\" (UID: \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " pod="kube-system/cilium-prgr9" Sep 4 16:10:03.060626 kubelet[2667]: I0904 16:10:03.060564 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/548507b2-a998-47b6-8c96-9969cd39e406-kube-proxy\") pod \"kube-proxy-qvqbm\" (UID: \"548507b2-a998-47b6-8c96-9969cd39e406\") " pod="kube-system/kube-proxy-qvqbm" Sep 4 16:10:03.060732 kubelet[2667]: I0904 16:10:03.060587 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlh72\" (UniqueName: \"kubernetes.io/projected/548507b2-a998-47b6-8c96-9969cd39e406-kube-api-access-wlh72\") pod \"kube-proxy-qvqbm\" (UID: \"548507b2-a998-47b6-8c96-9969cd39e406\") " pod="kube-system/kube-proxy-qvqbm" Sep 4 16:10:03.060732 kubelet[2667]: I0904 16:10:03.060605 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-cni-path\") pod \"cilium-prgr9\" (UID: \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " pod="kube-system/cilium-prgr9" Sep 4 16:10:03.060732 kubelet[2667]: I0904 16:10:03.060621 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-etc-cni-netd\") pod \"cilium-prgr9\" (UID: \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " pod="kube-system/cilium-prgr9" Sep 4 16:10:03.060732 kubelet[2667]: I0904 16:10:03.060644 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/548507b2-a998-47b6-8c96-9969cd39e406-xtables-lock\") pod \"kube-proxy-qvqbm\" (UID: \"548507b2-a998-47b6-8c96-9969cd39e406\") " pod="kube-system/kube-proxy-qvqbm" Sep 4 16:10:03.060732 kubelet[2667]: I0904 16:10:03.060660 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-cilium-run\") pod \"cilium-prgr9\" (UID: \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " pod="kube-system/cilium-prgr9" Sep 4 16:10:03.060732 kubelet[2667]: I0904 16:10:03.060692 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-bpf-maps\") pod \"cilium-prgr9\" (UID: \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " pod="kube-system/cilium-prgr9" Sep 4 16:10:03.060841 kubelet[2667]: I0904 16:10:03.060708 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-cilium-config-path\") pod \"cilium-prgr9\" (UID: \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " pod="kube-system/cilium-prgr9" Sep 4 16:10:03.060841 kubelet[2667]: I0904 16:10:03.060728 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-host-proc-sys-kernel\") pod \"cilium-prgr9\" (UID: \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " pod="kube-system/cilium-prgr9" Sep 4 16:10:03.060841 kubelet[2667]: I0904 16:10:03.060744 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-hostproc\") pod \"cilium-prgr9\" (UID: 
\"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " pod="kube-system/cilium-prgr9" Sep 4 16:10:03.060841 kubelet[2667]: I0904 16:10:03.060758 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-hubble-tls\") pod \"cilium-prgr9\" (UID: \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " pod="kube-system/cilium-prgr9" Sep 4 16:10:03.060841 kubelet[2667]: I0904 16:10:03.060773 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5844\" (UniqueName: \"kubernetes.io/projected/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-kube-api-access-s5844\") pod \"cilium-prgr9\" (UID: \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " pod="kube-system/cilium-prgr9" Sep 4 16:10:03.341528 kubelet[2667]: E0904 16:10:03.341384 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:03.342059 containerd[1513]: time="2025-09-04T16:10:03.342017994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qvqbm,Uid:548507b2-a998-47b6-8c96-9969cd39e406,Namespace:kube-system,Attempt:0,}" Sep 4 16:10:03.355028 kubelet[2667]: E0904 16:10:03.354987 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:03.356093 containerd[1513]: time="2025-09-04T16:10:03.356058875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-prgr9,Uid:2010a036-eac1-45ed-a4f5-e949ffe4d1d4,Namespace:kube-system,Attempt:0,}" Sep 4 16:10:03.361242 containerd[1513]: time="2025-09-04T16:10:03.361203657Z" level=info msg="connecting to shim b6ad0b5890c4dc8e33618a6b09898f7e0c43d964a64ad2481d112cbe20276738" 
address="unix:///run/containerd/s/cb03e0e68f9960cb39d78619d876bbc76dfb4f32d00aab92ff37ab35ff811178" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:10:03.374329 containerd[1513]: time="2025-09-04T16:10:03.374221310Z" level=info msg="connecting to shim 076985bfad53e49d453c8aa8d09bdceacab34704b4aa99bfbb50813fdee6ec32" address="unix:///run/containerd/s/40b72c838e83bce6375e733fd88ce4ef2e6c66f73c53e0d00377c5fee833dbff" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:10:03.383376 systemd[1]: Started cri-containerd-b6ad0b5890c4dc8e33618a6b09898f7e0c43d964a64ad2481d112cbe20276738.scope - libcontainer container b6ad0b5890c4dc8e33618a6b09898f7e0c43d964a64ad2481d112cbe20276738. Sep 4 16:10:03.401748 systemd[1]: Started cri-containerd-076985bfad53e49d453c8aa8d09bdceacab34704b4aa99bfbb50813fdee6ec32.scope - libcontainer container 076985bfad53e49d453c8aa8d09bdceacab34704b4aa99bfbb50813fdee6ec32. Sep 4 16:10:03.412382 systemd[1]: Created slice kubepods-besteffort-podef442874_a7ef_43ac_965f_6746506b46fc.slice - libcontainer container kubepods-besteffort-podef442874_a7ef_43ac_965f_6746506b46fc.slice. 
Sep 4 16:10:03.435036 containerd[1513]: time="2025-09-04T16:10:03.434864306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qvqbm,Uid:548507b2-a998-47b6-8c96-9969cd39e406,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6ad0b5890c4dc8e33618a6b09898f7e0c43d964a64ad2481d112cbe20276738\"" Sep 4 16:10:03.437038 kubelet[2667]: E0904 16:10:03.436975 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:03.440843 containerd[1513]: time="2025-09-04T16:10:03.440665277Z" level=info msg="CreateContainer within sandbox \"b6ad0b5890c4dc8e33618a6b09898f7e0c43d964a64ad2481d112cbe20276738\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 16:10:03.450295 containerd[1513]: time="2025-09-04T16:10:03.450262770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-prgr9,Uid:2010a036-eac1-45ed-a4f5-e949ffe4d1d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"076985bfad53e49d453c8aa8d09bdceacab34704b4aa99bfbb50813fdee6ec32\"" Sep 4 16:10:03.451259 kubelet[2667]: E0904 16:10:03.451139 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:03.451975 containerd[1513]: time="2025-09-04T16:10:03.451951628Z" level=info msg="Container 6a62d0fa08f9a4cf5e5eb362876917a1953794d92e443014f01de6414b3ca05a: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:10:03.453149 containerd[1513]: time="2025-09-04T16:10:03.452998778Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 4 16:10:03.459901 containerd[1513]: time="2025-09-04T16:10:03.459860662Z" level=info msg="CreateContainer within sandbox \"b6ad0b5890c4dc8e33618a6b09898f7e0c43d964a64ad2481d112cbe20276738\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6a62d0fa08f9a4cf5e5eb362876917a1953794d92e443014f01de6414b3ca05a\"" Sep 4 16:10:03.460475 containerd[1513]: time="2025-09-04T16:10:03.460411520Z" level=info msg="StartContainer for \"6a62d0fa08f9a4cf5e5eb362876917a1953794d92e443014f01de6414b3ca05a\"" Sep 4 16:10:03.462281 containerd[1513]: time="2025-09-04T16:10:03.462120540Z" level=info msg="connecting to shim 6a62d0fa08f9a4cf5e5eb362876917a1953794d92e443014f01de6414b3ca05a" address="unix:///run/containerd/s/cb03e0e68f9960cb39d78619d876bbc76dfb4f32d00aab92ff37ab35ff811178" protocol=ttrpc version=3 Sep 4 16:10:03.464500 kubelet[2667]: I0904 16:10:03.464464 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq6zt\" (UniqueName: \"kubernetes.io/projected/ef442874-a7ef-43ac-965f-6746506b46fc-kube-api-access-jq6zt\") pod \"cilium-operator-6c4d7847fc-t5c5v\" (UID: \"ef442874-a7ef-43ac-965f-6746506b46fc\") " pod="kube-system/cilium-operator-6c4d7847fc-t5c5v" Sep 4 16:10:03.464549 kubelet[2667]: I0904 16:10:03.464510 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef442874-a7ef-43ac-965f-6746506b46fc-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-t5c5v\" (UID: \"ef442874-a7ef-43ac-965f-6746506b46fc\") " pod="kube-system/cilium-operator-6c4d7847fc-t5c5v" Sep 4 16:10:03.488396 systemd[1]: Started cri-containerd-6a62d0fa08f9a4cf5e5eb362876917a1953794d92e443014f01de6414b3ca05a.scope - libcontainer container 6a62d0fa08f9a4cf5e5eb362876917a1953794d92e443014f01de6414b3ca05a. 
Sep 4 16:10:03.519806 containerd[1513]: time="2025-09-04T16:10:03.519768420Z" level=info msg="StartContainer for \"6a62d0fa08f9a4cf5e5eb362876917a1953794d92e443014f01de6414b3ca05a\" returns successfully" Sep 4 16:10:03.716877 kubelet[2667]: E0904 16:10:03.716781 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:03.717746 containerd[1513]: time="2025-09-04T16:10:03.717687213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-t5c5v,Uid:ef442874-a7ef-43ac-965f-6746506b46fc,Namespace:kube-system,Attempt:0,}" Sep 4 16:10:03.734432 containerd[1513]: time="2025-09-04T16:10:03.734391975Z" level=info msg="connecting to shim 5790ef1afebba6361a6654d2de2e3008fec170acf195f72dd651e331f3545dda" address="unix:///run/containerd/s/3f99b075de881b93b9010a5c7334744ec8d820791acffd556aeccced295bb374" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:10:03.754384 systemd[1]: Started cri-containerd-5790ef1afebba6361a6654d2de2e3008fec170acf195f72dd651e331f3545dda.scope - libcontainer container 5790ef1afebba6361a6654d2de2e3008fec170acf195f72dd651e331f3545dda. 
Sep 4 16:10:03.774177 kubelet[2667]: E0904 16:10:03.774153 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:03.790586 containerd[1513]: time="2025-09-04T16:10:03.790529656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-t5c5v,Uid:ef442874-a7ef-43ac-965f-6746506b46fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"5790ef1afebba6361a6654d2de2e3008fec170acf195f72dd651e331f3545dda\"" Sep 4 16:10:03.792044 kubelet[2667]: E0904 16:10:03.792020 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:06.148364 kubelet[2667]: E0904 16:10:06.148337 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:06.164217 kubelet[2667]: I0904 16:10:06.164058 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qvqbm" podStartSLOduration=4.164040028 podStartE2EDuration="4.164040028s" podCreationTimestamp="2025-09-04 16:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 16:10:03.786369257 +0000 UTC m=+5.120702135" watchObservedRunningTime="2025-09-04 16:10:06.164040028 +0000 UTC m=+7.498372906" Sep 4 16:10:06.778831 kubelet[2667]: E0904 16:10:06.778797 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:10.589161 kubelet[2667]: E0904 16:10:10.588792 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:10.786027 kubelet[2667]: E0904 16:10:10.785997 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:12.987733 kubelet[2667]: E0904 16:10:12.987696 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:13.496741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount913109568.mount: Deactivated successfully. Sep 4 16:10:13.524873 update_engine[1498]: I20250904 16:10:13.524806 1498 update_attempter.cc:509] Updating boot flags... Sep 4 16:10:14.843385 containerd[1513]: time="2025-09-04T16:10:14.843333845Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:10:14.843786 containerd[1513]: time="2025-09-04T16:10:14.843747786Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 4 16:10:14.844671 containerd[1513]: time="2025-09-04T16:10:14.844640072Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:10:14.846285 containerd[1513]: time="2025-09-04T16:10:14.845941020Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size 
\"157636062\" in 11.392785985s" Sep 4 16:10:14.846285 containerd[1513]: time="2025-09-04T16:10:14.845972181Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 4 16:10:14.856823 containerd[1513]: time="2025-09-04T16:10:14.856780942Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 4 16:10:14.865253 containerd[1513]: time="2025-09-04T16:10:14.865171137Z" level=info msg="CreateContainer within sandbox \"076985bfad53e49d453c8aa8d09bdceacab34704b4aa99bfbb50813fdee6ec32\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 16:10:14.871214 containerd[1513]: time="2025-09-04T16:10:14.870624740Z" level=info msg="Container cb0fb304070389bc41a813427974049b886d866e310a9053656f173734c2cf2a: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:10:14.873935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount741905645.mount: Deactivated successfully. 
Sep 4 16:10:14.905096 containerd[1513]: time="2025-09-04T16:10:14.904974201Z" level=info msg="CreateContainer within sandbox \"076985bfad53e49d453c8aa8d09bdceacab34704b4aa99bfbb50813fdee6ec32\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cb0fb304070389bc41a813427974049b886d866e310a9053656f173734c2cf2a\"" Sep 4 16:10:14.907020 containerd[1513]: time="2025-09-04T16:10:14.906885980Z" level=info msg="StartContainer for \"cb0fb304070389bc41a813427974049b886d866e310a9053656f173734c2cf2a\"" Sep 4 16:10:14.907822 containerd[1513]: time="2025-09-04T16:10:14.907789867Z" level=info msg="connecting to shim cb0fb304070389bc41a813427974049b886d866e310a9053656f173734c2cf2a" address="unix:///run/containerd/s/40b72c838e83bce6375e733fd88ce4ef2e6c66f73c53e0d00377c5fee833dbff" protocol=ttrpc version=3 Sep 4 16:10:14.946400 systemd[1]: Started cri-containerd-cb0fb304070389bc41a813427974049b886d866e310a9053656f173734c2cf2a.scope - libcontainer container cb0fb304070389bc41a813427974049b886d866e310a9053656f173734c2cf2a. Sep 4 16:10:14.974673 containerd[1513]: time="2025-09-04T16:10:14.974638054Z" level=info msg="StartContainer for \"cb0fb304070389bc41a813427974049b886d866e310a9053656f173734c2cf2a\" returns successfully" Sep 4 16:10:14.986258 systemd[1]: cri-containerd-cb0fb304070389bc41a813427974049b886d866e310a9053656f173734c2cf2a.scope: Deactivated successfully. 
Sep 4 16:10:15.005326 containerd[1513]: time="2025-09-04T16:10:15.005276989Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cb0fb304070389bc41a813427974049b886d866e310a9053656f173734c2cf2a\" id:\"cb0fb304070389bc41a813427974049b886d866e310a9053656f173734c2cf2a\" pid:3110 exited_at:{seconds:1757002215 nanos:4861049}" Sep 4 16:10:15.009169 containerd[1513]: time="2025-09-04T16:10:15.009120856Z" level=info msg="received exit event container_id:\"cb0fb304070389bc41a813427974049b886d866e310a9053656f173734c2cf2a\" id:\"cb0fb304070389bc41a813427974049b886d866e310a9053656f173734c2cf2a\" pid:3110 exited_at:{seconds:1757002215 nanos:4861049}" Sep 4 16:10:15.046881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb0fb304070389bc41a813427974049b886d866e310a9053656f173734c2cf2a-rootfs.mount: Deactivated successfully. Sep 4 16:10:15.839925 kubelet[2667]: E0904 16:10:15.839871 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:15.843629 containerd[1513]: time="2025-09-04T16:10:15.843467179Z" level=info msg="CreateContainer within sandbox \"076985bfad53e49d453c8aa8d09bdceacab34704b4aa99bfbb50813fdee6ec32\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 16:10:15.852850 containerd[1513]: time="2025-09-04T16:10:15.852262127Z" level=info msg="Container ede9e560ef4d94758738fd1bfa23584d5f0dfd3b4540e44ecdd3e000fba0359c: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:10:15.858696 containerd[1513]: time="2025-09-04T16:10:15.858658078Z" level=info msg="CreateContainer within sandbox \"076985bfad53e49d453c8aa8d09bdceacab34704b4aa99bfbb50813fdee6ec32\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ede9e560ef4d94758738fd1bfa23584d5f0dfd3b4540e44ecdd3e000fba0359c\"" Sep 4 16:10:15.860313 containerd[1513]: time="2025-09-04T16:10:15.859275628Z" 
level=info msg="StartContainer for \"ede9e560ef4d94758738fd1bfa23584d5f0dfd3b4540e44ecdd3e000fba0359c\"" Sep 4 16:10:15.862176 containerd[1513]: time="2025-09-04T16:10:15.862137327Z" level=info msg="connecting to shim ede9e560ef4d94758738fd1bfa23584d5f0dfd3b4540e44ecdd3e000fba0359c" address="unix:///run/containerd/s/40b72c838e83bce6375e733fd88ce4ef2e6c66f73c53e0d00377c5fee833dbff" protocol=ttrpc version=3 Sep 4 16:10:15.885425 systemd[1]: Started cri-containerd-ede9e560ef4d94758738fd1bfa23584d5f0dfd3b4540e44ecdd3e000fba0359c.scope - libcontainer container ede9e560ef4d94758738fd1bfa23584d5f0dfd3b4540e44ecdd3e000fba0359c. Sep 4 16:10:15.915937 containerd[1513]: time="2025-09-04T16:10:15.915895741Z" level=info msg="StartContainer for \"ede9e560ef4d94758738fd1bfa23584d5f0dfd3b4540e44ecdd3e000fba0359c\" returns successfully" Sep 4 16:10:15.923203 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 16:10:15.923788 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 16:10:15.924018 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 4 16:10:15.926527 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 16:10:15.928447 systemd[1]: cri-containerd-ede9e560ef4d94758738fd1bfa23584d5f0dfd3b4540e44ecdd3e000fba0359c.scope: Deactivated successfully. 
Sep 4 16:10:15.933883 containerd[1513]: time="2025-09-04T16:10:15.933843933Z" level=info msg="received exit event container_id:\"ede9e560ef4d94758738fd1bfa23584d5f0dfd3b4540e44ecdd3e000fba0359c\" id:\"ede9e560ef4d94758738fd1bfa23584d5f0dfd3b4540e44ecdd3e000fba0359c\" pid:3153 exited_at:{seconds:1757002215 nanos:933652724}" Sep 4 16:10:15.934083 containerd[1513]: time="2025-09-04T16:10:15.934057984Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ede9e560ef4d94758738fd1bfa23584d5f0dfd3b4540e44ecdd3e000fba0359c\" id:\"ede9e560ef4d94758738fd1bfa23584d5f0dfd3b4540e44ecdd3e000fba0359c\" pid:3153 exited_at:{seconds:1757002215 nanos:933652724}" Sep 4 16:10:15.943066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount994645821.mount: Deactivated successfully. Sep 4 16:10:15.955383 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 16:10:15.957061 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ede9e560ef4d94758738fd1bfa23584d5f0dfd3b4540e44ecdd3e000fba0359c-rootfs.mount: Deactivated successfully. 
Sep 4 16:10:16.848292 kubelet[2667]: E0904 16:10:16.848124 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:16.851117 containerd[1513]: time="2025-09-04T16:10:16.851081304Z" level=info msg="CreateContainer within sandbox \"076985bfad53e49d453c8aa8d09bdceacab34704b4aa99bfbb50813fdee6ec32\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 16:10:16.859258 containerd[1513]: time="2025-09-04T16:10:16.858773334Z" level=info msg="Container 25db358bb527d2fae361ef30210bbf8ccaeea66e1527f69d47ddf198bfd3a5ca: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:10:16.867558 containerd[1513]: time="2025-09-04T16:10:16.867522213Z" level=info msg="CreateContainer within sandbox \"076985bfad53e49d453c8aa8d09bdceacab34704b4aa99bfbb50813fdee6ec32\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"25db358bb527d2fae361ef30210bbf8ccaeea66e1527f69d47ddf198bfd3a5ca\"" Sep 4 16:10:16.868410 containerd[1513]: time="2025-09-04T16:10:16.868376292Z" level=info msg="StartContainer for \"25db358bb527d2fae361ef30210bbf8ccaeea66e1527f69d47ddf198bfd3a5ca\"" Sep 4 16:10:16.869940 containerd[1513]: time="2025-09-04T16:10:16.869916842Z" level=info msg="connecting to shim 25db358bb527d2fae361ef30210bbf8ccaeea66e1527f69d47ddf198bfd3a5ca" address="unix:///run/containerd/s/40b72c838e83bce6375e733fd88ce4ef2e6c66f73c53e0d00377c5fee833dbff" protocol=ttrpc version=3 Sep 4 16:10:16.896392 systemd[1]: Started cri-containerd-25db358bb527d2fae361ef30210bbf8ccaeea66e1527f69d47ddf198bfd3a5ca.scope - libcontainer container 25db358bb527d2fae361ef30210bbf8ccaeea66e1527f69d47ddf198bfd3a5ca. Sep 4 16:10:16.930788 systemd[1]: cri-containerd-25db358bb527d2fae361ef30210bbf8ccaeea66e1527f69d47ddf198bfd3a5ca.scope: Deactivated successfully. 
Sep 4 16:10:16.934720 containerd[1513]: time="2025-09-04T16:10:16.934630712Z" level=info msg="StartContainer for \"25db358bb527d2fae361ef30210bbf8ccaeea66e1527f69d47ddf198bfd3a5ca\" returns successfully" Sep 4 16:10:16.942392 containerd[1513]: time="2025-09-04T16:10:16.942277460Z" level=info msg="received exit event container_id:\"25db358bb527d2fae361ef30210bbf8ccaeea66e1527f69d47ddf198bfd3a5ca\" id:\"25db358bb527d2fae361ef30210bbf8ccaeea66e1527f69d47ddf198bfd3a5ca\" pid:3212 exited_at:{seconds:1757002216 nanos:941982807}" Sep 4 16:10:16.942585 containerd[1513]: time="2025-09-04T16:10:16.942545993Z" level=info msg="TaskExit event in podsandbox handler container_id:\"25db358bb527d2fae361ef30210bbf8ccaeea66e1527f69d47ddf198bfd3a5ca\" id:\"25db358bb527d2fae361ef30210bbf8ccaeea66e1527f69d47ddf198bfd3a5ca\" pid:3212 exited_at:{seconds:1757002216 nanos:941982807}" Sep 4 16:10:16.960251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25db358bb527d2fae361ef30210bbf8ccaeea66e1527f69d47ddf198bfd3a5ca-rootfs.mount: Deactivated successfully. 
Sep 4 16:10:17.197289 containerd[1513]: time="2025-09-04T16:10:17.197179400Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:10:17.198091 containerd[1513]: time="2025-09-04T16:10:17.197962554Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 4 16:10:17.198785 containerd[1513]: time="2025-09-04T16:10:17.198747947Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:10:17.200055 containerd[1513]: time="2025-09-04T16:10:17.200024082Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.343200297s" Sep 4 16:10:17.200113 containerd[1513]: time="2025-09-04T16:10:17.200053683Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 4 16:10:17.203011 containerd[1513]: time="2025-09-04T16:10:17.202970208Z" level=info msg="CreateContainer within sandbox \"5790ef1afebba6361a6654d2de2e3008fec170acf195f72dd651e331f3545dda\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 4 16:10:17.209182 containerd[1513]: time="2025-09-04T16:10:17.208628689Z" level=info msg="Container 
935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:10:17.214754 containerd[1513]: time="2025-09-04T16:10:17.214719670Z" level=info msg="CreateContainer within sandbox \"5790ef1afebba6361a6654d2de2e3008fec170acf195f72dd651e331f3545dda\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb\"" Sep 4 16:10:17.215543 containerd[1513]: time="2025-09-04T16:10:17.215195970Z" level=info msg="StartContainer for \"935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb\"" Sep 4 16:10:17.216131 containerd[1513]: time="2025-09-04T16:10:17.216102009Z" level=info msg="connecting to shim 935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb" address="unix:///run/containerd/s/3f99b075de881b93b9010a5c7334744ec8d820791acffd556aeccced295bb374" protocol=ttrpc version=3 Sep 4 16:10:17.233448 systemd[1]: Started cri-containerd-935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb.scope - libcontainer container 935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb. 
Sep 4 16:10:17.275440 containerd[1513]: time="2025-09-04T16:10:17.275346700Z" level=info msg="StartContainer for \"935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb\" returns successfully" Sep 4 16:10:17.854249 kubelet[2667]: E0904 16:10:17.853476 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:17.856127 kubelet[2667]: E0904 16:10:17.856083 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:17.856438 containerd[1513]: time="2025-09-04T16:10:17.856397488Z" level=info msg="CreateContainer within sandbox \"076985bfad53e49d453c8aa8d09bdceacab34704b4aa99bfbb50813fdee6ec32\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 16:10:17.898421 containerd[1513]: time="2025-09-04T16:10:17.896548684Z" level=info msg="Container eb840a68cfb943d5fe07f72bf4a0b5f1bdd6177071ba8486e8cfb3f30e5667aa: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:10:17.905265 containerd[1513]: time="2025-09-04T16:10:17.905144091Z" level=info msg="CreateContainer within sandbox \"076985bfad53e49d453c8aa8d09bdceacab34704b4aa99bfbb50813fdee6ec32\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"eb840a68cfb943d5fe07f72bf4a0b5f1bdd6177071ba8486e8cfb3f30e5667aa\"" Sep 4 16:10:17.906252 containerd[1513]: time="2025-09-04T16:10:17.905981247Z" level=info msg="StartContainer for \"eb840a68cfb943d5fe07f72bf4a0b5f1bdd6177071ba8486e8cfb3f30e5667aa\"" Sep 4 16:10:17.906918 containerd[1513]: time="2025-09-04T16:10:17.906874965Z" level=info msg="connecting to shim eb840a68cfb943d5fe07f72bf4a0b5f1bdd6177071ba8486e8cfb3f30e5667aa" address="unix:///run/containerd/s/40b72c838e83bce6375e733fd88ce4ef2e6c66f73c53e0d00377c5fee833dbff" protocol=ttrpc version=3 Sep 4 
16:10:17.929377 systemd[1]: Started cri-containerd-eb840a68cfb943d5fe07f72bf4a0b5f1bdd6177071ba8486e8cfb3f30e5667aa.scope - libcontainer container eb840a68cfb943d5fe07f72bf4a0b5f1bdd6177071ba8486e8cfb3f30e5667aa. Sep 4 16:10:17.968575 systemd[1]: cri-containerd-eb840a68cfb943d5fe07f72bf4a0b5f1bdd6177071ba8486e8cfb3f30e5667aa.scope: Deactivated successfully. Sep 4 16:10:17.970863 containerd[1513]: time="2025-09-04T16:10:17.970742854Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb840a68cfb943d5fe07f72bf4a0b5f1bdd6177071ba8486e8cfb3f30e5667aa\" id:\"eb840a68cfb943d5fe07f72bf4a0b5f1bdd6177071ba8486e8cfb3f30e5667aa\" pid:3294 exited_at:{seconds:1757002217 nanos:968717528}" Sep 4 16:10:17.976481 containerd[1513]: time="2025-09-04T16:10:17.976367975Z" level=info msg="received exit event container_id:\"eb840a68cfb943d5fe07f72bf4a0b5f1bdd6177071ba8486e8cfb3f30e5667aa\" id:\"eb840a68cfb943d5fe07f72bf4a0b5f1bdd6177071ba8486e8cfb3f30e5667aa\" pid:3294 exited_at:{seconds:1757002217 nanos:968717528}" Sep 4 16:10:17.984579 containerd[1513]: time="2025-09-04T16:10:17.984528043Z" level=info msg="StartContainer for \"eb840a68cfb943d5fe07f72bf4a0b5f1bdd6177071ba8486e8cfb3f30e5667aa\" returns successfully" Sep 4 16:10:17.998246 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb840a68cfb943d5fe07f72bf4a0b5f1bdd6177071ba8486e8cfb3f30e5667aa-rootfs.mount: Deactivated successfully. 
Sep 4 16:10:18.861618 kubelet[2667]: E0904 16:10:18.861504 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:18.861618 kubelet[2667]: E0904 16:10:18.861541 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:18.863723 containerd[1513]: time="2025-09-04T16:10:18.863686906Z" level=info msg="CreateContainer within sandbox \"076985bfad53e49d453c8aa8d09bdceacab34704b4aa99bfbb50813fdee6ec32\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 16:10:18.877349 containerd[1513]: time="2025-09-04T16:10:18.875634625Z" level=info msg="Container 173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:10:18.876666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1080412050.mount: Deactivated successfully. 
Sep 4 16:10:18.884355 containerd[1513]: time="2025-09-04T16:10:18.884259890Z" level=info msg="CreateContainer within sandbox \"076985bfad53e49d453c8aa8d09bdceacab34704b4aa99bfbb50813fdee6ec32\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b\"" Sep 4 16:10:18.886895 containerd[1513]: time="2025-09-04T16:10:18.885743670Z" level=info msg="StartContainer for \"173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b\"" Sep 4 16:10:18.886895 containerd[1513]: time="2025-09-04T16:10:18.886586424Z" level=info msg="connecting to shim 173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b" address="unix:///run/containerd/s/40b72c838e83bce6375e733fd88ce4ef2e6c66f73c53e0d00377c5fee833dbff" protocol=ttrpc version=3 Sep 4 16:10:18.891883 kubelet[2667]: I0904 16:10:18.891841 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-t5c5v" podStartSLOduration=2.483815618 podStartE2EDuration="15.891824473s" podCreationTimestamp="2025-09-04 16:10:03 +0000 UTC" firstStartedPulling="2025-09-04 16:10:03.792887904 +0000 UTC m=+5.127220782" lastFinishedPulling="2025-09-04 16:10:17.200896759 +0000 UTC m=+18.535229637" observedRunningTime="2025-09-04 16:10:17.900000791 +0000 UTC m=+19.234333749" watchObservedRunningTime="2025-09-04 16:10:18.891824473 +0000 UTC m=+20.226157351" Sep 4 16:10:18.916397 systemd[1]: Started cri-containerd-173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b.scope - libcontainer container 173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b. 
Sep 4 16:10:18.942601 containerd[1513]: time="2025-09-04T16:10:18.942560986Z" level=info msg="StartContainer for \"173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b\" returns successfully" Sep 4 16:10:19.034730 containerd[1513]: time="2025-09-04T16:10:19.034595589Z" level=info msg="TaskExit event in podsandbox handler container_id:\"173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b\" id:\"581d07fb9b3f8daded02c9f6b65db698f4ead185f646dcf8286d7a8541d28c56\" pid:3364 exited_at:{seconds:1757002219 nanos:34328419}" Sep 4 16:10:19.041412 kubelet[2667]: I0904 16:10:19.041342 2667 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 4 16:10:19.079308 systemd[1]: Created slice kubepods-burstable-pod58d7b6a7_27a7_436b_b3de_3c0bad0b3f22.slice - libcontainer container kubepods-burstable-pod58d7b6a7_27a7_436b_b3de_3c0bad0b3f22.slice. Sep 4 16:10:19.083649 systemd[1]: Created slice kubepods-burstable-pod8646e853_7d96_4ff0_b7a1_a1d8793d63c8.slice - libcontainer container kubepods-burstable-pod8646e853_7d96_4ff0_b7a1_a1d8793d63c8.slice. 
Sep 4 16:10:19.186393 kubelet[2667]: I0904 16:10:19.186279 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr4wl\" (UniqueName: \"kubernetes.io/projected/8646e853-7d96-4ff0-b7a1-a1d8793d63c8-kube-api-access-tr4wl\") pod \"coredns-668d6bf9bc-4bk8g\" (UID: \"8646e853-7d96-4ff0-b7a1-a1d8793d63c8\") " pod="kube-system/coredns-668d6bf9bc-4bk8g" Sep 4 16:10:19.186591 kubelet[2667]: I0904 16:10:19.186318 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzvm2\" (UniqueName: \"kubernetes.io/projected/58d7b6a7-27a7-436b-b3de-3c0bad0b3f22-kube-api-access-mzvm2\") pod \"coredns-668d6bf9bc-zvcv5\" (UID: \"58d7b6a7-27a7-436b-b3de-3c0bad0b3f22\") " pod="kube-system/coredns-668d6bf9bc-zvcv5" Sep 4 16:10:19.186591 kubelet[2667]: I0904 16:10:19.186485 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58d7b6a7-27a7-436b-b3de-3c0bad0b3f22-config-volume\") pod \"coredns-668d6bf9bc-zvcv5\" (UID: \"58d7b6a7-27a7-436b-b3de-3c0bad0b3f22\") " pod="kube-system/coredns-668d6bf9bc-zvcv5" Sep 4 16:10:19.186704 kubelet[2667]: I0904 16:10:19.186510 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8646e853-7d96-4ff0-b7a1-a1d8793d63c8-config-volume\") pod \"coredns-668d6bf9bc-4bk8g\" (UID: \"8646e853-7d96-4ff0-b7a1-a1d8793d63c8\") " pod="kube-system/coredns-668d6bf9bc-4bk8g" Sep 4 16:10:19.384148 kubelet[2667]: E0904 16:10:19.384106 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:19.384990 containerd[1513]: time="2025-09-04T16:10:19.384957587Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-zvcv5,Uid:58d7b6a7-27a7-436b-b3de-3c0bad0b3f22,Namespace:kube-system,Attempt:0,}" Sep 4 16:10:19.387309 kubelet[2667]: E0904 16:10:19.387224 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:19.387939 containerd[1513]: time="2025-09-04T16:10:19.387597046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4bk8g,Uid:8646e853-7d96-4ff0-b7a1-a1d8793d63c8,Namespace:kube-system,Attempt:0,}" Sep 4 16:10:19.867525 kubelet[2667]: E0904 16:10:19.867485 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:20.868746 kubelet[2667]: E0904 16:10:20.868631 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:20.960153 systemd-networkd[1425]: cilium_host: Link UP Sep 4 16:10:20.960303 systemd-networkd[1425]: cilium_net: Link UP Sep 4 16:10:20.960523 systemd-networkd[1425]: cilium_host: Gained carrier Sep 4 16:10:20.960674 systemd-networkd[1425]: cilium_net: Gained carrier Sep 4 16:10:21.032722 systemd-networkd[1425]: cilium_vxlan: Link UP Sep 4 16:10:21.032728 systemd-networkd[1425]: cilium_vxlan: Gained carrier Sep 4 16:10:21.278340 kernel: NET: Registered PF_ALG protocol family Sep 4 16:10:21.395364 systemd-networkd[1425]: cilium_host: Gained IPv6LL Sep 4 16:10:21.821113 systemd-networkd[1425]: lxc_health: Link UP Sep 4 16:10:21.822921 systemd-networkd[1425]: lxc_health: Gained carrier Sep 4 16:10:21.850358 systemd-networkd[1425]: cilium_net: Gained IPv6LL Sep 4 16:10:21.870806 kubelet[2667]: E0904 16:10:21.870783 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:21.927289 kernel: eth0: renamed from tmpb62dc Sep 4 16:10:21.927189 systemd-networkd[1425]: lxc2714ece62082: Link UP Sep 4 16:10:21.929628 systemd-networkd[1425]: lxc2714ece62082: Gained carrier Sep 4 16:10:21.944417 systemd-networkd[1425]: lxc470ab8bb4c35: Link UP Sep 4 16:10:21.957274 kernel: eth0: renamed from tmp189f9 Sep 4 16:10:21.957758 systemd-networkd[1425]: lxc470ab8bb4c35: Gained carrier Sep 4 16:10:22.171393 systemd-networkd[1425]: cilium_vxlan: Gained IPv6LL Sep 4 16:10:23.003395 systemd-networkd[1425]: lxc_health: Gained IPv6LL Sep 4 16:10:23.003686 systemd-networkd[1425]: lxc470ab8bb4c35: Gained IPv6LL Sep 4 16:10:23.361086 kubelet[2667]: E0904 16:10:23.360695 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:23.386414 kubelet[2667]: I0904 16:10:23.385933 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-prgr9" podStartSLOduration=8.981367866 podStartE2EDuration="20.385912093s" podCreationTimestamp="2025-09-04 16:10:03 +0000 UTC" firstStartedPulling="2025-09-04 16:10:03.451824294 +0000 UTC m=+4.786157172" lastFinishedPulling="2025-09-04 16:10:14.856368521 +0000 UTC m=+16.190701399" observedRunningTime="2025-09-04 16:10:19.883000371 +0000 UTC m=+21.217333249" watchObservedRunningTime="2025-09-04 16:10:23.385912093 +0000 UTC m=+24.720244971" Sep 4 16:10:23.451405 systemd-networkd[1425]: lxc2714ece62082: Gained IPv6LL Sep 4 16:10:25.379471 containerd[1513]: time="2025-09-04T16:10:25.379410305Z" level=info msg="connecting to shim 189f9e70da4cbe2c5ad63d3146b346a02cbb6ecf004f3adf97184af4dc5772f5" address="unix:///run/containerd/s/f6968c67bdc109ca29d69752528f4fbbcefdd165abd5eef18ff349def876ab89" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:10:25.380652 containerd[1513]: 
time="2025-09-04T16:10:25.380555814Z" level=info msg="connecting to shim b62dcd7808e5b4e30d5aa47afdd1ceb842dbd9765a5afed55868c68587d4ca15" address="unix:///run/containerd/s/f3dd02faf688355442d3c2a9a96c881ee4e1ccdccdc4080b3b8500e7505756b0" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:10:25.406376 systemd[1]: Started cri-containerd-189f9e70da4cbe2c5ad63d3146b346a02cbb6ecf004f3adf97184af4dc5772f5.scope - libcontainer container 189f9e70da4cbe2c5ad63d3146b346a02cbb6ecf004f3adf97184af4dc5772f5. Sep 4 16:10:25.409432 systemd[1]: Started cri-containerd-b62dcd7808e5b4e30d5aa47afdd1ceb842dbd9765a5afed55868c68587d4ca15.scope - libcontainer container b62dcd7808e5b4e30d5aa47afdd1ceb842dbd9765a5afed55868c68587d4ca15. Sep 4 16:10:25.419084 systemd-resolved[1224]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 16:10:25.422282 systemd-resolved[1224]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 16:10:25.441889 containerd[1513]: time="2025-09-04T16:10:25.441852777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4bk8g,Uid:8646e853-7d96-4ff0-b7a1-a1d8793d63c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"189f9e70da4cbe2c5ad63d3146b346a02cbb6ecf004f3adf97184af4dc5772f5\"" Sep 4 16:10:25.446087 containerd[1513]: time="2025-09-04T16:10:25.445956321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zvcv5,Uid:58d7b6a7-27a7-436b-b3de-3c0bad0b3f22,Namespace:kube-system,Attempt:0,} returns sandbox id \"b62dcd7808e5b4e30d5aa47afdd1ceb842dbd9765a5afed55868c68587d4ca15\"" Sep 4 16:10:25.447921 kubelet[2667]: E0904 16:10:25.447896 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:25.448906 kubelet[2667]: E0904 16:10:25.448849 2667 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:25.456991 containerd[1513]: time="2025-09-04T16:10:25.456959522Z" level=info msg="CreateContainer within sandbox \"b62dcd7808e5b4e30d5aa47afdd1ceb842dbd9765a5afed55868c68587d4ca15\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 16:10:25.457108 containerd[1513]: time="2025-09-04T16:10:25.456961802Z" level=info msg="CreateContainer within sandbox \"189f9e70da4cbe2c5ad63d3146b346a02cbb6ecf004f3adf97184af4dc5772f5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 16:10:25.472166 containerd[1513]: time="2025-09-04T16:10:25.472130349Z" level=info msg="Container b8fb14a6b160809dc80a73875b837467d0060a99413e5b3391be80bfb6532efa: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:10:25.485673 containerd[1513]: time="2025-09-04T16:10:25.485630133Z" level=info msg="CreateContainer within sandbox \"b62dcd7808e5b4e30d5aa47afdd1ceb842dbd9765a5afed55868c68587d4ca15\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b8fb14a6b160809dc80a73875b837467d0060a99413e5b3391be80bfb6532efa\"" Sep 4 16:10:25.486245 containerd[1513]: time="2025-09-04T16:10:25.486213468Z" level=info msg="StartContainer for \"b8fb14a6b160809dc80a73875b837467d0060a99413e5b3391be80bfb6532efa\"" Sep 4 16:10:25.486405 containerd[1513]: time="2025-09-04T16:10:25.486356432Z" level=info msg="Container cb3d3f299fb7243280a75b95a59df42e5efc80104bb244b32310c7bf639810ad: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:10:25.487045 containerd[1513]: time="2025-09-04T16:10:25.487020489Z" level=info msg="connecting to shim b8fb14a6b160809dc80a73875b837467d0060a99413e5b3391be80bfb6532efa" address="unix:///run/containerd/s/f3dd02faf688355442d3c2a9a96c881ee4e1ccdccdc4080b3b8500e7505756b0" protocol=ttrpc version=3 Sep 4 16:10:25.492328 containerd[1513]: time="2025-09-04T16:10:25.492297303Z" level=info msg="CreateContainer 
within sandbox \"189f9e70da4cbe2c5ad63d3146b346a02cbb6ecf004f3adf97184af4dc5772f5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cb3d3f299fb7243280a75b95a59df42e5efc80104bb244b32310c7bf639810ad\"" Sep 4 16:10:25.492692 containerd[1513]: time="2025-09-04T16:10:25.492663992Z" level=info msg="StartContainer for \"cb3d3f299fb7243280a75b95a59df42e5efc80104bb244b32310c7bf639810ad\"" Sep 4 16:10:25.494619 containerd[1513]: time="2025-09-04T16:10:25.494594562Z" level=info msg="connecting to shim cb3d3f299fb7243280a75b95a59df42e5efc80104bb244b32310c7bf639810ad" address="unix:///run/containerd/s/f6968c67bdc109ca29d69752528f4fbbcefdd165abd5eef18ff349def876ab89" protocol=ttrpc version=3 Sep 4 16:10:25.502375 systemd[1]: Started cri-containerd-b8fb14a6b160809dc80a73875b837467d0060a99413e5b3391be80bfb6532efa.scope - libcontainer container b8fb14a6b160809dc80a73875b837467d0060a99413e5b3391be80bfb6532efa. Sep 4 16:10:25.525378 systemd[1]: Started cri-containerd-cb3d3f299fb7243280a75b95a59df42e5efc80104bb244b32310c7bf639810ad.scope - libcontainer container cb3d3f299fb7243280a75b95a59df42e5efc80104bb244b32310c7bf639810ad. 
Sep 4 16:10:25.543810 containerd[1513]: time="2025-09-04T16:10:25.543774896Z" level=info msg="StartContainer for \"b8fb14a6b160809dc80a73875b837467d0060a99413e5b3391be80bfb6532efa\" returns successfully" Sep 4 16:10:25.562195 containerd[1513]: time="2025-09-04T16:10:25.562162085Z" level=info msg="StartContainer for \"cb3d3f299fb7243280a75b95a59df42e5efc80104bb244b32310c7bf639810ad\" returns successfully" Sep 4 16:10:25.896754 kubelet[2667]: E0904 16:10:25.896665 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:25.897342 kubelet[2667]: E0904 16:10:25.896891 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:25.911643 kubelet[2667]: I0904 16:10:25.911589 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4bk8g" podStartSLOduration=22.911576954 podStartE2EDuration="22.911576954s" podCreationTimestamp="2025-09-04 16:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 16:10:25.910938098 +0000 UTC m=+27.245271016" watchObservedRunningTime="2025-09-04 16:10:25.911576954 +0000 UTC m=+27.245909832" Sep 4 16:10:25.922719 kubelet[2667]: I0904 16:10:25.922281 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zvcv5" podStartSLOduration=22.922268707 podStartE2EDuration="22.922268707s" podCreationTimestamp="2025-09-04 16:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 16:10:25.921820615 +0000 UTC m=+27.256153493" watchObservedRunningTime="2025-09-04 16:10:25.922268707 +0000 UTC 
m=+27.256601585" Sep 4 16:10:26.316419 systemd[1]: Started sshd@7-10.0.0.140:22-10.0.0.1:45728.service - OpenSSH per-connection server daemon (10.0.0.1:45728). Sep 4 16:10:26.386130 sshd[4010]: Accepted publickey for core from 10.0.0.1 port 45728 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 16:10:26.387251 sshd-session[4010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:10:26.391166 systemd-logind[1497]: New session 8 of user core. Sep 4 16:10:26.401389 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 16:10:26.521352 sshd[4013]: Connection closed by 10.0.0.1 port 45728 Sep 4 16:10:26.521705 sshd-session[4010]: pam_unix(sshd:session): session closed for user core Sep 4 16:10:26.525313 systemd[1]: sshd@7-10.0.0.140:22-10.0.0.1:45728.service: Deactivated successfully. Sep 4 16:10:26.527311 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 16:10:26.528060 systemd-logind[1497]: Session 8 logged out. Waiting for processes to exit. Sep 4 16:10:26.529000 systemd-logind[1497]: Removed session 8. Sep 4 16:10:31.536311 systemd[1]: Started sshd@8-10.0.0.140:22-10.0.0.1:38564.service - OpenSSH per-connection server daemon (10.0.0.1:38564). Sep 4 16:10:31.593623 sshd[4036]: Accepted publickey for core from 10.0.0.1 port 38564 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 16:10:31.595524 sshd-session[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:10:31.601884 systemd-logind[1497]: New session 9 of user core. Sep 4 16:10:31.613367 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 16:10:31.732666 sshd[4039]: Connection closed by 10.0.0.1 port 38564 Sep 4 16:10:31.732974 sshd-session[4036]: pam_unix(sshd:session): session closed for user core Sep 4 16:10:31.737620 systemd[1]: sshd@8-10.0.0.140:22-10.0.0.1:38564.service: Deactivated successfully. 
Sep 4 16:10:31.740807 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 16:10:31.742106 systemd-logind[1497]: Session 9 logged out. Waiting for processes to exit. Sep 4 16:10:31.743050 systemd-logind[1497]: Removed session 9. Sep 4 16:10:32.030267 kubelet[2667]: I0904 16:10:32.030179 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 16:10:32.030605 kubelet[2667]: E0904 16:10:32.030582 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:32.906899 kubelet[2667]: E0904 16:10:32.906864 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:35.892767 kubelet[2667]: E0904 16:10:35.892720 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:35.893112 kubelet[2667]: E0904 16:10:35.892795 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:35.913575 kubelet[2667]: E0904 16:10:35.913490 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:35.916317 kubelet[2667]: E0904 16:10:35.916251 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:10:36.749623 systemd[1]: Started sshd@9-10.0.0.140:22-10.0.0.1:38576.service - OpenSSH per-connection server daemon (10.0.0.1:38576). 
Sep 4 16:10:36.804615 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 38576 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 16:10:36.805772 sshd-session[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:10:36.809481 systemd-logind[1497]: New session 10 of user core. Sep 4 16:10:36.826542 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 16:10:36.944197 sshd[4064]: Connection closed by 10.0.0.1 port 38576 Sep 4 16:10:36.945757 sshd-session[4061]: pam_unix(sshd:session): session closed for user core Sep 4 16:10:36.948640 systemd[1]: sshd@9-10.0.0.140:22-10.0.0.1:38576.service: Deactivated successfully. Sep 4 16:10:36.950154 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 16:10:36.950811 systemd-logind[1497]: Session 10 logged out. Waiting for processes to exit. Sep 4 16:10:36.954741 systemd-logind[1497]: Removed session 10. Sep 4 16:10:41.958198 systemd[1]: Started sshd@10-10.0.0.140:22-10.0.0.1:44260.service - OpenSSH per-connection server daemon (10.0.0.1:44260). Sep 4 16:10:42.012814 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 44260 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 16:10:42.013857 sshd-session[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:10:42.018918 systemd-logind[1497]: New session 11 of user core. Sep 4 16:10:42.033418 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 16:10:42.161611 sshd[4082]: Connection closed by 10.0.0.1 port 44260 Sep 4 16:10:42.162213 sshd-session[4079]: pam_unix(sshd:session): session closed for user core Sep 4 16:10:42.174421 systemd[1]: sshd@10-10.0.0.140:22-10.0.0.1:44260.service: Deactivated successfully. Sep 4 16:10:42.177059 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 16:10:42.177874 systemd-logind[1497]: Session 11 logged out. Waiting for processes to exit. 
Sep 4 16:10:42.180516 systemd[1]: Started sshd@11-10.0.0.140:22-10.0.0.1:44274.service - OpenSSH per-connection server daemon (10.0.0.1:44274). Sep 4 16:10:42.180994 systemd-logind[1497]: Removed session 11. Sep 4 16:10:42.242931 sshd[4096]: Accepted publickey for core from 10.0.0.1 port 44274 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 16:10:42.244062 sshd-session[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:10:42.248262 systemd-logind[1497]: New session 12 of user core. Sep 4 16:10:42.267371 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 16:10:42.419640 sshd[4099]: Connection closed by 10.0.0.1 port 44274 Sep 4 16:10:42.420409 sshd-session[4096]: pam_unix(sshd:session): session closed for user core Sep 4 16:10:42.432968 systemd[1]: sshd@11-10.0.0.140:22-10.0.0.1:44274.service: Deactivated successfully. Sep 4 16:10:42.440063 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 16:10:42.443380 systemd-logind[1497]: Session 12 logged out. Waiting for processes to exit. Sep 4 16:10:42.444827 systemd[1]: Started sshd@12-10.0.0.140:22-10.0.0.1:44280.service - OpenSSH per-connection server daemon (10.0.0.1:44280). Sep 4 16:10:42.446174 systemd-logind[1497]: Removed session 12. Sep 4 16:10:42.500604 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 44280 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 16:10:42.501731 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:10:42.506300 systemd-logind[1497]: New session 13 of user core. Sep 4 16:10:42.516384 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 16:10:42.631608 sshd[4114]: Connection closed by 10.0.0.1 port 44280 Sep 4 16:10:42.632013 sshd-session[4111]: pam_unix(sshd:session): session closed for user core Sep 4 16:10:42.636644 systemd[1]: sshd@12-10.0.0.140:22-10.0.0.1:44280.service: Deactivated successfully. 
Sep 4 16:10:42.638194 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 16:10:42.638892 systemd-logind[1497]: Session 13 logged out. Waiting for processes to exit. Sep 4 16:10:42.639844 systemd-logind[1497]: Removed session 13. Sep 4 16:10:47.642261 systemd[1]: Started sshd@13-10.0.0.140:22-10.0.0.1:44284.service - OpenSSH per-connection server daemon (10.0.0.1:44284). Sep 4 16:10:47.700874 sshd[4127]: Accepted publickey for core from 10.0.0.1 port 44284 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 16:10:47.702141 sshd-session[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:10:47.706054 systemd-logind[1497]: New session 14 of user core. Sep 4 16:10:47.721357 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 16:10:47.829609 sshd[4130]: Connection closed by 10.0.0.1 port 44284 Sep 4 16:10:47.828520 sshd-session[4127]: pam_unix(sshd:session): session closed for user core Sep 4 16:10:47.831813 systemd[1]: sshd@13-10.0.0.140:22-10.0.0.1:44284.service: Deactivated successfully. Sep 4 16:10:47.833363 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 16:10:47.834043 systemd-logind[1497]: Session 14 logged out. Waiting for processes to exit. Sep 4 16:10:47.834942 systemd-logind[1497]: Removed session 14. Sep 4 16:10:52.843452 systemd[1]: Started sshd@14-10.0.0.140:22-10.0.0.1:54044.service - OpenSSH per-connection server daemon (10.0.0.1:54044). Sep 4 16:10:52.889809 sshd[4144]: Accepted publickey for core from 10.0.0.1 port 54044 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 16:10:52.890871 sshd-session[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:10:52.894145 systemd-logind[1497]: New session 15 of user core. Sep 4 16:10:52.901388 systemd[1]: Started session-15.scope - Session 15 of User core. 
Sep 4 16:10:53.010644 sshd[4147]: Connection closed by 10.0.0.1 port 54044 Sep 4 16:10:53.010023 sshd-session[4144]: pam_unix(sshd:session): session closed for user core Sep 4 16:10:53.028152 systemd[1]: sshd@14-10.0.0.140:22-10.0.0.1:54044.service: Deactivated successfully. Sep 4 16:10:53.029651 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 16:10:53.031382 systemd-logind[1497]: Session 15 logged out. Waiting for processes to exit. Sep 4 16:10:53.033803 systemd[1]: Started sshd@15-10.0.0.140:22-10.0.0.1:54058.service - OpenSSH per-connection server daemon (10.0.0.1:54058). Sep 4 16:10:53.034300 systemd-logind[1497]: Removed session 15. Sep 4 16:10:53.098558 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 54058 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 16:10:53.099540 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:10:53.102899 systemd-logind[1497]: New session 16 of user core. Sep 4 16:10:53.109366 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 16:10:53.276480 sshd[4163]: Connection closed by 10.0.0.1 port 54058 Sep 4 16:10:53.277123 sshd-session[4160]: pam_unix(sshd:session): session closed for user core Sep 4 16:10:53.289141 systemd[1]: sshd@15-10.0.0.140:22-10.0.0.1:54058.service: Deactivated successfully. Sep 4 16:10:53.290689 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 16:10:53.291445 systemd-logind[1497]: Session 16 logged out. Waiting for processes to exit. Sep 4 16:10:53.293645 systemd[1]: Started sshd@16-10.0.0.140:22-10.0.0.1:54064.service - OpenSSH per-connection server daemon (10.0.0.1:54064). Sep 4 16:10:53.295556 systemd-logind[1497]: Removed session 16. 
Sep 4 16:10:53.357934 sshd[4174]: Accepted publickey for core from 10.0.0.1 port 54064 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 16:10:53.358971 sshd-session[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:10:53.363103 systemd-logind[1497]: New session 17 of user core. Sep 4 16:10:53.369357 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 16:10:53.925886 sshd[4177]: Connection closed by 10.0.0.1 port 54064 Sep 4 16:10:53.926434 sshd-session[4174]: pam_unix(sshd:session): session closed for user core Sep 4 16:10:53.938541 systemd[1]: sshd@16-10.0.0.140:22-10.0.0.1:54064.service: Deactivated successfully. Sep 4 16:10:53.941804 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 16:10:53.943965 systemd-logind[1497]: Session 17 logged out. Waiting for processes to exit. Sep 4 16:10:53.947442 systemd-logind[1497]: Removed session 17. Sep 4 16:10:53.948830 systemd[1]: Started sshd@17-10.0.0.140:22-10.0.0.1:54078.service - OpenSSH per-connection server daemon (10.0.0.1:54078). Sep 4 16:10:54.005493 sshd[4197]: Accepted publickey for core from 10.0.0.1 port 54078 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 16:10:54.006570 sshd-session[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:10:54.010580 systemd-logind[1497]: New session 18 of user core. Sep 4 16:10:54.017366 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 16:10:54.228279 sshd[4200]: Connection closed by 10.0.0.1 port 54078 Sep 4 16:10:54.227848 sshd-session[4197]: pam_unix(sshd:session): session closed for user core Sep 4 16:10:54.239119 systemd[1]: sshd@17-10.0.0.140:22-10.0.0.1:54078.service: Deactivated successfully. Sep 4 16:10:54.240781 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 16:10:54.241637 systemd-logind[1497]: Session 18 logged out. Waiting for processes to exit. 
Sep 4 16:10:54.243758 systemd[1]: Started sshd@18-10.0.0.140:22-10.0.0.1:54086.service - OpenSSH per-connection server daemon (10.0.0.1:54086). Sep 4 16:10:54.244787 systemd-logind[1497]: Removed session 18. Sep 4 16:10:54.301925 sshd[4212]: Accepted publickey for core from 10.0.0.1 port 54086 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 16:10:54.303081 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:10:54.306802 systemd-logind[1497]: New session 19 of user core. Sep 4 16:10:54.321458 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 16:10:54.431011 sshd[4215]: Connection closed by 10.0.0.1 port 54086 Sep 4 16:10:54.431348 sshd-session[4212]: pam_unix(sshd:session): session closed for user core Sep 4 16:10:54.435041 systemd[1]: sshd@18-10.0.0.140:22-10.0.0.1:54086.service: Deactivated successfully. Sep 4 16:10:54.436895 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 16:10:54.437716 systemd-logind[1497]: Session 19 logged out. Waiting for processes to exit. Sep 4 16:10:54.438941 systemd-logind[1497]: Removed session 19. Sep 4 16:10:59.446634 systemd[1]: Started sshd@19-10.0.0.140:22-10.0.0.1:54096.service - OpenSSH per-connection server daemon (10.0.0.1:54096). Sep 4 16:10:59.499618 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 54096 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 16:10:59.500682 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:10:59.504105 systemd-logind[1497]: New session 20 of user core. Sep 4 16:10:59.510374 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 16:10:59.618783 sshd[4235]: Connection closed by 10.0.0.1 port 54096 Sep 4 16:10:59.619428 sshd-session[4232]: pam_unix(sshd:session): session closed for user core Sep 4 16:10:59.622922 systemd[1]: sshd@19-10.0.0.140:22-10.0.0.1:54096.service: Deactivated successfully. 
Sep 4 16:10:59.624513 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 16:10:59.626909 systemd-logind[1497]: Session 20 logged out. Waiting for processes to exit. Sep 4 16:10:59.628016 systemd-logind[1497]: Removed session 20. Sep 4 16:11:04.633878 systemd[1]: Started sshd@20-10.0.0.140:22-10.0.0.1:41014.service - OpenSSH per-connection server daemon (10.0.0.1:41014). Sep 4 16:11:04.699801 sshd[4250]: Accepted publickey for core from 10.0.0.1 port 41014 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 16:11:04.700839 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:11:04.704288 systemd-logind[1497]: New session 21 of user core. Sep 4 16:11:04.718414 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 16:11:04.821454 sshd[4253]: Connection closed by 10.0.0.1 port 41014 Sep 4 16:11:04.821650 sshd-session[4250]: pam_unix(sshd:session): session closed for user core Sep 4 16:11:04.825287 systemd[1]: sshd@20-10.0.0.140:22-10.0.0.1:41014.service: Deactivated successfully. Sep 4 16:11:04.828707 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 16:11:04.829386 systemd-logind[1497]: Session 21 logged out. Waiting for processes to exit. Sep 4 16:11:04.831107 systemd-logind[1497]: Removed session 21. Sep 4 16:11:09.837418 systemd[1]: Started sshd@21-10.0.0.140:22-10.0.0.1:41028.service - OpenSSH per-connection server daemon (10.0.0.1:41028). Sep 4 16:11:09.883888 sshd[4267]: Accepted publickey for core from 10.0.0.1 port 41028 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 16:11:09.884879 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:11:09.888288 systemd-logind[1497]: New session 22 of user core. Sep 4 16:11:09.901404 systemd[1]: Started session-22.scope - Session 22 of User core. 
Sep 4 16:11:10.007211 sshd[4270]: Connection closed by 10.0.0.1 port 41028 Sep 4 16:11:10.007514 sshd-session[4267]: pam_unix(sshd:session): session closed for user core Sep 4 16:11:10.010769 systemd[1]: sshd@21-10.0.0.140:22-10.0.0.1:41028.service: Deactivated successfully. Sep 4 16:11:10.012329 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 16:11:10.012944 systemd-logind[1497]: Session 22 logged out. Waiting for processes to exit. Sep 4 16:11:10.013826 systemd-logind[1497]: Removed session 22. Sep 4 16:11:14.746873 kubelet[2667]: E0904 16:11:14.746831 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:11:15.030369 systemd[1]: Started sshd@22-10.0.0.140:22-10.0.0.1:55370.service - OpenSSH per-connection server daemon (10.0.0.1:55370). Sep 4 16:11:15.101802 sshd[4284]: Accepted publickey for core from 10.0.0.1 port 55370 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 16:11:15.103552 sshd-session[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:11:15.108619 systemd-logind[1497]: New session 23 of user core. Sep 4 16:11:15.120450 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 16:11:15.228973 sshd[4287]: Connection closed by 10.0.0.1 port 55370 Sep 4 16:11:15.229523 sshd-session[4284]: pam_unix(sshd:session): session closed for user core Sep 4 16:11:15.241385 systemd[1]: sshd@22-10.0.0.140:22-10.0.0.1:55370.service: Deactivated successfully. Sep 4 16:11:15.243022 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 16:11:15.243828 systemd-logind[1497]: Session 23 logged out. Waiting for processes to exit. Sep 4 16:11:15.246348 systemd[1]: Started sshd@23-10.0.0.140:22-10.0.0.1:55384.service - OpenSSH per-connection server daemon (10.0.0.1:55384). Sep 4 16:11:15.247209 systemd-logind[1497]: Removed session 23. 
Sep 4 16:11:15.312472 sshd[4300]: Accepted publickey for core from 10.0.0.1 port 55384 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 16:11:15.313848 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:11:15.321449 systemd-logind[1497]: New session 24 of user core. Sep 4 16:11:15.335414 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 16:11:17.525300 containerd[1513]: time="2025-09-04T16:11:17.525249835Z" level=info msg="StopContainer for \"935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb\" with timeout 30 (s)" Sep 4 16:11:17.526754 containerd[1513]: time="2025-09-04T16:11:17.526729723Z" level=info msg="Stop container \"935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb\" with signal terminated" Sep 4 16:11:17.539370 systemd[1]: cri-containerd-935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb.scope: Deactivated successfully. Sep 4 16:11:17.541571 containerd[1513]: time="2025-09-04T16:11:17.541434281Z" level=info msg="received exit event container_id:\"935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb\" id:\"935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb\" pid:3258 exited_at:{seconds:1757002277 nanos:541167840}" Sep 4 16:11:17.541720 containerd[1513]: time="2025-09-04T16:11:17.541683282Z" level=info msg="TaskExit event in podsandbox handler container_id:\"935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb\" id:\"935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb\" pid:3258 exited_at:{seconds:1757002277 nanos:541167840}" Sep 4 16:11:17.554946 containerd[1513]: time="2025-09-04T16:11:17.554888113Z" level=info msg="TaskExit event in podsandbox handler container_id:\"173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b\" id:\"9a5299ce8c46161293e28253394acc32361ace7f03fef0fb76c7dab77b1394bf\" pid:4329 exited_at:{seconds:1757002277 nanos:554616351}" Sep 4 
16:11:17.555793 containerd[1513]: time="2025-09-04T16:11:17.555747597Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 16:11:17.556657 containerd[1513]: time="2025-09-04T16:11:17.556631682Z" level=info msg="StopContainer for \"173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b\" with timeout 2 (s)" Sep 4 16:11:17.558926 containerd[1513]: time="2025-09-04T16:11:17.558900334Z" level=info msg="Stop container \"173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b\" with signal terminated" Sep 4 16:11:17.564565 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb-rootfs.mount: Deactivated successfully. Sep 4 16:11:17.566965 systemd-networkd[1425]: lxc_health: Link DOWN Sep 4 16:11:17.566974 systemd-networkd[1425]: lxc_health: Lost carrier Sep 4 16:11:17.576794 containerd[1513]: time="2025-09-04T16:11:17.576755149Z" level=info msg="StopContainer for \"935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb\" returns successfully" Sep 4 16:11:17.579201 containerd[1513]: time="2025-09-04T16:11:17.579161962Z" level=info msg="StopPodSandbox for \"5790ef1afebba6361a6654d2de2e3008fec170acf195f72dd651e331f3545dda\"" Sep 4 16:11:17.579297 containerd[1513]: time="2025-09-04T16:11:17.579260122Z" level=info msg="Container to stop \"935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 16:11:17.580682 systemd[1]: cri-containerd-173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b.scope: Deactivated successfully. 
Sep 4 16:11:17.580968 systemd[1]: cri-containerd-173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b.scope: Consumed 5.998s CPU time, 123.1M memory peak, 136K read from disk, 12.9M written to disk. Sep 4 16:11:17.584504 containerd[1513]: time="2025-09-04T16:11:17.583903787Z" level=info msg="received exit event container_id:\"173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b\" id:\"173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b\" pid:3332 exited_at:{seconds:1757002277 nanos:583664946}" Sep 4 16:11:17.584504 containerd[1513]: time="2025-09-04T16:11:17.584159708Z" level=info msg="TaskExit event in podsandbox handler container_id:\"173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b\" id:\"173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b\" pid:3332 exited_at:{seconds:1757002277 nanos:583664946}" Sep 4 16:11:17.587276 systemd[1]: cri-containerd-5790ef1afebba6361a6654d2de2e3008fec170acf195f72dd651e331f3545dda.scope: Deactivated successfully. Sep 4 16:11:17.594753 containerd[1513]: time="2025-09-04T16:11:17.594716565Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5790ef1afebba6361a6654d2de2e3008fec170acf195f72dd651e331f3545dda\" id:\"5790ef1afebba6361a6654d2de2e3008fec170acf195f72dd651e331f3545dda\" pid:2948 exit_status:137 exited_at:{seconds:1757002277 nanos:594493803}" Sep 4 16:11:17.608664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b-rootfs.mount: Deactivated successfully. 
Sep 4 16:11:17.616988 containerd[1513]: time="2025-09-04T16:11:17.616780962Z" level=info msg="StopContainer for \"173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b\" returns successfully" Sep 4 16:11:17.618125 containerd[1513]: time="2025-09-04T16:11:17.618090849Z" level=info msg="StopPodSandbox for \"076985bfad53e49d453c8aa8d09bdceacab34704b4aa99bfbb50813fdee6ec32\"" Sep 4 16:11:17.618254 containerd[1513]: time="2025-09-04T16:11:17.618160009Z" level=info msg="Container to stop \"eb840a68cfb943d5fe07f72bf4a0b5f1bdd6177071ba8486e8cfb3f30e5667aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 16:11:17.618254 containerd[1513]: time="2025-09-04T16:11:17.618200610Z" level=info msg="Container to stop \"173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 16:11:17.618254 containerd[1513]: time="2025-09-04T16:11:17.618211130Z" level=info msg="Container to stop \"ede9e560ef4d94758738fd1bfa23584d5f0dfd3b4540e44ecdd3e000fba0359c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 16:11:17.618254 containerd[1513]: time="2025-09-04T16:11:17.618219970Z" level=info msg="Container to stop \"cb0fb304070389bc41a813427974049b886d866e310a9053656f173734c2cf2a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 16:11:17.618254 containerd[1513]: time="2025-09-04T16:11:17.618252890Z" level=info msg="Container to stop \"25db358bb527d2fae361ef30210bbf8ccaeea66e1527f69d47ddf198bfd3a5ca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 16:11:17.622558 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5790ef1afebba6361a6654d2de2e3008fec170acf195f72dd651e331f3545dda-rootfs.mount: Deactivated successfully. 
Sep 4 16:11:17.626286 containerd[1513]: time="2025-09-04T16:11:17.624743284Z" level=info msg="TearDown network for sandbox \"5790ef1afebba6361a6654d2de2e3008fec170acf195f72dd651e331f3545dda\" successfully" Sep 4 16:11:17.626286 containerd[1513]: time="2025-09-04T16:11:17.624784405Z" level=info msg="StopPodSandbox for \"5790ef1afebba6361a6654d2de2e3008fec170acf195f72dd651e331f3545dda\" returns successfully" Sep 4 16:11:17.625947 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5790ef1afebba6361a6654d2de2e3008fec170acf195f72dd651e331f3545dda-shm.mount: Deactivated successfully. Sep 4 16:11:17.627567 containerd[1513]: time="2025-09-04T16:11:17.627532059Z" level=info msg="shim disconnected" id=5790ef1afebba6361a6654d2de2e3008fec170acf195f72dd651e331f3545dda namespace=k8s.io Sep 4 16:11:17.627634 containerd[1513]: time="2025-09-04T16:11:17.627561379Z" level=warning msg="cleaning up after shim disconnected" id=5790ef1afebba6361a6654d2de2e3008fec170acf195f72dd651e331f3545dda namespace=k8s.io Sep 4 16:11:17.627634 containerd[1513]: time="2025-09-04T16:11:17.627590860Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 16:11:17.627994 systemd[1]: cri-containerd-076985bfad53e49d453c8aa8d09bdceacab34704b4aa99bfbb50813fdee6ec32.scope: Deactivated successfully. 
Sep 4 16:11:17.629294 containerd[1513]: time="2025-09-04T16:11:17.628934547Z" level=info msg="received exit event sandbox_id:\"5790ef1afebba6361a6654d2de2e3008fec170acf195f72dd651e331f3545dda\" exit_status:137 exited_at:{seconds:1757002277 nanos:594493803}" Sep 4 16:11:17.653104 containerd[1513]: time="2025-09-04T16:11:17.653065475Z" level=info msg="TaskExit event in podsandbox handler container_id:\"076985bfad53e49d453c8aa8d09bdceacab34704b4aa99bfbb50813fdee6ec32\" id:\"076985bfad53e49d453c8aa8d09bdceacab34704b4aa99bfbb50813fdee6ec32\" pid:2820 exit_status:137 exited_at:{seconds:1757002277 nanos:627752660}" Sep 4 16:11:17.655689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-076985bfad53e49d453c8aa8d09bdceacab34704b4aa99bfbb50813fdee6ec32-rootfs.mount: Deactivated successfully. Sep 4 16:11:17.660483 containerd[1513]: time="2025-09-04T16:11:17.660443434Z" level=info msg="received exit event sandbox_id:\"076985bfad53e49d453c8aa8d09bdceacab34704b4aa99bfbb50813fdee6ec32\" exit_status:137 exited_at:{seconds:1757002277 nanos:627752660}" Sep 4 16:11:17.660703 containerd[1513]: time="2025-09-04T16:11:17.660674796Z" level=info msg="shim disconnected" id=076985bfad53e49d453c8aa8d09bdceacab34704b4aa99bfbb50813fdee6ec32 namespace=k8s.io Sep 4 16:11:17.660805 containerd[1513]: time="2025-09-04T16:11:17.660768956Z" level=warning msg="cleaning up after shim disconnected" id=076985bfad53e49d453c8aa8d09bdceacab34704b4aa99bfbb50813fdee6ec32 namespace=k8s.io Sep 4 16:11:17.660980 containerd[1513]: time="2025-09-04T16:11:17.660962117Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 16:11:17.661697 containerd[1513]: time="2025-09-04T16:11:17.660704196Z" level=info msg="TearDown network for sandbox \"076985bfad53e49d453c8aa8d09bdceacab34704b4aa99bfbb50813fdee6ec32\" successfully" Sep 4 16:11:17.661697 containerd[1513]: time="2025-09-04T16:11:17.661125358Z" level=info msg="StopPodSandbox for 
\"076985bfad53e49d453c8aa8d09bdceacab34704b4aa99bfbb50813fdee6ec32\" returns successfully" Sep 4 16:11:17.723935 kubelet[2667]: I0904 16:11:17.723888 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef442874-a7ef-43ac-965f-6746506b46fc-cilium-config-path\") pod \"ef442874-a7ef-43ac-965f-6746506b46fc\" (UID: \"ef442874-a7ef-43ac-965f-6746506b46fc\") " Sep 4 16:11:17.723935 kubelet[2667]: I0904 16:11:17.723934 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq6zt\" (UniqueName: \"kubernetes.io/projected/ef442874-a7ef-43ac-965f-6746506b46fc-kube-api-access-jq6zt\") pod \"ef442874-a7ef-43ac-965f-6746506b46fc\" (UID: \"ef442874-a7ef-43ac-965f-6746506b46fc\") " Sep 4 16:11:17.731960 kubelet[2667]: I0904 16:11:17.731890 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef442874-a7ef-43ac-965f-6746506b46fc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ef442874-a7ef-43ac-965f-6746506b46fc" (UID: "ef442874-a7ef-43ac-965f-6746506b46fc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 16:11:17.734204 kubelet[2667]: I0904 16:11:17.734158 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef442874-a7ef-43ac-965f-6746506b46fc-kube-api-access-jq6zt" (OuterVolumeSpecName: "kube-api-access-jq6zt") pod "ef442874-a7ef-43ac-965f-6746506b46fc" (UID: "ef442874-a7ef-43ac-965f-6746506b46fc"). InnerVolumeSpecName "kube-api-access-jq6zt". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 16:11:17.825731 kubelet[2667]: I0904 16:11:17.825596 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-hubble-tls\") pod \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\" (UID: \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " Sep 4 16:11:17.825731 kubelet[2667]: I0904 16:11:17.825640 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-clustermesh-secrets\") pod \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\" (UID: \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " Sep 4 16:11:17.825731 kubelet[2667]: I0904 16:11:17.825661 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-cilium-cgroup\") pod \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\" (UID: \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " Sep 4 16:11:17.825731 kubelet[2667]: I0904 16:11:17.825677 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5844\" (UniqueName: \"kubernetes.io/projected/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-kube-api-access-s5844\") pod \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\" (UID: \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " Sep 4 16:11:17.825731 kubelet[2667]: I0904 16:11:17.825693 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-etc-cni-netd\") pod \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\" (UID: \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " Sep 4 16:11:17.825731 kubelet[2667]: I0904 16:11:17.825707 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-bpf-maps\") pod \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\" (UID: \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " Sep 4 16:11:17.825952 kubelet[2667]: I0904 16:11:17.825725 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-cilium-run\") pod \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\" (UID: \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " Sep 4 16:11:17.825952 kubelet[2667]: I0904 16:11:17.825746 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-cilium-config-path\") pod \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\" (UID: \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " Sep 4 16:11:17.825952 kubelet[2667]: I0904 16:11:17.825763 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-lib-modules\") pod \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\" (UID: \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " Sep 4 16:11:17.825952 kubelet[2667]: I0904 16:11:17.825777 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-host-proc-sys-net\") pod \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\" (UID: \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " Sep 4 16:11:17.825952 kubelet[2667]: I0904 16:11:17.825792 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-host-proc-sys-kernel\") pod \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\" (UID: \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " Sep 4 16:11:17.825952 kubelet[2667]: I0904 16:11:17.825805 2667 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-hostproc\") pod \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\" (UID: \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " Sep 4 16:11:17.826070 kubelet[2667]: I0904 16:11:17.825819 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-xtables-lock\") pod \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\" (UID: \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " Sep 4 16:11:17.826070 kubelet[2667]: I0904 16:11:17.825835 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-cni-path\") pod \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\" (UID: \"2010a036-eac1-45ed-a4f5-e949ffe4d1d4\") " Sep 4 16:11:17.826070 kubelet[2667]: I0904 16:11:17.825872 2667 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef442874-a7ef-43ac-965f-6746506b46fc-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 4 16:11:17.826070 kubelet[2667]: I0904 16:11:17.825882 2667 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jq6zt\" (UniqueName: \"kubernetes.io/projected/ef442874-a7ef-43ac-965f-6746506b46fc-kube-api-access-jq6zt\") on node \"localhost\" DevicePath \"\"" Sep 4 16:11:17.826070 kubelet[2667]: I0904 16:11:17.825933 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-cni-path" (OuterVolumeSpecName: "cni-path") pod "2010a036-eac1-45ed-a4f5-e949ffe4d1d4" (UID: "2010a036-eac1-45ed-a4f5-e949ffe4d1d4"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 16:11:17.827249 kubelet[2667]: I0904 16:11:17.826298 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2010a036-eac1-45ed-a4f5-e949ffe4d1d4" (UID: "2010a036-eac1-45ed-a4f5-e949ffe4d1d4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 16:11:17.827249 kubelet[2667]: I0904 16:11:17.826369 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2010a036-eac1-45ed-a4f5-e949ffe4d1d4" (UID: "2010a036-eac1-45ed-a4f5-e949ffe4d1d4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 16:11:17.828729 kubelet[2667]: I0904 16:11:17.828686 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2010a036-eac1-45ed-a4f5-e949ffe4d1d4" (UID: "2010a036-eac1-45ed-a4f5-e949ffe4d1d4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 4 16:11:17.828808 kubelet[2667]: I0904 16:11:17.828740 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2010a036-eac1-45ed-a4f5-e949ffe4d1d4" (UID: "2010a036-eac1-45ed-a4f5-e949ffe4d1d4"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 16:11:17.828808 kubelet[2667]: I0904 16:11:17.828756 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2010a036-eac1-45ed-a4f5-e949ffe4d1d4" (UID: "2010a036-eac1-45ed-a4f5-e949ffe4d1d4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 16:11:17.828808 kubelet[2667]: I0904 16:11:17.828769 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2010a036-eac1-45ed-a4f5-e949ffe4d1d4" (UID: "2010a036-eac1-45ed-a4f5-e949ffe4d1d4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 16:11:17.828808 kubelet[2667]: I0904 16:11:17.828784 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-hostproc" (OuterVolumeSpecName: "hostproc") pod "2010a036-eac1-45ed-a4f5-e949ffe4d1d4" (UID: "2010a036-eac1-45ed-a4f5-e949ffe4d1d4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 16:11:17.828808 kubelet[2667]: I0904 16:11:17.828797 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2010a036-eac1-45ed-a4f5-e949ffe4d1d4" (UID: "2010a036-eac1-45ed-a4f5-e949ffe4d1d4"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 16:11:17.828915 kubelet[2667]: I0904 16:11:17.828813 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2010a036-eac1-45ed-a4f5-e949ffe4d1d4" (UID: "2010a036-eac1-45ed-a4f5-e949ffe4d1d4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 16:11:17.828915 kubelet[2667]: I0904 16:11:17.828826 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2010a036-eac1-45ed-a4f5-e949ffe4d1d4" (UID: "2010a036-eac1-45ed-a4f5-e949ffe4d1d4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 16:11:17.828956 kubelet[2667]: I0904 16:11:17.828931 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-kube-api-access-s5844" (OuterVolumeSpecName: "kube-api-access-s5844") pod "2010a036-eac1-45ed-a4f5-e949ffe4d1d4" (UID: "2010a036-eac1-45ed-a4f5-e949ffe4d1d4"). InnerVolumeSpecName "kube-api-access-s5844". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 16:11:17.829010 kubelet[2667]: I0904 16:11:17.828975 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2010a036-eac1-45ed-a4f5-e949ffe4d1d4" (UID: "2010a036-eac1-45ed-a4f5-e949ffe4d1d4"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 16:11:17.829776 kubelet[2667]: I0904 16:11:17.829746 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2010a036-eac1-45ed-a4f5-e949ffe4d1d4" (UID: "2010a036-eac1-45ed-a4f5-e949ffe4d1d4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 16:11:17.926120 kubelet[2667]: I0904 16:11:17.926058 2667 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 4 16:11:17.926120 kubelet[2667]: I0904 16:11:17.926104 2667 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s5844\" (UniqueName: \"kubernetes.io/projected/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-kube-api-access-s5844\") on node \"localhost\" DevicePath \"\"" Sep 4 16:11:17.926120 kubelet[2667]: I0904 16:11:17.926127 2667 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 4 16:11:17.926332 kubelet[2667]: I0904 16:11:17.926158 2667 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 4 16:11:17.926332 kubelet[2667]: I0904 16:11:17.926175 2667 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 4 16:11:17.926332 kubelet[2667]: I0904 16:11:17.926184 2667 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 4 16:11:17.926332 kubelet[2667]: I0904 16:11:17.926192 2667 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 4 16:11:17.926332 kubelet[2667]: I0904 16:11:17.926200 2667 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 4 16:11:17.926332 kubelet[2667]: I0904 16:11:17.926207 2667 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 4 16:11:17.926332 kubelet[2667]: I0904 16:11:17.926216 2667 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 4 16:11:17.926332 kubelet[2667]: I0904 16:11:17.926224 2667 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 4 16:11:17.926493 kubelet[2667]: I0904 16:11:17.926245 2667 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 4 16:11:17.926493 kubelet[2667]: I0904 16:11:17.926253 2667 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-hostproc\") on node \"localhost\" DevicePath \"\"" 
Sep 4 16:11:17.926493 kubelet[2667]: I0904 16:11:17.926260 2667 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2010a036-eac1-45ed-a4f5-e949ffe4d1d4-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 4 16:11:18.005490 kubelet[2667]: I0904 16:11:18.005155 2667 scope.go:117] "RemoveContainer" containerID="935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb" Sep 4 16:11:18.008553 containerd[1513]: time="2025-09-04T16:11:18.008515606Z" level=info msg="RemoveContainer for \"935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb\"" Sep 4 16:11:18.011169 systemd[1]: Removed slice kubepods-besteffort-podef442874_a7ef_43ac_965f_6746506b46fc.slice - libcontainer container kubepods-besteffort-podef442874_a7ef_43ac_965f_6746506b46fc.slice. Sep 4 16:11:18.018174 containerd[1513]: time="2025-09-04T16:11:18.018122976Z" level=info msg="RemoveContainer for \"935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb\" returns successfully" Sep 4 16:11:18.020397 kubelet[2667]: I0904 16:11:18.018794 2667 scope.go:117] "RemoveContainer" containerID="935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb" Sep 4 16:11:18.020397 kubelet[2667]: E0904 16:11:18.019983 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb\": not found" containerID="935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb" Sep 4 16:11:18.020493 containerd[1513]: time="2025-09-04T16:11:18.019573144Z" level=error msg="ContainerStatus for \"935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb\": not found" Sep 4 16:11:18.021829 systemd[1]: Removed slice 
kubepods-burstable-pod2010a036_eac1_45ed_a4f5_e949ffe4d1d4.slice - libcontainer container kubepods-burstable-pod2010a036_eac1_45ed_a4f5_e949ffe4d1d4.slice. Sep 4 16:11:18.021926 systemd[1]: kubepods-burstable-pod2010a036_eac1_45ed_a4f5_e949ffe4d1d4.slice: Consumed 6.082s CPU time, 123.4M memory peak, 140K read from disk, 12.9M written to disk. Sep 4 16:11:18.029789 kubelet[2667]: I0904 16:11:18.029679 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb"} err="failed to get container status \"935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb\": rpc error: code = NotFound desc = an error occurred when try to find container \"935227f3ee9f2c6cc8c14903af2d341c92873f4b7aa6eb8f1ef6d8793ee8f0fb\": not found" Sep 4 16:11:18.029789 kubelet[2667]: I0904 16:11:18.029787 2667 scope.go:117] "RemoveContainer" containerID="173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b" Sep 4 16:11:18.044298 containerd[1513]: time="2025-09-04T16:11:18.043784550Z" level=info msg="RemoveContainer for \"173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b\"" Sep 4 16:11:18.047876 containerd[1513]: time="2025-09-04T16:11:18.047841211Z" level=info msg="RemoveContainer for \"173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b\" returns successfully" Sep 4 16:11:18.048112 kubelet[2667]: I0904 16:11:18.048019 2667 scope.go:117] "RemoveContainer" containerID="eb840a68cfb943d5fe07f72bf4a0b5f1bdd6177071ba8486e8cfb3f30e5667aa" Sep 4 16:11:18.049502 containerd[1513]: time="2025-09-04T16:11:18.049472340Z" level=info msg="RemoveContainer for \"eb840a68cfb943d5fe07f72bf4a0b5f1bdd6177071ba8486e8cfb3f30e5667aa\"" Sep 4 16:11:18.052993 containerd[1513]: time="2025-09-04T16:11:18.052961118Z" level=info msg="RemoveContainer for \"eb840a68cfb943d5fe07f72bf4a0b5f1bdd6177071ba8486e8cfb3f30e5667aa\" returns successfully" Sep 4 16:11:18.053367 
kubelet[2667]: I0904 16:11:18.053332 2667 scope.go:117] "RemoveContainer" containerID="25db358bb527d2fae361ef30210bbf8ccaeea66e1527f69d47ddf198bfd3a5ca" Sep 4 16:11:18.056072 containerd[1513]: time="2025-09-04T16:11:18.055366610Z" level=info msg="RemoveContainer for \"25db358bb527d2fae361ef30210bbf8ccaeea66e1527f69d47ddf198bfd3a5ca\"" Sep 4 16:11:18.060293 containerd[1513]: time="2025-09-04T16:11:18.060260796Z" level=info msg="RemoveContainer for \"25db358bb527d2fae361ef30210bbf8ccaeea66e1527f69d47ddf198bfd3a5ca\" returns successfully" Sep 4 16:11:18.060550 kubelet[2667]: I0904 16:11:18.060518 2667 scope.go:117] "RemoveContainer" containerID="ede9e560ef4d94758738fd1bfa23584d5f0dfd3b4540e44ecdd3e000fba0359c" Sep 4 16:11:18.062030 containerd[1513]: time="2025-09-04T16:11:18.061992765Z" level=info msg="RemoveContainer for \"ede9e560ef4d94758738fd1bfa23584d5f0dfd3b4540e44ecdd3e000fba0359c\"" Sep 4 16:11:18.064821 containerd[1513]: time="2025-09-04T16:11:18.064790819Z" level=info msg="RemoveContainer for \"ede9e560ef4d94758738fd1bfa23584d5f0dfd3b4540e44ecdd3e000fba0359c\" returns successfully" Sep 4 16:11:18.064938 kubelet[2667]: I0904 16:11:18.064918 2667 scope.go:117] "RemoveContainer" containerID="cb0fb304070389bc41a813427974049b886d866e310a9053656f173734c2cf2a" Sep 4 16:11:18.066478 containerd[1513]: time="2025-09-04T16:11:18.066041586Z" level=info msg="RemoveContainer for \"cb0fb304070389bc41a813427974049b886d866e310a9053656f173734c2cf2a\"" Sep 4 16:11:18.068429 containerd[1513]: time="2025-09-04T16:11:18.068401598Z" level=info msg="RemoveContainer for \"cb0fb304070389bc41a813427974049b886d866e310a9053656f173734c2cf2a\" returns successfully" Sep 4 16:11:18.068657 kubelet[2667]: I0904 16:11:18.068634 2667 scope.go:117] "RemoveContainer" containerID="173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b" Sep 4 16:11:18.068819 containerd[1513]: time="2025-09-04T16:11:18.068787920Z" level=error msg="ContainerStatus for 
\"173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b\": not found" Sep 4 16:11:18.068940 kubelet[2667]: E0904 16:11:18.068921 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b\": not found" containerID="173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b" Sep 4 16:11:18.068972 kubelet[2667]: I0904 16:11:18.068948 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b"} err="failed to get container status \"173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b\": rpc error: code = NotFound desc = an error occurred when try to find container \"173c009ffe917f218d9372cfbd3f972a36760a14a4f282ca40bceb9c61bf825b\": not found" Sep 4 16:11:18.068972 kubelet[2667]: I0904 16:11:18.068969 2667 scope.go:117] "RemoveContainer" containerID="eb840a68cfb943d5fe07f72bf4a0b5f1bdd6177071ba8486e8cfb3f30e5667aa" Sep 4 16:11:18.069130 containerd[1513]: time="2025-09-04T16:11:18.069089362Z" level=error msg="ContainerStatus for \"eb840a68cfb943d5fe07f72bf4a0b5f1bdd6177071ba8486e8cfb3f30e5667aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb840a68cfb943d5fe07f72bf4a0b5f1bdd6177071ba8486e8cfb3f30e5667aa\": not found" Sep 4 16:11:18.069219 kubelet[2667]: E0904 16:11:18.069200 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eb840a68cfb943d5fe07f72bf4a0b5f1bdd6177071ba8486e8cfb3f30e5667aa\": not found" 
containerID="eb840a68cfb943d5fe07f72bf4a0b5f1bdd6177071ba8486e8cfb3f30e5667aa" Sep 4 16:11:18.069273 kubelet[2667]: I0904 16:11:18.069222 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eb840a68cfb943d5fe07f72bf4a0b5f1bdd6177071ba8486e8cfb3f30e5667aa"} err="failed to get container status \"eb840a68cfb943d5fe07f72bf4a0b5f1bdd6177071ba8486e8cfb3f30e5667aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"eb840a68cfb943d5fe07f72bf4a0b5f1bdd6177071ba8486e8cfb3f30e5667aa\": not found" Sep 4 16:11:18.069273 kubelet[2667]: I0904 16:11:18.069271 2667 scope.go:117] "RemoveContainer" containerID="25db358bb527d2fae361ef30210bbf8ccaeea66e1527f69d47ddf198bfd3a5ca" Sep 4 16:11:18.069464 containerd[1513]: time="2025-09-04T16:11:18.069436684Z" level=error msg="ContainerStatus for \"25db358bb527d2fae361ef30210bbf8ccaeea66e1527f69d47ddf198bfd3a5ca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"25db358bb527d2fae361ef30210bbf8ccaeea66e1527f69d47ddf198bfd3a5ca\": not found" Sep 4 16:11:18.069563 kubelet[2667]: E0904 16:11:18.069547 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"25db358bb527d2fae361ef30210bbf8ccaeea66e1527f69d47ddf198bfd3a5ca\": not found" containerID="25db358bb527d2fae361ef30210bbf8ccaeea66e1527f69d47ddf198bfd3a5ca" Sep 4 16:11:18.069588 kubelet[2667]: I0904 16:11:18.069569 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"25db358bb527d2fae361ef30210bbf8ccaeea66e1527f69d47ddf198bfd3a5ca"} err="failed to get container status \"25db358bb527d2fae361ef30210bbf8ccaeea66e1527f69d47ddf198bfd3a5ca\": rpc error: code = NotFound desc = an error occurred when try to find container \"25db358bb527d2fae361ef30210bbf8ccaeea66e1527f69d47ddf198bfd3a5ca\": not found" Sep 4 16:11:18.069615 
kubelet[2667]: I0904 16:11:18.069587 2667 scope.go:117] "RemoveContainer" containerID="ede9e560ef4d94758738fd1bfa23584d5f0dfd3b4540e44ecdd3e000fba0359c" Sep 4 16:11:18.069824 containerd[1513]: time="2025-09-04T16:11:18.069798325Z" level=error msg="ContainerStatus for \"ede9e560ef4d94758738fd1bfa23584d5f0dfd3b4540e44ecdd3e000fba0359c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ede9e560ef4d94758738fd1bfa23584d5f0dfd3b4540e44ecdd3e000fba0359c\": not found" Sep 4 16:11:18.069950 kubelet[2667]: E0904 16:11:18.069932 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ede9e560ef4d94758738fd1bfa23584d5f0dfd3b4540e44ecdd3e000fba0359c\": not found" containerID="ede9e560ef4d94758738fd1bfa23584d5f0dfd3b4540e44ecdd3e000fba0359c" Sep 4 16:11:18.069976 kubelet[2667]: I0904 16:11:18.069958 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ede9e560ef4d94758738fd1bfa23584d5f0dfd3b4540e44ecdd3e000fba0359c"} err="failed to get container status \"ede9e560ef4d94758738fd1bfa23584d5f0dfd3b4540e44ecdd3e000fba0359c\": rpc error: code = NotFound desc = an error occurred when try to find container \"ede9e560ef4d94758738fd1bfa23584d5f0dfd3b4540e44ecdd3e000fba0359c\": not found" Sep 4 16:11:18.070002 kubelet[2667]: I0904 16:11:18.069974 2667 scope.go:117] "RemoveContainer" containerID="cb0fb304070389bc41a813427974049b886d866e310a9053656f173734c2cf2a" Sep 4 16:11:18.070257 containerd[1513]: time="2025-09-04T16:11:18.070220928Z" level=error msg="ContainerStatus for \"cb0fb304070389bc41a813427974049b886d866e310a9053656f173734c2cf2a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb0fb304070389bc41a813427974049b886d866e310a9053656f173734c2cf2a\": not found" Sep 4 16:11:18.070368 kubelet[2667]: E0904 16:11:18.070349 2667 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cb0fb304070389bc41a813427974049b886d866e310a9053656f173734c2cf2a\": not found" containerID="cb0fb304070389bc41a813427974049b886d866e310a9053656f173734c2cf2a" Sep 4 16:11:18.070410 kubelet[2667]: I0904 16:11:18.070374 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cb0fb304070389bc41a813427974049b886d866e310a9053656f173734c2cf2a"} err="failed to get container status \"cb0fb304070389bc41a813427974049b886d866e310a9053656f173734c2cf2a\": rpc error: code = NotFound desc = an error occurred when try to find container \"cb0fb304070389bc41a813427974049b886d866e310a9053656f173734c2cf2a\": not found" Sep 4 16:11:18.563872 systemd[1]: var-lib-kubelet-pods-ef442874\x2da7ef\x2d43ac\x2d965f\x2d6746506b46fc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djq6zt.mount: Deactivated successfully. Sep 4 16:11:18.563967 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-076985bfad53e49d453c8aa8d09bdceacab34704b4aa99bfbb50813fdee6ec32-shm.mount: Deactivated successfully. Sep 4 16:11:18.564019 systemd[1]: var-lib-kubelet-pods-2010a036\x2deac1\x2d45ed\x2da4f5\x2de949ffe4d1d4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds5844.mount: Deactivated successfully. Sep 4 16:11:18.564079 systemd[1]: var-lib-kubelet-pods-2010a036\x2deac1\x2d45ed\x2da4f5\x2de949ffe4d1d4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 4 16:11:18.564126 systemd[1]: var-lib-kubelet-pods-2010a036\x2deac1\x2d45ed\x2da4f5\x2de949ffe4d1d4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 4 16:11:18.746102 kubelet[2667]: I0904 16:11:18.746016 2667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2010a036-eac1-45ed-a4f5-e949ffe4d1d4" path="/var/lib/kubelet/pods/2010a036-eac1-45ed-a4f5-e949ffe4d1d4/volumes" Sep 4 16:11:18.746985 kubelet[2667]: I0904 16:11:18.746960 2667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef442874-a7ef-43ac-965f-6746506b46fc" path="/var/lib/kubelet/pods/ef442874-a7ef-43ac-965f-6746506b46fc/volumes" Sep 4 16:11:18.798730 kubelet[2667]: E0904 16:11:18.798700 2667 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 16:11:19.489374 sshd[4303]: Connection closed by 10.0.0.1 port 55384 Sep 4 16:11:19.489917 sshd-session[4300]: pam_unix(sshd:session): session closed for user core Sep 4 16:11:19.497263 systemd[1]: sshd@23-10.0.0.140:22-10.0.0.1:55384.service: Deactivated successfully. Sep 4 16:11:19.499829 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 16:11:19.500033 systemd[1]: session-24.scope: Consumed 1.518s CPU time, 25.5M memory peak. Sep 4 16:11:19.500582 systemd-logind[1497]: Session 24 logged out. Waiting for processes to exit. Sep 4 16:11:19.502755 systemd[1]: Started sshd@24-10.0.0.140:22-10.0.0.1:55396.service - OpenSSH per-connection server daemon (10.0.0.1:55396). Sep 4 16:11:19.503890 systemd-logind[1497]: Removed session 24. Sep 4 16:11:19.563917 sshd[4451]: Accepted publickey for core from 10.0.0.1 port 55396 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 16:11:19.564998 sshd-session[4451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:11:19.569289 systemd-logind[1497]: New session 25 of user core. Sep 4 16:11:19.575369 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 4 16:11:20.451865 kubelet[2667]: I0904 16:11:20.451813 2667 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-04T16:11:20Z","lastTransitionTime":"2025-09-04T16:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 4 16:11:21.135995 sshd[4454]: Connection closed by 10.0.0.1 port 55396
Sep 4 16:11:21.136356 sshd-session[4451]: pam_unix(sshd:session): session closed for user core
Sep 4 16:11:21.149296 kubelet[2667]: I0904 16:11:21.148811 2667 memory_manager.go:355] "RemoveStaleState removing state" podUID="ef442874-a7ef-43ac-965f-6746506b46fc" containerName="cilium-operator"
Sep 4 16:11:21.149296 kubelet[2667]: I0904 16:11:21.148851 2667 memory_manager.go:355] "RemoveStaleState removing state" podUID="2010a036-eac1-45ed-a4f5-e949ffe4d1d4" containerName="cilium-agent"
Sep 4 16:11:21.150222 systemd[1]: sshd@24-10.0.0.140:22-10.0.0.1:55396.service: Deactivated successfully.
Sep 4 16:11:21.154315 systemd[1]: session-25.scope: Deactivated successfully.
Sep 4 16:11:21.154657 systemd[1]: session-25.scope: Consumed 1.478s CPU time, 26.1M memory peak.
Sep 4 16:11:21.160422 systemd-logind[1497]: Session 25 logged out. Waiting for processes to exit.
Sep 4 16:11:21.165195 systemd[1]: Started sshd@25-10.0.0.140:22-10.0.0.1:48480.service - OpenSSH per-connection server daemon (10.0.0.1:48480).
Sep 4 16:11:21.167560 systemd-logind[1497]: Removed session 25.
Sep 4 16:11:21.173080 systemd[1]: Created slice kubepods-burstable-pod6f642683_4c65_40c2_a112_13748ef357ac.slice - libcontainer container kubepods-burstable-pod6f642683_4c65_40c2_a112_13748ef357ac.slice.
Sep 4 16:11:21.230747 sshd[4466]: Accepted publickey for core from 10.0.0.1 port 48480 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU
Sep 4 16:11:21.231897 sshd-session[4466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:11:21.235595 systemd-logind[1497]: New session 26 of user core.
Sep 4 16:11:21.244692 kubelet[2667]: I0904 16:11:21.244396 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f642683-4c65-40c2-a112-13748ef357ac-cni-path\") pod \"cilium-qskl4\" (UID: \"6f642683-4c65-40c2-a112-13748ef357ac\") " pod="kube-system/cilium-qskl4"
Sep 4 16:11:21.244692 kubelet[2667]: I0904 16:11:21.244435 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f642683-4c65-40c2-a112-13748ef357ac-cilium-config-path\") pod \"cilium-qskl4\" (UID: \"6f642683-4c65-40c2-a112-13748ef357ac\") " pod="kube-system/cilium-qskl4"
Sep 4 16:11:21.244692 kubelet[2667]: I0904 16:11:21.244458 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f642683-4c65-40c2-a112-13748ef357ac-cilium-run\") pod \"cilium-qskl4\" (UID: \"6f642683-4c65-40c2-a112-13748ef357ac\") " pod="kube-system/cilium-qskl4"
Sep 4 16:11:21.244692 kubelet[2667]: I0904 16:11:21.244472 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f642683-4c65-40c2-a112-13748ef357ac-cilium-cgroup\") pod \"cilium-qskl4\" (UID: \"6f642683-4c65-40c2-a112-13748ef357ac\") " pod="kube-system/cilium-qskl4"
Sep 4 16:11:21.244692 kubelet[2667]: I0904 16:11:21.244491 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdls7\" (UniqueName: \"kubernetes.io/projected/6f642683-4c65-40c2-a112-13748ef357ac-kube-api-access-pdls7\") pod \"cilium-qskl4\" (UID: \"6f642683-4c65-40c2-a112-13748ef357ac\") " pod="kube-system/cilium-qskl4"
Sep 4 16:11:21.244692 kubelet[2667]: I0904 16:11:21.244506 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f642683-4c65-40c2-a112-13748ef357ac-hubble-tls\") pod \"cilium-qskl4\" (UID: \"6f642683-4c65-40c2-a112-13748ef357ac\") " pod="kube-system/cilium-qskl4"
Sep 4 16:11:21.244869 kubelet[2667]: I0904 16:11:21.244521 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f642683-4c65-40c2-a112-13748ef357ac-etc-cni-netd\") pod \"cilium-qskl4\" (UID: \"6f642683-4c65-40c2-a112-13748ef357ac\") " pod="kube-system/cilium-qskl4"
Sep 4 16:11:21.244869 kubelet[2667]: I0904 16:11:21.244538 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6f642683-4c65-40c2-a112-13748ef357ac-cilium-ipsec-secrets\") pod \"cilium-qskl4\" (UID: \"6f642683-4c65-40c2-a112-13748ef357ac\") " pod="kube-system/cilium-qskl4"
Sep 4 16:11:21.244869 kubelet[2667]: I0904 16:11:21.244553 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f642683-4c65-40c2-a112-13748ef357ac-host-proc-sys-net\") pod \"cilium-qskl4\" (UID: \"6f642683-4c65-40c2-a112-13748ef357ac\") " pod="kube-system/cilium-qskl4"
Sep 4 16:11:21.244869 kubelet[2667]: I0904 16:11:21.244568 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f642683-4c65-40c2-a112-13748ef357ac-xtables-lock\") pod \"cilium-qskl4\" (UID: \"6f642683-4c65-40c2-a112-13748ef357ac\") " pod="kube-system/cilium-qskl4"
Sep 4 16:11:21.244869 kubelet[2667]: I0904 16:11:21.244583 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f642683-4c65-40c2-a112-13748ef357ac-clustermesh-secrets\") pod \"cilium-qskl4\" (UID: \"6f642683-4c65-40c2-a112-13748ef357ac\") " pod="kube-system/cilium-qskl4"
Sep 4 16:11:21.244965 kubelet[2667]: I0904 16:11:21.244599 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f642683-4c65-40c2-a112-13748ef357ac-host-proc-sys-kernel\") pod \"cilium-qskl4\" (UID: \"6f642683-4c65-40c2-a112-13748ef357ac\") " pod="kube-system/cilium-qskl4"
Sep 4 16:11:21.244965 kubelet[2667]: I0904 16:11:21.244615 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f642683-4c65-40c2-a112-13748ef357ac-hostproc\") pod \"cilium-qskl4\" (UID: \"6f642683-4c65-40c2-a112-13748ef357ac\") " pod="kube-system/cilium-qskl4"
Sep 4 16:11:21.244965 kubelet[2667]: I0904 16:11:21.244629 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f642683-4c65-40c2-a112-13748ef357ac-bpf-maps\") pod \"cilium-qskl4\" (UID: \"6f642683-4c65-40c2-a112-13748ef357ac\") " pod="kube-system/cilium-qskl4"
Sep 4 16:11:21.244965 kubelet[2667]: I0904 16:11:21.244647 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f642683-4c65-40c2-a112-13748ef357ac-lib-modules\") pod \"cilium-qskl4\" (UID: \"6f642683-4c65-40c2-a112-13748ef357ac\") " pod="kube-system/cilium-qskl4"
Sep 4 16:11:21.252382 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 4 16:11:21.301471 sshd[4469]: Connection closed by 10.0.0.1 port 48480
Sep 4 16:11:21.302773 sshd-session[4466]: pam_unix(sshd:session): session closed for user core
Sep 4 16:11:21.317370 systemd[1]: sshd@25-10.0.0.140:22-10.0.0.1:48480.service: Deactivated successfully.
Sep 4 16:11:21.319003 systemd[1]: session-26.scope: Deactivated successfully.
Sep 4 16:11:21.319695 systemd-logind[1497]: Session 26 logged out. Waiting for processes to exit.
Sep 4 16:11:21.322030 systemd[1]: Started sshd@26-10.0.0.140:22-10.0.0.1:48496.service - OpenSSH per-connection server daemon (10.0.0.1:48496).
Sep 4 16:11:21.322763 systemd-logind[1497]: Removed session 26.
Sep 4 16:11:21.392282 sshd[4476]: Accepted publickey for core from 10.0.0.1 port 48496 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU
Sep 4 16:11:21.393524 sshd-session[4476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:11:21.398075 systemd-logind[1497]: New session 27 of user core.
Sep 4 16:11:21.408412 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 4 16:11:21.484741 kubelet[2667]: E0904 16:11:21.484703 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:11:21.485336 containerd[1513]: time="2025-09-04T16:11:21.485293066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qskl4,Uid:6f642683-4c65-40c2-a112-13748ef357ac,Namespace:kube-system,Attempt:0,}"
Sep 4 16:11:21.515193 containerd[1513]: time="2025-09-04T16:11:21.515137572Z" level=info msg="connecting to shim 4174c6c704fc5e15e2b6e3905e4548670c04c5006275681e3cf6e4997b870929" address="unix:///run/containerd/s/d699015b3b2ec3cc846f3b9a1f91602377923b58eafb342bd54b4433337335cb" namespace=k8s.io protocol=ttrpc version=3
Sep 4 16:11:21.550426 systemd[1]: Started cri-containerd-4174c6c704fc5e15e2b6e3905e4548670c04c5006275681e3cf6e4997b870929.scope - libcontainer container 4174c6c704fc5e15e2b6e3905e4548670c04c5006275681e3cf6e4997b870929.
Sep 4 16:11:21.574328 containerd[1513]: time="2025-09-04T16:11:21.574281622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qskl4,Uid:6f642683-4c65-40c2-a112-13748ef357ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"4174c6c704fc5e15e2b6e3905e4548670c04c5006275681e3cf6e4997b870929\""
Sep 4 16:11:21.574941 kubelet[2667]: E0904 16:11:21.574921 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:11:21.577146 containerd[1513]: time="2025-09-04T16:11:21.577119116Z" level=info msg="CreateContainer within sandbox \"4174c6c704fc5e15e2b6e3905e4548670c04c5006275681e3cf6e4997b870929\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 4 16:11:21.583912 containerd[1513]: time="2025-09-04T16:11:21.583874949Z" level=info msg="Container f0ceba6c6661e7f1af9d1badf6172b2235a427d95adb5f745a91b7a49298de19: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:11:21.594248 containerd[1513]: time="2025-09-04T16:11:21.594190240Z" level=info msg="CreateContainer within sandbox \"4174c6c704fc5e15e2b6e3905e4548670c04c5006275681e3cf6e4997b870929\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f0ceba6c6661e7f1af9d1badf6172b2235a427d95adb5f745a91b7a49298de19\""
Sep 4 16:11:21.595371 containerd[1513]: time="2025-09-04T16:11:21.595345005Z" level=info msg="StartContainer for \"f0ceba6c6661e7f1af9d1badf6172b2235a427d95adb5f745a91b7a49298de19\""
Sep 4 16:11:21.597191 containerd[1513]: time="2025-09-04T16:11:21.597149614Z" level=info msg="connecting to shim f0ceba6c6661e7f1af9d1badf6172b2235a427d95adb5f745a91b7a49298de19" address="unix:///run/containerd/s/d699015b3b2ec3cc846f3b9a1f91602377923b58eafb342bd54b4433337335cb" protocol=ttrpc version=3
Sep 4 16:11:21.616402 systemd[1]: Started cri-containerd-f0ceba6c6661e7f1af9d1badf6172b2235a427d95adb5f745a91b7a49298de19.scope - libcontainer container f0ceba6c6661e7f1af9d1badf6172b2235a427d95adb5f745a91b7a49298de19.
Sep 4 16:11:21.645445 containerd[1513]: time="2025-09-04T16:11:21.645298330Z" level=info msg="StartContainer for \"f0ceba6c6661e7f1af9d1badf6172b2235a427d95adb5f745a91b7a49298de19\" returns successfully"
Sep 4 16:11:21.653922 systemd[1]: cri-containerd-f0ceba6c6661e7f1af9d1badf6172b2235a427d95adb5f745a91b7a49298de19.scope: Deactivated successfully.
Sep 4 16:11:21.657502 containerd[1513]: time="2025-09-04T16:11:21.657461950Z" level=info msg="received exit event container_id:\"f0ceba6c6661e7f1af9d1badf6172b2235a427d95adb5f745a91b7a49298de19\" id:\"f0ceba6c6661e7f1af9d1badf6172b2235a427d95adb5f745a91b7a49298de19\" pid:4547 exited_at:{seconds:1757002281 nanos:657200349}"
Sep 4 16:11:21.657595 containerd[1513]: time="2025-09-04T16:11:21.657565231Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f0ceba6c6661e7f1af9d1badf6172b2235a427d95adb5f745a91b7a49298de19\" id:\"f0ceba6c6661e7f1af9d1badf6172b2235a427d95adb5f745a91b7a49298de19\" pid:4547 exited_at:{seconds:1757002281 nanos:657200349}"
Sep 4 16:11:22.022220 kubelet[2667]: E0904 16:11:22.022192 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:11:22.026406 containerd[1513]: time="2025-09-04T16:11:22.026224636Z" level=info msg="CreateContainer within sandbox \"4174c6c704fc5e15e2b6e3905e4548670c04c5006275681e3cf6e4997b870929\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 16:11:22.031257 containerd[1513]: time="2025-09-04T16:11:22.031202860Z" level=info msg="Container a6acf67eb23619b4f6784d0043ae8ab45de6c374fe67da0f5286dc13ab56596e: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:11:22.036972 containerd[1513]: time="2025-09-04T16:11:22.036882848Z" level=info msg="CreateContainer within sandbox \"4174c6c704fc5e15e2b6e3905e4548670c04c5006275681e3cf6e4997b870929\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a6acf67eb23619b4f6784d0043ae8ab45de6c374fe67da0f5286dc13ab56596e\""
Sep 4 16:11:22.038416 containerd[1513]: time="2025-09-04T16:11:22.038380935Z" level=info msg="StartContainer for \"a6acf67eb23619b4f6784d0043ae8ab45de6c374fe67da0f5286dc13ab56596e\""
Sep 4 16:11:22.039735 containerd[1513]: time="2025-09-04T16:11:22.039704061Z" level=info msg="connecting to shim a6acf67eb23619b4f6784d0043ae8ab45de6c374fe67da0f5286dc13ab56596e" address="unix:///run/containerd/s/d699015b3b2ec3cc846f3b9a1f91602377923b58eafb342bd54b4433337335cb" protocol=ttrpc version=3
Sep 4 16:11:22.062398 systemd[1]: Started cri-containerd-a6acf67eb23619b4f6784d0043ae8ab45de6c374fe67da0f5286dc13ab56596e.scope - libcontainer container a6acf67eb23619b4f6784d0043ae8ab45de6c374fe67da0f5286dc13ab56596e.
Sep 4 16:11:22.085816 containerd[1513]: time="2025-09-04T16:11:22.085785323Z" level=info msg="StartContainer for \"a6acf67eb23619b4f6784d0043ae8ab45de6c374fe67da0f5286dc13ab56596e\" returns successfully"
Sep 4 16:11:22.092009 systemd[1]: cri-containerd-a6acf67eb23619b4f6784d0043ae8ab45de6c374fe67da0f5286dc13ab56596e.scope: Deactivated successfully.
Sep 4 16:11:22.093077 containerd[1513]: time="2025-09-04T16:11:22.093045478Z" level=info msg="received exit event container_id:\"a6acf67eb23619b4f6784d0043ae8ab45de6c374fe67da0f5286dc13ab56596e\" id:\"a6acf67eb23619b4f6784d0043ae8ab45de6c374fe67da0f5286dc13ab56596e\" pid:4596 exited_at:{seconds:1757002282 nanos:92857717}"
Sep 4 16:11:22.093167 containerd[1513]: time="2025-09-04T16:11:22.093132598Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a6acf67eb23619b4f6784d0043ae8ab45de6c374fe67da0f5286dc13ab56596e\" id:\"a6acf67eb23619b4f6784d0043ae8ab45de6c374fe67da0f5286dc13ab56596e\" pid:4596 exited_at:{seconds:1757002282 nanos:92857717}"
Sep 4 16:11:23.025445 kubelet[2667]: E0904 16:11:23.025412 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:11:23.028707 containerd[1513]: time="2025-09-04T16:11:23.028647774Z" level=info msg="CreateContainer within sandbox \"4174c6c704fc5e15e2b6e3905e4548670c04c5006275681e3cf6e4997b870929\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 16:11:23.040901 containerd[1513]: time="2025-09-04T16:11:23.039693426Z" level=info msg="Container 9a4eb69130d099cb9f5ba03fec181711808d2f5b918065a74538944d5dafc637: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:11:23.041592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2143061458.mount: Deactivated successfully.
Sep 4 16:11:23.048836 containerd[1513]: time="2025-09-04T16:11:23.048775309Z" level=info msg="CreateContainer within sandbox \"4174c6c704fc5e15e2b6e3905e4548670c04c5006275681e3cf6e4997b870929\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9a4eb69130d099cb9f5ba03fec181711808d2f5b918065a74538944d5dafc637\""
Sep 4 16:11:23.049787 containerd[1513]: time="2025-09-04T16:11:23.049750913Z" level=info msg="StartContainer for \"9a4eb69130d099cb9f5ba03fec181711808d2f5b918065a74538944d5dafc637\""
Sep 4 16:11:23.051655 containerd[1513]: time="2025-09-04T16:11:23.051619042Z" level=info msg="connecting to shim 9a4eb69130d099cb9f5ba03fec181711808d2f5b918065a74538944d5dafc637" address="unix:///run/containerd/s/d699015b3b2ec3cc846f3b9a1f91602377923b58eafb342bd54b4433337335cb" protocol=ttrpc version=3
Sep 4 16:11:23.070390 systemd[1]: Started cri-containerd-9a4eb69130d099cb9f5ba03fec181711808d2f5b918065a74538944d5dafc637.scope - libcontainer container 9a4eb69130d099cb9f5ba03fec181711808d2f5b918065a74538944d5dafc637.
Sep 4 16:11:23.102213 systemd[1]: cri-containerd-9a4eb69130d099cb9f5ba03fec181711808d2f5b918065a74538944d5dafc637.scope: Deactivated successfully.
Sep 4 16:11:23.103391 containerd[1513]: time="2025-09-04T16:11:23.103361126Z" level=info msg="received exit event container_id:\"9a4eb69130d099cb9f5ba03fec181711808d2f5b918065a74538944d5dafc637\" id:\"9a4eb69130d099cb9f5ba03fec181711808d2f5b918065a74538944d5dafc637\" pid:4639 exited_at:{seconds:1757002283 nanos:103060965}"
Sep 4 16:11:23.103647 containerd[1513]: time="2025-09-04T16:11:23.103411286Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9a4eb69130d099cb9f5ba03fec181711808d2f5b918065a74538944d5dafc637\" id:\"9a4eb69130d099cb9f5ba03fec181711808d2f5b918065a74538944d5dafc637\" pid:4639 exited_at:{seconds:1757002283 nanos:103060965}"
Sep 4 16:11:23.110620 containerd[1513]: time="2025-09-04T16:11:23.110593680Z" level=info msg="StartContainer for \"9a4eb69130d099cb9f5ba03fec181711808d2f5b918065a74538944d5dafc637\" returns successfully"
Sep 4 16:11:23.121594 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a4eb69130d099cb9f5ba03fec181711808d2f5b918065a74538944d5dafc637-rootfs.mount: Deactivated successfully.
Sep 4 16:11:23.799925 kubelet[2667]: E0904 16:11:23.799880 2667 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 4 16:11:24.031497 kubelet[2667]: E0904 16:11:24.031462 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:11:24.034552 containerd[1513]: time="2025-09-04T16:11:24.034366754Z" level=info msg="CreateContainer within sandbox \"4174c6c704fc5e15e2b6e3905e4548670c04c5006275681e3cf6e4997b870929\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 16:11:24.043830 containerd[1513]: time="2025-09-04T16:11:24.043269435Z" level=info msg="Container 5ba1d43f26db5d1a4dafee2917c3201671fdd9ed6f9131100e05912b7abbdd1b: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:11:24.046465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2032580150.mount: Deactivated successfully.
Sep 4 16:11:24.051911 containerd[1513]: time="2025-09-04T16:11:24.051821674Z" level=info msg="CreateContainer within sandbox \"4174c6c704fc5e15e2b6e3905e4548670c04c5006275681e3cf6e4997b870929\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5ba1d43f26db5d1a4dafee2917c3201671fdd9ed6f9131100e05912b7abbdd1b\""
Sep 4 16:11:24.052389 containerd[1513]: time="2025-09-04T16:11:24.052355037Z" level=info msg="StartContainer for \"5ba1d43f26db5d1a4dafee2917c3201671fdd9ed6f9131100e05912b7abbdd1b\""
Sep 4 16:11:24.053082 containerd[1513]: time="2025-09-04T16:11:24.053031240Z" level=info msg="connecting to shim 5ba1d43f26db5d1a4dafee2917c3201671fdd9ed6f9131100e05912b7abbdd1b" address="unix:///run/containerd/s/d699015b3b2ec3cc846f3b9a1f91602377923b58eafb342bd54b4433337335cb" protocol=ttrpc version=3
Sep 4 16:11:24.075377 systemd[1]: Started cri-containerd-5ba1d43f26db5d1a4dafee2917c3201671fdd9ed6f9131100e05912b7abbdd1b.scope - libcontainer container 5ba1d43f26db5d1a4dafee2917c3201671fdd9ed6f9131100e05912b7abbdd1b.
Sep 4 16:11:24.096150 systemd[1]: cri-containerd-5ba1d43f26db5d1a4dafee2917c3201671fdd9ed6f9131100e05912b7abbdd1b.scope: Deactivated successfully.
Sep 4 16:11:24.098696 containerd[1513]: time="2025-09-04T16:11:24.098655451Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ba1d43f26db5d1a4dafee2917c3201671fdd9ed6f9131100e05912b7abbdd1b\" id:\"5ba1d43f26db5d1a4dafee2917c3201671fdd9ed6f9131100e05912b7abbdd1b\" pid:4680 exited_at:{seconds:1757002284 nanos:98447010}"
Sep 4 16:11:24.098780 containerd[1513]: time="2025-09-04T16:11:24.098720891Z" level=info msg="received exit event container_id:\"5ba1d43f26db5d1a4dafee2917c3201671fdd9ed6f9131100e05912b7abbdd1b\" id:\"5ba1d43f26db5d1a4dafee2917c3201671fdd9ed6f9131100e05912b7abbdd1b\" pid:4680 exited_at:{seconds:1757002284 nanos:98447010}"
Sep 4 16:11:24.105516 containerd[1513]: time="2025-09-04T16:11:24.105479923Z" level=info msg="StartContainer for \"5ba1d43f26db5d1a4dafee2917c3201671fdd9ed6f9131100e05912b7abbdd1b\" returns successfully"
Sep 4 16:11:24.116678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ba1d43f26db5d1a4dafee2917c3201671fdd9ed6f9131100e05912b7abbdd1b-rootfs.mount: Deactivated successfully.
Sep 4 16:11:24.746182 kubelet[2667]: E0904 16:11:24.746091 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:11:25.036956 kubelet[2667]: E0904 16:11:25.036664 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:11:25.039539 containerd[1513]: time="2025-09-04T16:11:25.039498919Z" level=info msg="CreateContainer within sandbox \"4174c6c704fc5e15e2b6e3905e4548670c04c5006275681e3cf6e4997b870929\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 16:11:25.048261 containerd[1513]: time="2025-09-04T16:11:25.047431515Z" level=info msg="Container 53194ea34e2e0c03ddebe08c22d89f9732dba6fb571c2a4d6ff5b3645fa23e3d: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:11:25.054609 containerd[1513]: time="2025-09-04T16:11:25.054575067Z" level=info msg="CreateContainer within sandbox \"4174c6c704fc5e15e2b6e3905e4548670c04c5006275681e3cf6e4997b870929\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"53194ea34e2e0c03ddebe08c22d89f9732dba6fb571c2a4d6ff5b3645fa23e3d\""
Sep 4 16:11:25.055087 containerd[1513]: time="2025-09-04T16:11:25.055056790Z" level=info msg="StartContainer for \"53194ea34e2e0c03ddebe08c22d89f9732dba6fb571c2a4d6ff5b3645fa23e3d\""
Sep 4 16:11:25.056195 containerd[1513]: time="2025-09-04T16:11:25.056170515Z" level=info msg="connecting to shim 53194ea34e2e0c03ddebe08c22d89f9732dba6fb571c2a4d6ff5b3645fa23e3d" address="unix:///run/containerd/s/d699015b3b2ec3cc846f3b9a1f91602377923b58eafb342bd54b4433337335cb" protocol=ttrpc version=3
Sep 4 16:11:25.082388 systemd[1]: Started cri-containerd-53194ea34e2e0c03ddebe08c22d89f9732dba6fb571c2a4d6ff5b3645fa23e3d.scope - libcontainer container 53194ea34e2e0c03ddebe08c22d89f9732dba6fb571c2a4d6ff5b3645fa23e3d.
Sep 4 16:11:25.108377 containerd[1513]: time="2025-09-04T16:11:25.108328111Z" level=info msg="StartContainer for \"53194ea34e2e0c03ddebe08c22d89f9732dba6fb571c2a4d6ff5b3645fa23e3d\" returns successfully"
Sep 4 16:11:25.154406 containerd[1513]: time="2025-09-04T16:11:25.154189319Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53194ea34e2e0c03ddebe08c22d89f9732dba6fb571c2a4d6ff5b3645fa23e3d\" id:\"f111c6de6761bd15e9f1088b1cfc9355abea9f70d4597f864eca5f6ffda41754\" pid:4751 exited_at:{seconds:1757002285 nanos:153897238}"
Sep 4 16:11:25.373281 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 4 16:11:25.744239 kubelet[2667]: E0904 16:11:25.744202 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:11:26.044883 kubelet[2667]: E0904 16:11:26.043911 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:11:26.060150 kubelet[2667]: I0904 16:11:26.060077 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qskl4" podStartSLOduration=5.060060265 podStartE2EDuration="5.060060265s" podCreationTimestamp="2025-09-04 16:11:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 16:11:26.059637503 +0000 UTC m=+87.393970381" watchObservedRunningTime="2025-09-04 16:11:26.060060265 +0000 UTC m=+87.394393143"
Sep 4 16:11:27.485724 kubelet[2667]: E0904 16:11:27.485695 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:11:27.744242 kubelet[2667]: E0904 16:11:27.744136 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:11:27.767924 containerd[1513]: time="2025-09-04T16:11:27.767823166Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53194ea34e2e0c03ddebe08c22d89f9732dba6fb571c2a4d6ff5b3645fa23e3d\" id:\"057e517b5e5cc17243accdf44c9ba891887757c85c477976a9681222e735b468\" pid:5157 exit_status:1 exited_at:{seconds:1757002287 nanos:767540085}"
Sep 4 16:11:28.081288 systemd-networkd[1425]: lxc_health: Link UP
Sep 4 16:11:28.093588 systemd-networkd[1425]: lxc_health: Gained carrier
Sep 4 16:11:29.485917 kubelet[2667]: E0904 16:11:29.485870 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:11:29.884303 systemd-networkd[1425]: lxc_health: Gained IPv6LL
Sep 4 16:11:29.887042 containerd[1513]: time="2025-09-04T16:11:29.887002689Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53194ea34e2e0c03ddebe08c22d89f9732dba6fb571c2a4d6ff5b3645fa23e3d\" id:\"d61e403375276811fad62512c01d90a6d82a0e7a708eb2e3f2eb5980a7cd8404\" pid:5289 exited_at:{seconds:1757002289 nanos:886638968}"
Sep 4 16:11:29.889880 kubelet[2667]: E0904 16:11:29.889845 2667 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:48910->127.0.0.1:39331: write tcp 127.0.0.1:48910->127.0.0.1:39331: write: broken pipe
Sep 4 16:11:30.053426 kubelet[2667]: E0904 16:11:30.052878 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:11:31.054339 kubelet[2667]: E0904 16:11:31.054305 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:11:32.005428 containerd[1513]: time="2025-09-04T16:11:32.005382233Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53194ea34e2e0c03ddebe08c22d89f9732dba6fb571c2a4d6ff5b3645fa23e3d\" id:\"7f2b2d5902c9e1aec7439957cf9b085ca814a06bb71c0a555501badc084bf166\" pid:5322 exited_at:{seconds:1757002292 nanos:5004192}"
Sep 4 16:11:34.102657 containerd[1513]: time="2025-09-04T16:11:34.102529295Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53194ea34e2e0c03ddebe08c22d89f9732dba6fb571c2a4d6ff5b3645fa23e3d\" id:\"9a91181bc64b4eeb04126691cc277079b919bdd010e6d5f79191656e51ce9d17\" pid:5350 exited_at:{seconds:1757002294 nanos:102242414}"
Sep 4 16:11:34.107057 sshd[4483]: Connection closed by 10.0.0.1 port 48496
Sep 4 16:11:34.107703 sshd-session[4476]: pam_unix(sshd:session): session closed for user core
Sep 4 16:11:34.110812 systemd[1]: sshd@26-10.0.0.140:22-10.0.0.1:48496.service: Deactivated successfully.
Sep 4 16:11:34.112767 systemd[1]: session-27.scope: Deactivated successfully.
Sep 4 16:11:34.113499 systemd-logind[1497]: Session 27 logged out. Waiting for processes to exit.
Sep 4 16:11:34.115176 systemd-logind[1497]: Removed session 27.