Sep 16 04:30:57.788088 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 16 04:30:57.788108 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Sep 16 03:05:48 -00 2025 Sep 16 04:30:57.788117 kernel: KASLR enabled Sep 16 04:30:57.788123 kernel: efi: EFI v2.7 by EDK II Sep 16 04:30:57.788128 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb221f18 Sep 16 04:30:57.788133 kernel: random: crng init done Sep 16 04:30:57.788140 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Sep 16 04:30:57.788146 kernel: secureboot: Secure boot enabled Sep 16 04:30:57.788152 kernel: ACPI: Early table checksum verification disabled Sep 16 04:30:57.788159 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS ) Sep 16 04:30:57.788165 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 16 04:30:57.788170 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:30:57.788176 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:30:57.788182 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:30:57.788189 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:30:57.788196 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:30:57.788203 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:30:57.788209 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:30:57.788215 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:30:57.788222 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:30:57.788228 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 16 04:30:57.788234 kernel: ACPI: Use ACPI SPCR as default console: No Sep 16 04:30:57.788240 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 16 04:30:57.788246 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff] Sep 16 04:30:57.788265 kernel: Zone ranges: Sep 16 04:30:57.788272 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 16 04:30:57.788279 kernel: DMA32 empty Sep 16 04:30:57.788284 kernel: Normal empty Sep 16 04:30:57.788290 kernel: Device empty Sep 16 04:30:57.788296 kernel: Movable zone start for each node Sep 16 04:30:57.788302 kernel: Early memory node ranges Sep 16 04:30:57.788308 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff] Sep 16 04:30:57.788315 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff] Sep 16 04:30:57.788321 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff] Sep 16 04:30:57.788327 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff] Sep 16 04:30:57.788333 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff] Sep 16 04:30:57.788339 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff] Sep 16 04:30:57.788347 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff] Sep 16 04:30:57.788353 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff] Sep 16 04:30:57.788359 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Sep 16 04:30:57.788367 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000000dcffffff] Sep 16 04:30:57.788374 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 16 04:30:57.788380 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1 Sep 16 04:30:57.788386 kernel: psci: probing for conduit method from ACPI. Sep 16 04:30:57.788394 kernel: psci: PSCIv1.1 detected in firmware. Sep 16 04:30:57.788400 kernel: psci: Using standard PSCI v0.2 function IDs Sep 16 04:30:57.788407 kernel: psci: Trusted OS migration not required Sep 16 04:30:57.788413 kernel: psci: SMC Calling Convention v1.1 Sep 16 04:30:57.788420 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 16 04:30:57.788426 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Sep 16 04:30:57.788433 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Sep 16 04:30:57.788439 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 16 04:30:57.788445 kernel: Detected PIPT I-cache on CPU0 Sep 16 04:30:57.788453 kernel: CPU features: detected: GIC system register CPU interface Sep 16 04:30:57.788460 kernel: CPU features: detected: Spectre-v4 Sep 16 04:30:57.788466 kernel: CPU features: detected: Spectre-BHB Sep 16 04:30:57.788472 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 16 04:30:57.788479 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 16 04:30:57.788485 kernel: CPU features: detected: ARM erratum 1418040 Sep 16 04:30:57.788491 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 16 04:30:57.788498 kernel: alternatives: applying boot alternatives Sep 16 04:30:57.788505 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=eff5cc3c399cf6fc52e3071751a09276871b099078da6d1b1a498405d04a9313 Sep 16 04:30:57.788512 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 16 04:30:57.788518 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 16 04:30:57.788526 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 16 04:30:57.788533 kernel: Fallback order for Node 0: 0 Sep 16 04:30:57.788539 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Sep 16 04:30:57.788545 kernel: Policy zone: DMA Sep 16 04:30:57.788552 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 16 04:30:57.788558 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Sep 16 04:30:57.788564 kernel: software IO TLB: area num 4. Sep 16 04:30:57.788571 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Sep 16 04:30:57.788577 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB) Sep 16 04:30:57.788584 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 16 04:30:57.788590 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 16 04:30:57.788597 kernel: rcu: RCU event tracing is enabled. Sep 16 04:30:57.788605 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 16 04:30:57.788620 kernel: Trampoline variant of Tasks RCU enabled. Sep 16 04:30:57.788627 kernel: Tracing variant of Tasks RCU enabled. Sep 16 04:30:57.788633 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 16 04:30:57.788640 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 16 04:30:57.788650 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 16 04:30:57.788659 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 16 04:30:57.788669 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 16 04:30:57.788677 kernel: GICv3: 256 SPIs implemented Sep 16 04:30:57.788686 kernel: GICv3: 0 Extended SPIs implemented Sep 16 04:30:57.788692 kernel: Root IRQ handler: gic_handle_irq Sep 16 04:30:57.788701 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 16 04:30:57.788708 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Sep 16 04:30:57.788714 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 16 04:30:57.788721 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 16 04:30:57.788727 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Sep 16 04:30:57.788734 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Sep 16 04:30:57.788740 kernel: GICv3: using LPI property table @0x0000000040130000 Sep 16 04:30:57.788747 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Sep 16 04:30:57.788753 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 16 04:30:57.788759 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 16 04:30:57.788766 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 16 04:30:57.788772 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 16 04:30:57.788780 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 16 04:30:57.788787 kernel: arm-pv: using stolen time PV Sep 16 04:30:57.788793 kernel: Console: colour dummy device 80x25 Sep 16 04:30:57.788800 kernel: ACPI: Core revision 20240827 Sep 16 04:30:57.788806 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 16 04:30:57.788813 kernel: pid_max: default: 32768 minimum: 301 Sep 16 04:30:57.788819 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 16 04:30:57.788826 kernel: landlock: Up and running. Sep 16 04:30:57.788832 kernel: SELinux: Initializing. Sep 16 04:30:57.788839 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 16 04:30:57.788847 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 16 04:30:57.788853 kernel: rcu: Hierarchical SRCU implementation. Sep 16 04:30:57.788860 kernel: rcu: Max phase no-delay instances is 400. Sep 16 04:30:57.788867 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 16 04:30:57.788873 kernel: Remapping and enabling EFI services. Sep 16 04:30:57.788879 kernel: smp: Bringing up secondary CPUs ... 
Sep 16 04:30:57.788886 kernel: Detected PIPT I-cache on CPU1 Sep 16 04:30:57.788892 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 16 04:30:57.788899 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Sep 16 04:30:57.788911 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 16 04:30:57.788918 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 16 04:30:57.788926 kernel: Detected PIPT I-cache on CPU2 Sep 16 04:30:57.788933 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 16 04:30:57.788940 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Sep 16 04:30:57.788947 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 16 04:30:57.788953 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 16 04:30:57.788961 kernel: Detected PIPT I-cache on CPU3 Sep 16 04:30:57.788969 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 16 04:30:57.788976 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Sep 16 04:30:57.788983 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 16 04:30:57.788989 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 16 04:30:57.789013 kernel: smp: Brought up 1 node, 4 CPUs Sep 16 04:30:57.789020 kernel: SMP: Total of 4 processors activated. Sep 16 04:30:57.789027 kernel: CPU: All CPU(s) started at EL1 Sep 16 04:30:57.789034 kernel: CPU features: detected: 32-bit EL0 Support Sep 16 04:30:57.789041 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 16 04:30:57.789049 kernel: CPU features: detected: Common not Private translations Sep 16 04:30:57.789056 kernel: CPU features: detected: CRC32 instructions Sep 16 04:30:57.789063 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 16 04:30:57.789069 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 16 04:30:57.789076 kernel: CPU features: detected: LSE atomic instructions Sep 16 04:30:57.789083 kernel: CPU features: detected: Privileged Access Never Sep 16 04:30:57.789090 kernel: CPU features: detected: RAS Extension Support Sep 16 04:30:57.789097 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 16 04:30:57.789104 kernel: alternatives: applying system-wide alternatives Sep 16 04:30:57.789112 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Sep 16 04:30:57.789119 kernel: Memory: 2422372K/2572288K available (11136K kernel code, 2440K rwdata, 9068K rodata, 38976K init, 1038K bss, 127580K reserved, 16384K cma-reserved) Sep 16 04:30:57.789126 kernel: devtmpfs: initialized Sep 16 04:30:57.789133 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 16 04:30:57.789140 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 16 04:30:57.789147 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 16 04:30:57.789154 kernel: 0 pages in range for non-PLT usage Sep 16 04:30:57.789161 kernel: 508560 pages in range for PLT usage Sep 16 04:30:57.789167 kernel: pinctrl core: initialized pinctrl subsystem Sep 16 04:30:57.789175 kernel: SMBIOS 3.0.0 present. 
Sep 16 04:30:57.789182 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Sep 16 04:30:57.789189 kernel: DMI: Memory slots populated: 1/1 Sep 16 04:30:57.789196 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 16 04:30:57.789203 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 16 04:30:57.789209 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 16 04:30:57.789216 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 16 04:30:57.789223 kernel: audit: initializing netlink subsys (disabled) Sep 16 04:30:57.789230 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 Sep 16 04:30:57.789238 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 16 04:30:57.789245 kernel: cpuidle: using governor menu Sep 16 04:30:57.789252 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 16 04:30:57.789259 kernel: ASID allocator initialised with 32768 entries Sep 16 04:30:57.789266 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 16 04:30:57.789272 kernel: Serial: AMBA PL011 UART driver Sep 16 04:30:57.789279 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 16 04:30:57.789286 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 16 04:30:57.789293 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 16 04:30:57.789301 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 16 04:30:57.789308 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 16 04:30:57.789315 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 16 04:30:57.789322 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 16 04:30:57.789328 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 16 04:30:57.789335 kernel: ACPI: Added _OSI(Module Device) Sep 16 04:30:57.789342 kernel: ACPI: Added _OSI(Processor Device) Sep 16 04:30:57.789349 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 16 04:30:57.789356 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 16 04:30:57.789364 kernel: ACPI: Interpreter enabled Sep 16 04:30:57.789370 kernel: ACPI: Using GIC for interrupt routing Sep 16 04:30:57.789377 kernel: ACPI: MCFG table detected, 1 entries Sep 16 04:30:57.789384 kernel: ACPI: CPU0 has been hot-added Sep 16 04:30:57.789391 kernel: ACPI: CPU1 has been hot-added Sep 16 04:30:57.789398 kernel: ACPI: CPU2 has been hot-added Sep 16 04:30:57.789404 kernel: ACPI: CPU3 has been hot-added Sep 16 04:30:57.789411 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 16 04:30:57.789418 kernel: printk: legacy console [ttyAMA0] enabled Sep 16 04:30:57.789426 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 16 04:30:57.789548 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 16 04:30:57.789622 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 16 04:30:57.789683 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 16 04:30:57.789740 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 16 04:30:57.789796 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 16 04:30:57.789805 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 16 04:30:57.789814 
kernel: PCI host bridge to bus 0000:00 Sep 16 04:30:57.789881 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 16 04:30:57.789935 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 16 04:30:57.789987 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 16 04:30:57.790058 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 16 04:30:57.790132 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Sep 16 04:30:57.790202 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 16 04:30:57.790264 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Sep 16 04:30:57.790325 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Sep 16 04:30:57.790384 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Sep 16 04:30:57.790443 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Sep 16 04:30:57.790503 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Sep 16 04:30:57.790561 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Sep 16 04:30:57.790649 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 16 04:30:57.790710 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 16 04:30:57.790763 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 16 04:30:57.790772 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 16 04:30:57.790779 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 16 04:30:57.790785 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 16 04:30:57.790792 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 16 04:30:57.790800 kernel: iommu: Default domain type: Translated Sep 16 04:30:57.790806 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 16 04:30:57.790815 kernel: efivars: Registered efivars operations Sep 16 04:30:57.790822 kernel: vgaarb: loaded Sep 16 04:30:57.790829 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 16 04:30:57.790835 kernel: VFS: Disk quotas dquot_6.6.0 Sep 16 04:30:57.790842 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 16 04:30:57.790849 kernel: pnp: PnP ACPI init Sep 16 04:30:57.790914 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 16 04:30:57.790924 kernel: pnp: PnP ACPI: found 1 devices Sep 16 04:30:57.790933 kernel: NET: Registered PF_INET protocol family Sep 16 04:30:57.790940 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 16 04:30:57.790947 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 16 04:30:57.790954 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 16 04:30:57.790961 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 16 04:30:57.790968 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 16 04:30:57.790975 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 16 04:30:57.790982 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 16 04:30:57.790989 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 16 04:30:57.791009 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 16 04:30:57.791017 kernel: PCI: CLS 0 bytes, default 64 Sep 16 04:30:57.791025 
kernel: kvm [1]: HYP mode not available Sep 16 04:30:57.791031 kernel: Initialise system trusted keyrings Sep 16 04:30:57.791038 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 16 04:30:57.791045 kernel: Key type asymmetric registered Sep 16 04:30:57.791052 kernel: Asymmetric key parser 'x509' registered Sep 16 04:30:57.791059 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 16 04:30:57.791066 kernel: io scheduler mq-deadline registered Sep 16 04:30:57.791074 kernel: io scheduler kyber registered Sep 16 04:30:57.791081 kernel: io scheduler bfq registered Sep 16 04:30:57.791088 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 16 04:30:57.791095 kernel: ACPI: button: Power Button [PWRB] Sep 16 04:30:57.791102 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 16 04:30:57.791167 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 16 04:30:57.791176 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 16 04:30:57.791183 kernel: thunder_xcv, ver 1.0 Sep 16 04:30:57.791190 kernel: thunder_bgx, ver 1.0 Sep 16 04:30:57.791198 kernel: nicpf, ver 1.0 Sep 16 04:30:57.791205 kernel: nicvf, ver 1.0 Sep 16 04:30:57.791273 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 16 04:30:57.791328 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-16T04:30:57 UTC (1757997057) Sep 16 04:30:57.791338 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 16 04:30:57.791345 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Sep 16 04:30:57.791352 kernel: watchdog: NMI not fully supported Sep 16 04:30:57.791359 kernel: watchdog: Hard watchdog permanently disabled Sep 16 04:30:57.791367 kernel: NET: Registered PF_INET6 protocol family Sep 16 04:30:57.791374 kernel: Segment Routing with IPv6 Sep 16 04:30:57.791380 kernel: In-situ OAM (IOAM) with IPv6 Sep 16 04:30:57.791387 kernel: NET: Registered PF_PACKET protocol family Sep 16 04:30:57.791394 kernel: Key type dns_resolver registered Sep 16 04:30:57.791400 kernel: registered taskstats version 1 Sep 16 04:30:57.791407 kernel: Loading compiled-in X.509 certificates Sep 16 04:30:57.791414 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: 99eb88579c3d58869b2224a85ec8efa5647af805' Sep 16 04:30:57.791421 kernel: Demotion targets for Node 0: null Sep 16 04:30:57.791429 kernel: Key type .fscrypt registered Sep 16 04:30:57.791436 kernel: Key type fscrypt-provisioning registered Sep 16 04:30:57.791443 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 16 04:30:57.791450 kernel: ima: Allocated hash algorithm: sha1 Sep 16 04:30:57.791457 kernel: ima: No architecture policies found Sep 16 04:30:57.791463 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 16 04:30:57.791470 kernel: clk: Disabling unused clocks Sep 16 04:30:57.791477 kernel: PM: genpd: Disabling unused power domains Sep 16 04:30:57.791484 kernel: Warning: unable to open an initial console. Sep 16 04:30:57.791492 kernel: Freeing unused kernel memory: 38976K Sep 16 04:30:57.791499 kernel: Run /init as init process Sep 16 04:30:57.791506 kernel: with arguments: Sep 16 04:30:57.791513 kernel: /init Sep 16 04:30:57.791519 kernel: with environment: Sep 16 04:30:57.791526 kernel: HOME=/ Sep 16 04:30:57.791533 kernel: TERM=linux Sep 16 04:30:57.791540 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 16 04:30:57.791547 systemd[1]: Successfully made /usr/ read-only. 
Sep 16 04:30:57.791558 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 16 04:30:57.791566 systemd[1]: Detected virtualization kvm. Sep 16 04:30:57.791573 systemd[1]: Detected architecture arm64. Sep 16 04:30:57.791580 systemd[1]: Running in initrd. Sep 16 04:30:57.791587 systemd[1]: No hostname configured, using default hostname. Sep 16 04:30:57.791595 systemd[1]: Hostname set to . Sep 16 04:30:57.791602 systemd[1]: Initializing machine ID from VM UUID. Sep 16 04:30:57.791617 systemd[1]: Queued start job for default target initrd.target. Sep 16 04:30:57.791625 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 04:30:57.791632 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 04:30:57.791640 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 16 04:30:57.791648 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 16 04:30:57.791655 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 16 04:30:57.791663 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 16 04:30:57.791673 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 16 04:30:57.791680 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 16 04:30:57.791688 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 04:30:57.791695 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 16 04:30:57.791703 systemd[1]: Reached target paths.target - Path Units. Sep 16 04:30:57.791710 systemd[1]: Reached target slices.target - Slice Units. Sep 16 04:30:57.791717 systemd[1]: Reached target swap.target - Swaps. Sep 16 04:30:57.791725 systemd[1]: Reached target timers.target - Timer Units. Sep 16 04:30:57.791733 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 16 04:30:57.791741 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 16 04:30:57.791748 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 16 04:30:57.791755 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 16 04:30:57.791763 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 16 04:30:57.791770 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 16 04:30:57.791778 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 04:30:57.791785 systemd[1]: Reached target sockets.target - Socket Units. Sep 16 04:30:57.791793 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 16 04:30:57.791802 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 16 04:30:57.791809 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Sep 16 04:30:57.791817 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 16 04:30:57.791824 systemd[1]: Starting systemd-fsck-usr.service... Sep 16 04:30:57.791832 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 16 04:30:57.791839 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 16 04:30:57.791846 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:30:57.791854 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 16 04:30:57.791863 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 04:30:57.791871 systemd[1]: Finished systemd-fsck-usr.service. Sep 16 04:30:57.791878 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 16 04:30:57.791900 systemd-journald[244]: Collecting audit messages is disabled. Sep 16 04:30:57.791919 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:30:57.791927 systemd-journald[244]: Journal started Sep 16 04:30:57.791945 systemd-journald[244]: Runtime Journal (/run/log/journal/c046abd964494ab0a13b00937f914b8e) is 6M, max 48.5M, 42.4M free. Sep 16 04:30:57.785339 systemd-modules-load[246]: Inserted module 'overlay' Sep 16 04:30:57.796009 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 16 04:30:57.797853 systemd[1]: Started systemd-journald.service - Journal Service. Sep 16 04:30:57.799625 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 16 04:30:57.801381 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 16 04:30:57.802210 kernel: Bridge firewalling registered Sep 16 04:30:57.801850 systemd-modules-load[246]: Inserted module 'br_netfilter' Sep 16 04:30:57.802638 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 16 04:30:57.805227 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 16 04:30:57.808906 systemd-tmpfiles[266]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 16 04:30:57.809193 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:30:57.813182 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 16 04:30:57.815970 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 04:30:57.817131 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 16 04:30:57.819924 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 16 04:30:57.820958 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:30:57.823028 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 04:30:57.825668 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 16 04:30:57.834923 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=eff5cc3c399cf6fc52e3071751a09276871b099078da6d1b1a498405d04a9313 Sep 16 04:30:57.863682 systemd-resolved[289]: Positive Trust Anchors: Sep 16 04:30:57.863698 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 16 04:30:57.863733 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 16 04:30:57.868457 systemd-resolved[289]: Defaulting to hostname 'linux'. Sep 16 04:30:57.870586 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 16 04:30:57.871535 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 16 04:30:57.902024 kernel: SCSI subsystem initialized Sep 16 04:30:57.907018 kernel: Loading iSCSI transport class v2.0-870. Sep 16 04:30:57.914022 kernel: iscsi: registered transport (tcp) Sep 16 04:30:57.927025 kernel: iscsi: registered transport (qla4xxx) Sep 16 04:30:57.927062 kernel: QLogic iSCSI HBA Driver Sep 16 04:30:57.942809 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 16 04:30:57.958193 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 04:30:57.960164 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 16 04:30:58.001302 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 16 04:30:58.003889 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 16 04:30:58.062025 kernel: raid6: neonx8 gen() 15518 MB/s Sep 16 04:30:58.079009 kernel: raid6: neonx4 gen() 15754 MB/s Sep 16 04:30:58.096012 kernel: raid6: neonx2 gen() 13258 MB/s Sep 16 04:30:58.113013 kernel: raid6: neonx1 gen() 10368 MB/s Sep 16 04:30:58.130034 kernel: raid6: int64x8 gen() 6678 MB/s Sep 16 04:30:58.147015 kernel: raid6: int64x4 gen() 7265 MB/s Sep 16 04:30:58.164019 kernel: raid6: int64x2 gen() 6074 MB/s Sep 16 04:30:58.181016 kernel: raid6: int64x1 gen() 5022 MB/s Sep 16 04:30:58.181032 kernel: raid6: using algorithm neonx4 gen() 15754 MB/s Sep 16 04:30:58.198023 kernel: raid6: .... xor() 12333 MB/s, rmw enabled Sep 16 04:30:58.198037 kernel: raid6: using neon recovery algorithm Sep 16 04:30:58.203056 kernel: xor: measuring software checksum speed Sep 16 04:30:58.203089 kernel: 8regs : 20582 MB/sec Sep 16 04:30:58.204107 kernel: 32regs : 21681 MB/sec Sep 16 04:30:58.204120 kernel: arm64_neon : 27007 MB/sec Sep 16 04:30:58.204131 kernel: xor: using function: arm64_neon (27007 MB/sec) Sep 16 04:30:58.264020 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 16 04:30:58.270431 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Sep 16 04:30:58.272691 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 04:30:58.296108 systemd-udevd[498]: Using default interface naming scheme 'v255'. Sep 16 04:30:58.300372 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 04:30:58.302531 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 16 04:30:58.334299 dracut-pre-trigger[504]: rd.md=0: removing MD RAID activation Sep 16 04:30:58.355286 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 16 04:30:58.360492 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 16 04:30:58.415006 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 04:30:58.417983 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 16 04:30:58.464014 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Sep 16 04:30:58.467025 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 16 04:30:58.470255 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 16 04:30:58.470292 kernel: GPT:9289727 != 19775487 Sep 16 04:30:58.470304 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 16 04:30:58.470319 kernel: GPT:9289727 != 19775487 Sep 16 04:30:58.470328 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 16 04:30:58.471006 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 16 04:30:58.471334 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:30:58.471447 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:30:58.478131 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:30:58.482761 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:30:58.506192 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 16 04:30:58.514741 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 16 04:30:58.516601 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 16 04:30:58.518812 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:30:58.527618 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 16 04:30:58.539029 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 16 04:30:58.540108 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 16 04:30:58.542566 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 16 04:30:58.544443 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 04:30:58.546253 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 16 04:30:58.548698 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 16 04:30:58.550443 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 16 04:30:58.566362 disk-uuid[592]: Primary Header is updated. Sep 16 04:30:58.566362 disk-uuid[592]: Secondary Entries is updated. Sep 16 04:30:58.566362 disk-uuid[592]: Secondary Header is updated. 
Sep 16 04:30:58.569336 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 16 04:30:58.573037 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 16 04:30:58.576015 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 16 04:30:59.577821 disk-uuid[595]: The operation has completed successfully. Sep 16 04:30:59.579038 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 16 04:30:59.605573 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 16 04:30:59.605699 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 16 04:30:59.629320 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 16 04:30:59.657934 sh[612]: Success Sep 16 04:30:59.670221 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 16 04:30:59.670259 kernel: device-mapper: uevent: version 1.0.3 Sep 16 04:30:59.671171 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 16 04:30:59.678075 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 16 04:30:59.700843 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 16 04:30:59.703420 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 16 04:30:59.718217 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 16 04:30:59.723006 kernel: BTRFS: device fsid 782b6948-7aaa-439e-9946-c8fdb4d8f287 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (624) Sep 16 04:30:59.725027 kernel: BTRFS info (device dm-0): first mount of filesystem 782b6948-7aaa-439e-9946-c8fdb4d8f287 Sep 16 04:30:59.725082 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 16 04:30:59.729008 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 16 04:30:59.729047 kernel: BTRFS info (device dm-0): enabling free space tree Sep 16 04:30:59.729865 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 16 04:30:59.730952 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 16 04:30:59.731937 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 16 04:30:59.732696 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 16 04:30:59.735308 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 16 04:30:59.758937 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (655) Sep 16 04:30:59.758985 kernel: BTRFS info (device vda6): first mount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:30:59.759010 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 16 04:30:59.761449 kernel: BTRFS info (device vda6): turning on async discard Sep 16 04:30:59.761481 kernel: BTRFS info (device vda6): enabling free space tree Sep 16 04:30:59.766018 kernel: BTRFS info (device vda6): last unmount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:30:59.766823 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 16 04:30:59.769563 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 16 04:30:59.832382 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Sep 16 04:30:59.834987 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 16 04:30:59.869407 systemd-networkd[803]: lo: Link UP Sep 16 04:30:59.869421 systemd-networkd[803]: lo: Gained carrier Sep 16 04:30:59.870271 systemd-networkd[803]: Enumeration completed Sep 16 04:30:59.870736 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 16 04:30:59.870796 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:30:59.872906 ignition[703]: Ignition 2.22.0 Sep 16 04:30:59.870799 systemd-networkd[803]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:30:59.872912 ignition[703]: Stage: fetch-offline Sep 16 04:30:59.871545 systemd-networkd[803]: eth0: Link UP Sep 16 04:30:59.872939 ignition[703]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:30:59.871654 systemd-networkd[803]: eth0: Gained carrier Sep 16 04:30:59.872946 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 04:30:59.871663 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:30:59.873043 ignition[703]: parsed url from cmdline: "" Sep 16 04:30:59.873382 systemd[1]: Reached target network.target - Network. Sep 16 04:30:59.873047 ignition[703]: no config URL provided Sep 16 04:30:59.873051 ignition[703]: reading system config file "/usr/lib/ignition/user.ign" Sep 16 04:30:59.873058 ignition[703]: no config at "/usr/lib/ignition/user.ign" Sep 16 04:30:59.873075 ignition[703]: op(1): [started] loading QEMU firmware config module Sep 16 04:30:59.873079 ignition[703]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 16 04:30:59.879569 ignition[703]: op(1): [finished] loading QEMU firmware config module Sep 16 04:30:59.894040 systemd-networkd[803]: eth0: DHCPv4 address 10.0.0.83/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 16 04:30:59.924908 ignition[703]: parsing config with SHA512: 4ad2105ac7f4ff598f26d807f42a36e4f3dbac6eae5960f56628bec69d6a5fd0de42ec712aca436e6b8bb5d04b09f480ed80ac49e65ed7a66365b69ec00018f2 Sep 16 04:30:59.930448 unknown[703]: fetched base config from "system" Sep 16 04:30:59.931107 unknown[703]: fetched user config from "qemu" Sep 16 04:30:59.931487 ignition[703]: fetch-offline: fetch-offline passed Sep 16 04:30:59.931550 ignition[703]: Ignition finished successfully Sep 16 04:30:59.932961 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 16 04:30:59.934207 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 16 04:30:59.935042 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 16 04:30:59.968060 ignition[812]: Ignition 2.22.0 Sep 16 04:30:59.968077 ignition[812]: Stage: kargs Sep 16 04:30:59.968212 ignition[812]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:30:59.968221 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 04:30:59.968971 ignition[812]: kargs: kargs passed Sep 16 04:30:59.969031 ignition[812]: Ignition finished successfully Sep 16 04:30:59.973478 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 16 04:30:59.976146 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 16 04:30:59.999339 ignition[821]: Ignition 2.22.0 Sep 16 04:30:59.999353 ignition[821]: Stage: disks Sep 16 04:30:59.999482 ignition[821]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:30:59.999491 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 04:31:00.000258 ignition[821]: disks: disks passed Sep 16 04:31:00.002292 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 16 04:31:00.000298 ignition[821]: Ignition finished successfully Sep 16 04:31:00.004014 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 16 04:31:00.005060 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 16 04:31:00.006659 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 16 04:31:00.008211 systemd[1]: Reached target sysinit.target - System Initialization. Sep 16 04:31:00.009627 systemd[1]: Reached target basic.target - Basic System. Sep 16 04:31:00.011703 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 16 04:31:00.033874 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 16 04:31:00.038686 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 16 04:31:00.041884 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 16 04:31:00.101034 kernel: EXT4-fs (vda9): mounted filesystem a00d22d9-68b1-4a84-acfc-9fae1fca53dd r/w with ordered data mode. Quota mode: none. Sep 16 04:31:00.100905 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 16 04:31:00.101971 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 16 04:31:00.104457 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 16 04:31:00.106314 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 16 04:31:00.107186 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 16 04:31:00.107227 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 16 04:31:00.107249 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 16 04:31:00.117296 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 16 04:31:00.119040 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 16 04:31:00.122015 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (839) Sep 16 04:31:00.124745 kernel: BTRFS info (device vda6): first mount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:31:00.124771 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 16 04:31:00.124783 kernel: BTRFS info (device vda6): turning on async discard Sep 16 04:31:00.124796 kernel: BTRFS info (device vda6): enabling free space tree Sep 16 04:31:00.126278 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 16 04:31:00.155371 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory Sep 16 04:31:00.159169 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory Sep 16 04:31:00.162115 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory Sep 16 04:31:00.165687 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory Sep 16 04:31:00.228656 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Sep 16 04:31:00.230728 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 16 04:31:00.232141 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 16 04:31:00.251059 kernel: BTRFS info (device vda6): last unmount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:31:00.261218 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 16 04:31:00.279213 ignition[954]: INFO : Ignition 2.22.0 Sep 16 04:31:00.279213 ignition[954]: INFO : Stage: mount Sep 16 04:31:00.280471 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 04:31:00.280471 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 04:31:00.280471 ignition[954]: INFO : mount: mount passed Sep 16 04:31:00.280471 ignition[954]: INFO : Ignition finished successfully Sep 16 04:31:00.282156 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 16 04:31:00.286012 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 16 04:31:00.849952 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 16 04:31:00.855430 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 16 04:31:00.883026 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (966) Sep 16 04:31:00.885049 kernel: BTRFS info (device vda6): first mount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:31:00.885070 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 16 04:31:00.888116 kernel: BTRFS info (device vda6): turning on async discard Sep 16 04:31:00.888148 kernel: BTRFS info (device vda6): enabling free space tree Sep 16 04:31:00.889387 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 16 04:31:00.935397 ignition[983]: INFO : Ignition 2.22.0 Sep 16 04:31:00.935397 ignition[983]: INFO : Stage: files Sep 16 04:31:00.936714 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 04:31:00.936714 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 04:31:00.936714 ignition[983]: DEBUG : files: compiled without relabeling support, skipping Sep 16 04:31:00.939566 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 16 04:31:00.939566 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 16 04:31:00.939566 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 16 04:31:00.939566 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 16 04:31:00.939566 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 16 04:31:00.939098 unknown[983]: wrote ssh authorized keys file for user: core Sep 16 04:31:00.945360 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 16 04:31:00.945360 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Sep 16 04:31:00.946819 systemd-networkd[803]: eth0: Gained IPv6LL Sep 16 04:31:01.010155 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 16 04:31:01.289316 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 16 04:31:01.289316 ignition[983]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 16 04:31:01.289316 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 16 04:31:01.518396 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 16 04:31:01.643362 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 16 04:31:01.643362 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 16 04:31:01.647056 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 16 04:31:01.647056 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 16 04:31:01.647056 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 16 04:31:01.647056 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 16 04:31:01.647056 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 16 04:31:01.647056 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 16 04:31:01.647056 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 16 04:31:01.657133 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 16 04:31:01.657133 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 16 04:31:01.657133 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 16 04:31:01.662045 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 16 04:31:01.662045 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 16 04:31:01.662045 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Sep 16 04:31:01.978664 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 16 04:31:03.100123 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 16 04:31:03.100123 ignition[983]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 16 04:31:03.103310 ignition[983]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 16 04:31:03.106696 ignition[983]: INFO : files: op(c): op(d): [finished] 
writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 16 04:31:03.106696 ignition[983]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 16 04:31:03.106696 ignition[983]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 16 04:31:03.111431 ignition[983]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 16 04:31:03.111431 ignition[983]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 16 04:31:03.111431 ignition[983]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 16 04:31:03.111431 ignition[983]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 16 04:31:03.120775 ignition[983]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 16 04:31:03.123561 ignition[983]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 16 04:31:03.124715 ignition[983]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 16 04:31:03.124715 ignition[983]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 16 04:31:03.124715 ignition[983]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 16 04:31:03.124715 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 16 04:31:03.124715 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 16 04:31:03.124715 ignition[983]: INFO : files: files passed Sep 16 04:31:03.124715 ignition[983]: INFO : Ignition finished successfully Sep 16 04:31:03.128456 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 16 04:31:03.131154 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 16 04:31:03.136529 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 16 04:31:03.150285 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 16 04:31:03.150380 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 16 04:31:03.152651 initrd-setup-root-after-ignition[1012]: grep: /sysroot/oem/oem-release: No such file or directory Sep 16 04:31:03.155021 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 16 04:31:03.155021 initrd-setup-root-after-ignition[1014]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 16 04:31:03.157591 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 16 04:31:03.158226 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 16 04:31:03.159809 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 16 04:31:03.162066 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 16 04:31:03.196059 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 16 04:31:03.198036 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Sep 16 04:31:03.199383 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 16 04:31:03.203457 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 16 04:31:03.204791 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 16 04:31:03.205558 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 16 04:31:03.228026 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 16 04:31:03.230034 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 16 04:31:03.251855 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 16 04:31:03.252920 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 04:31:03.254528 systemd[1]: Stopped target timers.target - Timer Units. Sep 16 04:31:03.255901 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 16 04:31:03.256038 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 16 04:31:03.257946 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 16 04:31:03.259541 systemd[1]: Stopped target basic.target - Basic System. Sep 16 04:31:03.260804 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 16 04:31:03.262317 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 16 04:31:03.263715 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 16 04:31:03.265169 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 16 04:31:03.266838 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 16 04:31:03.268206 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 16 04:31:03.269921 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 16 04:31:03.271427 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 16 04:31:03.272774 systemd[1]: Stopped target swap.target - Swaps. Sep 16 04:31:03.273926 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 16 04:31:03.274068 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 16 04:31:03.275806 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 16 04:31:03.277366 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 04:31:03.278864 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 16 04:31:03.283061 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 04:31:03.284011 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 16 04:31:03.284132 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 16 04:31:03.286353 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 16 04:31:03.286461 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 16 04:31:03.288027 systemd[1]: Stopped target paths.target - Path Units. Sep 16 04:31:03.289278 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 16 04:31:03.289382 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 04:31:03.290932 systemd[1]: Stopped target slices.target - Slice Units. Sep 16 04:31:03.292096 systemd[1]: Stopped target sockets.target - Socket Units. 
Sep 16 04:31:03.293500 systemd[1]: iscsid.socket: Deactivated successfully. Sep 16 04:31:03.293582 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 16 04:31:03.295110 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 16 04:31:03.295183 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 16 04:31:03.296369 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 16 04:31:03.296478 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 16 04:31:03.297816 systemd[1]: ignition-files.service: Deactivated successfully. Sep 16 04:31:03.297909 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 16 04:31:03.299809 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 16 04:31:03.301363 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 16 04:31:03.302724 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 16 04:31:03.302852 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 04:31:03.304750 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 16 04:31:03.304840 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 16 04:31:03.312496 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 16 04:31:03.314203 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 16 04:31:03.322800 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 16 04:31:03.332379 ignition[1040]: INFO : Ignition 2.22.0 Sep 16 04:31:03.333261 ignition[1040]: INFO : Stage: umount Sep 16 04:31:03.333989 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 04:31:03.335673 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 04:31:03.335673 ignition[1040]: INFO : umount: umount passed Sep 16 04:31:03.335673 ignition[1040]: INFO : Ignition finished successfully Sep 16 04:31:03.338235 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 16 04:31:03.338756 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 16 04:31:03.345722 systemd[1]: Stopped target network.target - Network. Sep 16 04:31:03.346672 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 16 04:31:03.346729 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 16 04:31:03.347949 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 16 04:31:03.347988 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 16 04:31:03.349272 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 16 04:31:03.349313 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 16 04:31:03.350555 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 16 04:31:03.350601 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 16 04:31:03.351915 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 16 04:31:03.353183 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 16 04:31:03.363703 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 16 04:31:03.363806 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 16 04:31:03.377806 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 16 04:31:03.379250 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Sep 16 04:31:03.380092 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 16 04:31:03.382539 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 16 04:31:03.385130 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 16 04:31:03.386052 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 16 04:31:03.386090 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 16 04:31:03.389634 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 16 04:31:03.391660 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 16 04:31:03.391721 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 16 04:31:03.393182 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 16 04:31:03.393223 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:31:03.396966 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 16 04:31:03.397021 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 16 04:31:03.397948 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 16 04:31:03.397985 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 04:31:03.400310 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 04:31:03.403411 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 16 04:31:03.403466 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 16 04:31:03.403731 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 16 04:31:03.405633 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 16 04:31:03.408668 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 16 04:31:03.408754 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 16 04:31:03.417722 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 16 04:31:03.427288 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 04:31:03.428519 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 16 04:31:03.428554 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 16 04:31:03.429881 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 16 04:31:03.429911 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 04:31:03.431366 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 16 04:31:03.431409 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 16 04:31:03.433603 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 16 04:31:03.433648 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 16 04:31:03.435667 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 16 04:31:03.435714 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 16 04:31:03.438598 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 16 04:31:03.439974 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 16 04:31:03.440049 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Sep 16 04:31:03.442688 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 16 04:31:03.442730 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 04:31:03.445240 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 16 04:31:03.445279 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 16 04:31:03.447787 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 16 04:31:03.447828 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 04:31:03.449559 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:31:03.449606 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:31:03.453178 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 16 04:31:03.453228 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Sep 16 04:31:03.453256 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 16 04:31:03.453286 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 16 04:31:03.453523 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 16 04:31:03.455017 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 16 04:31:03.458117 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 16 04:31:03.458224 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 16 04:31:03.460125 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 16 04:31:03.461872 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 16 04:31:03.483717 systemd[1]: Switching root. Sep 16 04:31:03.520086 systemd-journald[244]: Journal stopped Sep 16 04:31:04.299609 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). Sep 16 04:31:04.299656 kernel: SELinux: policy capability network_peer_controls=1 Sep 16 04:31:04.299668 kernel: SELinux: policy capability open_perms=1 Sep 16 04:31:04.299678 kernel: SELinux: policy capability extended_socket_class=1 Sep 16 04:31:04.299690 kernel: SELinux: policy capability always_check_network=0 Sep 16 04:31:04.299701 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 16 04:31:04.299712 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 16 04:31:04.299722 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 16 04:31:04.299732 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 16 04:31:04.299741 kernel: SELinux: policy capability userspace_initial_context=0 Sep 16 04:31:04.299751 kernel: audit: type=1403 audit(1757997063.704:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 16 04:31:04.299765 systemd[1]: Successfully loaded SELinux policy in 51.660ms. Sep 16 04:31:04.299784 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.227ms. Sep 16 04:31:04.299795 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 16 04:31:04.299807 systemd[1]: Detected virtualization kvm. 
Sep 16 04:31:04.299818 systemd[1]: Detected architecture arm64. Sep 16 04:31:04.299829 systemd[1]: Detected first boot. Sep 16 04:31:04.299839 systemd[1]: Initializing machine ID from VM UUID. Sep 16 04:31:04.299849 zram_generator::config[1085]: No configuration found. Sep 16 04:31:04.299860 kernel: NET: Registered PF_VSOCK protocol family Sep 16 04:31:04.299873 systemd[1]: Populated /etc with preset unit settings. Sep 16 04:31:04.299883 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 16 04:31:04.299897 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 16 04:31:04.299907 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 16 04:31:04.299917 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 16 04:31:04.299927 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 16 04:31:04.299937 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 16 04:31:04.299946 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 16 04:31:04.299956 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 16 04:31:04.299967 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 16 04:31:04.299977 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 16 04:31:04.299989 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 16 04:31:04.300021 systemd[1]: Created slice user.slice - User and Session Slice. Sep 16 04:31:04.300032 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 04:31:04.300042 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 04:31:04.300055 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 16 04:31:04.300065 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 16 04:31:04.300075 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 16 04:31:04.300086 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 16 04:31:04.300095 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 16 04:31:04.300108 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 04:31:04.300118 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 16 04:31:04.300128 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 16 04:31:04.300138 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 16 04:31:04.300148 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 16 04:31:04.300158 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 16 04:31:04.300167 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 04:31:04.300177 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 16 04:31:04.300189 systemd[1]: Reached target slices.target - Slice Units. Sep 16 04:31:04.300199 systemd[1]: Reached target swap.target - Swaps. Sep 16 04:31:04.300209 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Sep 16 04:31:04.300219 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 16 04:31:04.300229 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 16 04:31:04.300238 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 16 04:31:04.300248 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 16 04:31:04.300258 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 04:31:04.300269 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 16 04:31:04.300283 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 16 04:31:04.300292 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 16 04:31:04.300302 systemd[1]: Mounting media.mount - External Media Directory... Sep 16 04:31:04.300312 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 16 04:31:04.300321 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 16 04:31:04.300331 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 16 04:31:04.300341 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 16 04:31:04.300351 systemd[1]: Reached target machines.target - Containers. Sep 16 04:31:04.300361 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 16 04:31:04.300373 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:31:04.300383 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 16 04:31:04.300394 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 16 04:31:04.300404 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 04:31:04.300415 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 16 04:31:04.300425 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 04:31:04.300435 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 16 04:31:04.300446 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 04:31:04.300458 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 16 04:31:04.300468 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 16 04:31:04.300478 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 16 04:31:04.300488 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 16 04:31:04.300498 systemd[1]: Stopped systemd-fsck-usr.service. Sep 16 04:31:04.300508 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:31:04.300518 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 16 04:31:04.300527 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Sep 16 04:31:04.300537 kernel: loop: module loaded Sep 16 04:31:04.300548 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 16 04:31:04.300558 kernel: fuse: init (API version 7.41) Sep 16 04:31:04.300567 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 16 04:31:04.300577 kernel: ACPI: bus type drm_connector registered Sep 16 04:31:04.300592 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 16 04:31:04.300604 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 16 04:31:04.300615 systemd[1]: verity-setup.service: Deactivated successfully. Sep 16 04:31:04.300625 systemd[1]: Stopped verity-setup.service. Sep 16 04:31:04.300635 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 16 04:31:04.300646 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 16 04:31:04.300656 systemd[1]: Mounted media.mount - External Media Directory. Sep 16 04:31:04.300667 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 16 04:31:04.300676 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 16 04:31:04.300687 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 16 04:31:04.300697 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 16 04:31:04.300729 systemd-journald[1153]: Collecting audit messages is disabled. Sep 16 04:31:04.300751 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 04:31:04.300762 systemd-journald[1153]: Journal started Sep 16 04:31:04.300783 systemd-journald[1153]: Runtime Journal (/run/log/journal/c046abd964494ab0a13b00937f914b8e) is 6M, max 48.5M, 42.4M free. Sep 16 04:31:04.077918 systemd[1]: Queued start job for default target multi-user.target. Sep 16 04:31:04.092905 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 16 04:31:04.093290 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 16 04:31:04.304013 systemd[1]: Started systemd-journald.service - Journal Service. Sep 16 04:31:04.304395 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 16 04:31:04.304566 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 16 04:31:04.305773 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 04:31:04.308043 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 04:31:04.309093 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 16 04:31:04.309242 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 16 04:31:04.310261 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 04:31:04.311091 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 04:31:04.312203 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 16 04:31:04.312357 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 16 04:31:04.313395 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 04:31:04.313550 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 04:31:04.314779 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 16 04:31:04.315937 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Sep 16 04:31:04.317230 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 16 04:31:04.318577 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 16 04:31:04.329401 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 16 04:31:04.331387 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 16 04:31:04.335149 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 16 04:31:04.335960 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 16 04:31:04.336005 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 16 04:31:04.337567 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 16 04:31:04.344740 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 16 04:31:04.345703 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:31:04.347058 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 16 04:31:04.348817 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 16 04:31:04.349953 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 16 04:31:04.350831 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 16 04:31:04.351964 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 16 04:31:04.358404 systemd-journald[1153]: Time spent on flushing to /var/log/journal/c046abd964494ab0a13b00937f914b8e is 22.483ms for 890 entries. Sep 16 04:31:04.358404 systemd-journald[1153]: System Journal (/var/log/journal/c046abd964494ab0a13b00937f914b8e) is 8M, max 195.6M, 187.6M free. Sep 16 04:31:04.389749 systemd-journald[1153]: Received client request to flush runtime journal. Sep 16 04:31:04.389788 kernel: loop0: detected capacity change from 0 to 207008 Sep 16 04:31:04.389800 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 16 04:31:04.353187 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:31:04.356649 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 16 04:31:04.360223 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 16 04:31:04.364027 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 04:31:04.365113 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 16 04:31:04.367827 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 16 04:31:04.383518 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:31:04.389664 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 16 04:31:04.390676 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Sep 16 04:31:04.390686 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Sep 16 04:31:04.391772 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Sep 16 04:31:04.393821 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 16 04:31:04.397069 kernel: loop1: detected capacity change from 0 to 100632 Sep 16 04:31:04.397186 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 16 04:31:04.400422 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 16 04:31:04.403120 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 16 04:31:04.422027 kernel: loop2: detected capacity change from 0 to 119368 Sep 16 04:31:04.431337 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 16 04:31:04.434024 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 16 04:31:04.436507 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 16 04:31:04.456028 kernel: loop3: detected capacity change from 0 to 207008 Sep 16 04:31:04.457666 systemd-tmpfiles[1223]: ACLs are not supported, ignoring. Sep 16 04:31:04.457904 systemd-tmpfiles[1223]: ACLs are not supported, ignoring. Sep 16 04:31:04.460819 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 04:31:04.462018 kernel: loop4: detected capacity change from 0 to 100632 Sep 16 04:31:04.467027 kernel: loop5: detected capacity change from 0 to 119368 Sep 16 04:31:04.469741 (sd-merge)[1225]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 16 04:31:04.470105 (sd-merge)[1225]: Merged extensions into '/usr'. Sep 16 04:31:04.475437 systemd[1]: Reload requested from client PID 1201 ('systemd-sysext') (unit systemd-sysext.service)... Sep 16 04:31:04.475457 systemd[1]: Reloading... Sep 16 04:31:04.528024 zram_generator::config[1252]: No configuration found. Sep 16 04:31:04.605635 ldconfig[1196]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 16 04:31:04.681094 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 16 04:31:04.681418 systemd[1]: Reloading finished in 205 ms. Sep 16 04:31:04.711497 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 16 04:31:04.714024 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 16 04:31:04.725117 systemd[1]: Starting ensure-sysext.service... Sep 16 04:31:04.726614 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 16 04:31:04.735162 systemd[1]: Reload requested from client PID 1288 ('systemctl') (unit ensure-sysext.service)... Sep 16 04:31:04.735176 systemd[1]: Reloading... Sep 16 04:31:04.739966 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 16 04:31:04.740324 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 16 04:31:04.740641 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 16 04:31:04.740930 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 16 04:31:04.741684 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 16 04:31:04.742089 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. 
Sep 16 04:31:04.742206 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Sep 16 04:31:04.744965 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot. Sep 16 04:31:04.745086 systemd-tmpfiles[1289]: Skipping /boot Sep 16 04:31:04.751401 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot. Sep 16 04:31:04.751495 systemd-tmpfiles[1289]: Skipping /boot Sep 16 04:31:04.777034 zram_generator::config[1316]: No configuration found. Sep 16 04:31:04.907169 systemd[1]: Reloading finished in 171 ms. Sep 16 04:31:04.918070 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 16 04:31:04.924045 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 04:31:04.933885 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 04:31:04.936101 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 16 04:31:04.937868 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 16 04:31:04.941542 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 16 04:31:04.944217 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 04:31:04.946884 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 16 04:31:04.952529 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:31:04.958950 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 04:31:04.962878 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 04:31:04.965194 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 04:31:04.966190 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:31:04.966296 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:31:04.969184 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 16 04:31:04.971081 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 16 04:31:04.972676 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 04:31:04.972811 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 04:31:04.974275 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 04:31:04.974423 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 04:31:04.976216 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 04:31:04.976363 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 04:31:04.977921 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 16 04:31:04.986631 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:31:04.987869 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 04:31:04.989941 systemd-udevd[1360]: Using default interface naming scheme 'v255'. 
Sep 16 04:31:04.990439 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 04:31:04.992641 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 04:31:04.993607 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:31:04.993764 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:31:04.996643 augenrules[1389]: No rules Sep 16 04:31:04.997682 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 16 04:31:05.000152 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 16 04:31:05.001842 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 04:31:05.002054 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 04:31:05.003409 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 04:31:05.003548 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 04:31:05.004955 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 04:31:05.005104 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 04:31:05.006440 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 04:31:05.006566 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 04:31:05.008013 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 04:31:05.011113 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 16 04:31:05.017736 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 16 04:31:05.025442 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 16 04:31:05.029024 systemd[1]: Finished ensure-sysext.service. Sep 16 04:31:05.040503 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 04:31:05.041533 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:31:05.043854 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 04:31:05.048952 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 16 04:31:05.067109 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 04:31:05.069413 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 04:31:05.070257 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:31:05.070301 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:31:05.071656 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 16 04:31:05.075663 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Sep 16 04:31:05.076512 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 16 04:31:05.077023 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 04:31:05.079027 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 04:31:05.080293 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 16 04:31:05.080452 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 16 04:31:05.089364 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 16 04:31:05.093689 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 04:31:05.093880 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 04:31:05.099781 augenrules[1435]: /sbin/augenrules: No change Sep 16 04:31:05.101615 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 04:31:05.101768 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 04:31:05.111722 augenrules[1471]: No rules Sep 16 04:31:05.114267 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 04:31:05.114459 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 04:31:05.118920 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 16 04:31:05.118978 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 16 04:31:05.130454 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 16 04:31:05.132736 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 16 04:31:05.157766 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 16 04:31:05.180205 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:31:05.216409 systemd-networkd[1444]: lo: Link UP Sep 16 04:31:05.216416 systemd-networkd[1444]: lo: Gained carrier Sep 16 04:31:05.217204 systemd-networkd[1444]: Enumeration completed Sep 16 04:31:05.217304 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 16 04:31:05.218127 systemd-networkd[1444]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:31:05.218139 systemd-networkd[1444]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:31:05.218656 systemd-networkd[1444]: eth0: Link UP Sep 16 04:31:05.218760 systemd-networkd[1444]: eth0: Gained carrier Sep 16 04:31:05.218781 systemd-networkd[1444]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:31:05.219524 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 16 04:31:05.221314 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Sep 16 04:31:05.231041 systemd-networkd[1444]: eth0: DHCPv4 address 10.0.0.83/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 16 04:31:05.240621 systemd-resolved[1355]: Positive Trust Anchors: Sep 16 04:31:05.240638 systemd-resolved[1355]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 16 04:31:05.240669 systemd-resolved[1355]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 16 04:31:05.248628 systemd-resolved[1355]: Defaulting to hostname 'linux'. Sep 16 04:31:05.250194 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 16 04:31:05.251037 systemd[1]: Reached target network.target - Network. Sep 16 04:31:05.252076 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 16 04:31:05.252936 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 16 04:31:05.254145 systemd-timesyncd[1451]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 16 04:31:05.254466 systemd-timesyncd[1451]: Initial clock synchronization to Tue 2025-09-16 04:31:05.373878 UTC. Sep 16 04:31:05.256168 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 16 04:31:05.257601 systemd[1]: Reached target time-set.target - System Time Set. Sep 16 04:31:05.268376 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:31:05.270289 systemd[1]: Reached target sysinit.target - System Initialization. Sep 16 04:31:05.271160 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 16 04:31:05.272088 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 16 04:31:05.273130 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 16 04:31:05.273974 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 16 04:31:05.274962 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 16 04:31:05.275823 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 16 04:31:05.275849 systemd[1]: Reached target paths.target - Path Units. Sep 16 04:31:05.276588 systemd[1]: Reached target timers.target - Timer Units. Sep 16 04:31:05.277984 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 16 04:31:05.279836 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 16 04:31:05.282336 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 16 04:31:05.283406 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 16 04:31:05.284367 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 16 04:31:05.286887 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Sep 16 04:31:05.288011 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 16 04:31:05.289322 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 16 04:31:05.290160 systemd[1]: Reached target sockets.target - Socket Units. Sep 16 04:31:05.290852 systemd[1]: Reached target basic.target - Basic System. Sep 16 04:31:05.291627 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 16 04:31:05.291654 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 16 04:31:05.292480 systemd[1]: Starting containerd.service - containerd container runtime... Sep 16 04:31:05.294109 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 16 04:31:05.295612 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 16 04:31:05.298089 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 16 04:31:05.299672 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 16 04:31:05.300605 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 16 04:31:05.301568 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 16 04:31:05.303186 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 16 04:31:05.303944 jq[1510]: false Sep 16 04:31:05.305761 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 16 04:31:05.309016 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 16 04:31:05.314120 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 16 04:31:05.315682 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 16 04:31:05.316099 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 16 04:31:05.316816 extend-filesystems[1511]: Found /dev/vda6 Sep 16 04:31:05.317732 systemd[1]: Starting update-engine.service - Update Engine... Sep 16 04:31:05.319952 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 16 04:31:05.323519 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 16 04:31:05.326058 extend-filesystems[1511]: Found /dev/vda9 Sep 16 04:31:05.325064 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 16 04:31:05.325238 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 16 04:31:05.325460 systemd[1]: motdgen.service: Deactivated successfully. Sep 16 04:31:05.326096 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 16 04:31:05.328225 extend-filesystems[1511]: Checking size of /dev/vda9 Sep 16 04:31:05.329401 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 16 04:31:05.329555 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 16 04:31:05.337467 jq[1529]: true Sep 16 04:31:05.340304 (ntainerd)[1535]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 16 04:31:05.349484 tar[1533]: linux-arm64/LICENSE Sep 16 04:31:05.349484 tar[1533]: linux-arm64/helm Sep 16 04:31:05.350488 extend-filesystems[1511]: Resized partition /dev/vda9 Sep 16 04:31:05.353054 extend-filesystems[1549]: resize2fs 1.47.3 (8-Jul-2025) Sep 16 04:31:05.360053 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 16 04:31:05.360345 update_engine[1526]: I20250916 04:31:05.360078 1526 main.cc:92] Flatcar Update Engine starting Sep 16 04:31:05.377177 dbus-daemon[1508]: [system] SELinux support is enabled Sep 16 04:31:05.380210 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 16 04:31:05.389534 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 16 04:31:05.389561 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 16 04:31:05.392159 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 16 04:31:05.392179 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 16 04:31:05.393426 update_engine[1526]: I20250916 04:31:05.393342 1526 update_check_scheduler.cc:74] Next update check in 2m8s Sep 16 04:31:05.393774 systemd[1]: Started update-engine.service - Update Engine. Sep 16 04:31:05.393947 systemd-logind[1521]: Watching system buttons on /dev/input/event0 (Power Button) Sep 16 04:31:05.395232 systemd-logind[1521]: New seat seat0. Sep 16 04:31:05.397856 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 16 04:31:05.399308 systemd[1]: Started systemd-logind.service - User Login Management. Sep 16 04:31:05.407232 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 16 04:31:05.418075 jq[1546]: true Sep 16 04:31:05.419138 extend-filesystems[1549]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 16 04:31:05.419138 extend-filesystems[1549]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 16 04:31:05.419138 extend-filesystems[1549]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 16 04:31:05.422671 extend-filesystems[1511]: Resized filesystem in /dev/vda9 Sep 16 04:31:05.422558 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 16 04:31:05.422817 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 16 04:31:05.452921 bash[1570]: Updated "/home/core/.ssh/authorized_keys" Sep 16 04:31:05.454157 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 16 04:31:05.457661 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Sep 16 04:31:05.470829 locksmithd[1552]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 16 04:31:05.538484 containerd[1535]: time="2025-09-16T04:31:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 16 04:31:05.541004 containerd[1535]: time="2025-09-16T04:31:05.539859280Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 16 04:31:05.550988 containerd[1535]: time="2025-09-16T04:31:05.550942800Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.2µs" Sep 16 04:31:05.550988 containerd[1535]: time="2025-09-16T04:31:05.550977920Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 16 04:31:05.550988 containerd[1535]: time="2025-09-16T04:31:05.551010560Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 16 04:31:05.551176 containerd[1535]: time="2025-09-16T04:31:05.551154960Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 16 04:31:05.551220 containerd[1535]: time="2025-09-16T04:31:05.551175480Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 16 04:31:05.551220 containerd[1535]: time="2025-09-16T04:31:05.551197920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 04:31:05.551266 containerd[1535]: time="2025-09-16T04:31:05.551245280Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 04:31:05.551266 containerd[1535]: time="2025-09-16T04:31:05.551263600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 04:31:05.551494 containerd[1535]: time="2025-09-16T04:31:05.551472440Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 04:31:05.551494 containerd[1535]: time="2025-09-16T04:31:05.551492720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 04:31:05.551545 containerd[1535]: time="2025-09-16T04:31:05.551503880Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 04:31:05.551545 containerd[1535]: time="2025-09-16T04:31:05.551511960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 16 04:31:05.551606 containerd[1535]: time="2025-09-16T04:31:05.551586120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 16 04:31:05.551792 containerd[1535]: time="2025-09-16T04:31:05.551770840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 16 04:31:05.551819 containerd[1535]: time="2025-09-16T04:31:05.551803080Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 16 04:31:05.551819 containerd[1535]: time="2025-09-16T04:31:05.551812720Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 16 04:31:05.551858 containerd[1535]: time="2025-09-16T04:31:05.551843040Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 16 04:31:05.552221 containerd[1535]: time="2025-09-16T04:31:05.552192240Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 16 04:31:05.552352 containerd[1535]: time="2025-09-16T04:31:05.552326040Z" level=info msg="metadata content store policy set" policy=shared Sep 16 04:31:05.555661 containerd[1535]: time="2025-09-16T04:31:05.555622800Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 16 04:31:05.555711 containerd[1535]: time="2025-09-16T04:31:05.555694160Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 16 04:31:05.555730 containerd[1535]: time="2025-09-16T04:31:05.555709920Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 16 04:31:05.555730 containerd[1535]: time="2025-09-16T04:31:05.555722280Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 16 04:31:05.555779 containerd[1535]: time="2025-09-16T04:31:05.555734320Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 16 04:31:05.555779 containerd[1535]: time="2025-09-16T04:31:05.555744720Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 16 04:31:05.555779 containerd[1535]: time="2025-09-16T04:31:05.555756880Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 16 04:31:05.555779 containerd[1535]: time="2025-09-16T04:31:05.555776080Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 16 04:31:05.555844 containerd[1535]: time="2025-09-16T04:31:05.555789080Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 16 04:31:05.555844 containerd[1535]: time="2025-09-16T04:31:05.555799040Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 16 04:31:05.555844 containerd[1535]: time="2025-09-16T04:31:05.555807760Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 16 04:31:05.555844 containerd[1535]: time="2025-09-16T04:31:05.555818960Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 16 04:31:05.555958 containerd[1535]: time="2025-09-16T04:31:05.555932520Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 16 04:31:05.555986 containerd[1535]: time="2025-09-16T04:31:05.555962840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 16 04:31:05.555986 containerd[1535]: time="2025-09-16T04:31:05.555982760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 16 
04:31:05.556087 containerd[1535]: time="2025-09-16T04:31:05.556066720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 16 04:31:05.556111 containerd[1535]: time="2025-09-16T04:31:05.556090160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 16 04:31:05.556135 containerd[1535]: time="2025-09-16T04:31:05.556116600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 16 04:31:05.556135 containerd[1535]: time="2025-09-16T04:31:05.556128480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 16 04:31:05.556173 containerd[1535]: time="2025-09-16T04:31:05.556138120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 16 04:31:05.556173 containerd[1535]: time="2025-09-16T04:31:05.556149840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 16 04:31:05.556173 containerd[1535]: time="2025-09-16T04:31:05.556160160Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 16 04:31:05.556173 containerd[1535]: time="2025-09-16T04:31:05.556172480Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 16 04:31:05.556417 containerd[1535]: time="2025-09-16T04:31:05.556394000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 16 04:31:05.556445 containerd[1535]: time="2025-09-16T04:31:05.556419560Z" level=info msg="Start snapshots syncer" Sep 16 04:31:05.556463 containerd[1535]: time="2025-09-16T04:31:05.556452160Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 16 04:31:05.557751 containerd[1535]: time="2025-09-16T04:31:05.557142840Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 16 04:31:05.557863 containerd[1535]: time="2025-09-16T04:31:05.557774880Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 16 04:31:05.558019 containerd[1535]: time="2025-09-16T04:31:05.557872880Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 16 04:31:05.558191 containerd[1535]: time="2025-09-16T04:31:05.558153840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 16 04:31:05.558224 containerd[1535]: time="2025-09-16T04:31:05.558210200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 16 04:31:05.558246 containerd[1535]: time="2025-09-16T04:31:05.558229160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 16 04:31:05.558246 containerd[1535]: time="2025-09-16T04:31:05.558240200Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 16 04:31:05.558290 containerd[1535]: time="2025-09-16T04:31:05.558254720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 16 04:31:05.558290 containerd[1535]: time="2025-09-16T04:31:05.558268880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 16 04:31:05.558290 containerd[1535]: time="2025-09-16T04:31:05.558282640Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 16 04:31:05.558337 containerd[1535]: time="2025-09-16T04:31:05.558318640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 16 04:31:05.558354 containerd[1535]: 
time="2025-09-16T04:31:05.558335440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 16 04:31:05.558371 containerd[1535]: time="2025-09-16T04:31:05.558350560Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 16 04:31:05.558409 containerd[1535]: time="2025-09-16T04:31:05.558391680Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:31:05.559829 containerd[1535]: time="2025-09-16T04:31:05.558411360Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:31:05.559829 containerd[1535]: time="2025-09-16T04:31:05.558447560Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:31:05.559829 containerd[1535]: time="2025-09-16T04:31:05.558481120Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:31:05.559829 containerd[1535]: time="2025-09-16T04:31:05.558491160Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 16 04:31:05.559829 containerd[1535]: time="2025-09-16T04:31:05.558548080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 16 04:31:05.559829 containerd[1535]: time="2025-09-16T04:31:05.558697280Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 16 04:31:05.559829 containerd[1535]: time="2025-09-16T04:31:05.558828000Z" level=info msg="runtime interface created" Sep 16 04:31:05.559829 containerd[1535]: time="2025-09-16T04:31:05.558836560Z" level=info msg="created NRI interface" Sep 16 04:31:05.559829 containerd[1535]: time="2025-09-16T04:31:05.558851480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 16 04:31:05.559829 containerd[1535]: time="2025-09-16T04:31:05.558865560Z" level=info msg="Connect containerd service" Sep 16 04:31:05.559829 containerd[1535]: time="2025-09-16T04:31:05.558896640Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 16 04:31:05.560053 containerd[1535]: time="2025-09-16T04:31:05.559863600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 04:31:05.627552 containerd[1535]: time="2025-09-16T04:31:05.627389200Z" level=info msg="Start subscribing containerd event" Sep 16 04:31:05.627967 containerd[1535]: time="2025-09-16T04:31:05.627568640Z" level=info msg="Start recovering state" Sep 16 04:31:05.628070 containerd[1535]: time="2025-09-16T04:31:05.628052680Z" level=info msg="Start event monitor" Sep 16 04:31:05.628098 containerd[1535]: time="2025-09-16T04:31:05.628078480Z" level=info msg="Start cni network conf syncer for default" Sep 16 04:31:05.628098 containerd[1535]: time="2025-09-16T04:31:05.628090840Z" level=info msg="Start streaming server" Sep 16 04:31:05.628132 containerd[1535]: time="2025-09-16T04:31:05.628099960Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 16 04:31:05.628132 containerd[1535]: 
time="2025-09-16T04:31:05.628107240Z" level=info msg="runtime interface starting up..." Sep 16 04:31:05.628132 containerd[1535]: time="2025-09-16T04:31:05.628113200Z" level=info msg="starting plugins..." Sep 16 04:31:05.628132 containerd[1535]: time="2025-09-16T04:31:05.628127000Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 16 04:31:05.628481 containerd[1535]: time="2025-09-16T04:31:05.628404920Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 16 04:31:05.628541 containerd[1535]: time="2025-09-16T04:31:05.628521440Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 16 04:31:05.629116 systemd[1]: Started containerd.service - containerd container runtime. Sep 16 04:31:05.629207 containerd[1535]: time="2025-09-16T04:31:05.629127760Z" level=info msg="containerd successfully booted in 0.090997s" Sep 16 04:31:05.649442 tar[1533]: linux-arm64/README.md Sep 16 04:31:05.666026 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 16 04:31:06.166450 sshd_keygen[1530]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 16 04:31:06.186024 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 16 04:31:06.188981 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 16 04:31:06.208173 systemd[1]: issuegen.service: Deactivated successfully. Sep 16 04:31:06.208370 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 16 04:31:06.211362 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 16 04:31:06.242399 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 16 04:31:06.245074 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 16 04:31:06.246964 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 16 04:31:06.248161 systemd[1]: Reached target getty.target - Login Prompts. Sep 16 04:31:07.283077 systemd-networkd[1444]: eth0: Gained IPv6LL Sep 16 04:31:07.286422 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 16 04:31:07.289156 systemd[1]: Reached target network-online.target - Network is Online. Sep 16 04:31:07.291805 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 16 04:31:07.294129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:31:07.305842 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 16 04:31:07.329510 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 16 04:31:07.331237 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 16 04:31:07.331467 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 16 04:31:07.333446 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 16 04:31:07.891579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:31:07.893054 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 16 04:31:07.895058 (kubelet)[1641]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:31:07.898173 systemd[1]: Startup finished in 2.026s (kernel) + 6.087s (initrd) + 4.245s (userspace) = 12.359s. 
Sep 16 04:31:08.276128 kubelet[1641]: E0916 04:31:08.275974 1641 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:31:08.279774 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:31:08.279930 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:31:08.280259 systemd[1]: kubelet.service: Consumed 742ms CPU time, 256.7M memory peak. Sep 16 04:31:10.060465 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 16 04:31:10.061540 systemd[1]: Started sshd@0-10.0.0.83:22-10.0.0.1:40614.service - OpenSSH per-connection server daemon (10.0.0.1:40614). Sep 16 04:31:10.137597 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 40614 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:31:10.139343 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:31:10.145535 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 16 04:31:10.146470 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 16 04:31:10.153613 systemd-logind[1521]: New session 1 of user core. Sep 16 04:31:10.168788 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 16 04:31:10.171667 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 16 04:31:10.186156 (systemd)[1659]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 16 04:31:10.188713 systemd-logind[1521]: New session c1 of user core. Sep 16 04:31:10.298320 systemd[1659]: Queued start job for default target default.target. Sep 16 04:31:10.316981 systemd[1659]: Created slice app.slice - User Application Slice. Sep 16 04:31:10.317032 systemd[1659]: Reached target paths.target - Paths. Sep 16 04:31:10.317072 systemd[1659]: Reached target timers.target - Timers. Sep 16 04:31:10.318270 systemd[1659]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 16 04:31:10.327584 systemd[1659]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 16 04:31:10.327644 systemd[1659]: Reached target sockets.target - Sockets. Sep 16 04:31:10.327680 systemd[1659]: Reached target basic.target - Basic System. Sep 16 04:31:10.327706 systemd[1659]: Reached target default.target - Main User Target. Sep 16 04:31:10.327731 systemd[1659]: Startup finished in 132ms. Sep 16 04:31:10.327844 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 16 04:31:10.329096 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 16 04:31:10.397407 systemd[1]: Started sshd@1-10.0.0.83:22-10.0.0.1:40622.service - OpenSSH per-connection server daemon (10.0.0.1:40622). Sep 16 04:31:10.446414 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 40622 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:31:10.447651 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:31:10.452067 systemd-logind[1521]: New session 2 of user core. Sep 16 04:31:10.466181 systemd[1]: Started session-2.scope - Session 2 of User core. 
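Note: the kubelet exit just above is the expected first-boot state on a node provisioned this way: /var/lib/kubelet/config.yaml is normally written by kubeadm init or kubeadm join, so the unit fails and systemd keeps restarting it until that happens (the same failure repeats later in this log). Purely for orientation, a hand-written KubeletConfiguration of the kind that file carries could look like the sketch below; every value is an assumption except cgroupDriver and the static pod path, which match what the kubelet reports further down once it does start:

    # hypothetical sketch of /var/lib/kubelet/config.yaml; on this node the real
    # file is expected to come from kubeadm rather than being written by hand
    sudo tee /var/lib/kubelet/config.yaml >/dev/null <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      anonymous:
        enabled: false
    EOF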
Sep 16 04:31:10.518454 sshd[1673]: Connection closed by 10.0.0.1 port 40622 Sep 16 04:31:10.518940 sshd-session[1670]: pam_unix(sshd:session): session closed for user core Sep 16 04:31:10.531132 systemd[1]: sshd@1-10.0.0.83:22-10.0.0.1:40622.service: Deactivated successfully. Sep 16 04:31:10.532647 systemd[1]: session-2.scope: Deactivated successfully. Sep 16 04:31:10.533340 systemd-logind[1521]: Session 2 logged out. Waiting for processes to exit. Sep 16 04:31:10.535281 systemd[1]: Started sshd@2-10.0.0.83:22-10.0.0.1:40628.service - OpenSSH per-connection server daemon (10.0.0.1:40628). Sep 16 04:31:10.536160 systemd-logind[1521]: Removed session 2. Sep 16 04:31:10.589430 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 40628 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:31:10.590621 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:31:10.594617 systemd-logind[1521]: New session 3 of user core. Sep 16 04:31:10.604235 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 16 04:31:10.651778 sshd[1682]: Connection closed by 10.0.0.1 port 40628 Sep 16 04:31:10.652255 sshd-session[1679]: pam_unix(sshd:session): session closed for user core Sep 16 04:31:10.667338 systemd[1]: sshd@2-10.0.0.83:22-10.0.0.1:40628.service: Deactivated successfully. Sep 16 04:31:10.668906 systemd[1]: session-3.scope: Deactivated successfully. Sep 16 04:31:10.671539 systemd-logind[1521]: Session 3 logged out. Waiting for processes to exit. Sep 16 04:31:10.673569 systemd[1]: Started sshd@3-10.0.0.83:22-10.0.0.1:40630.service - OpenSSH per-connection server daemon (10.0.0.1:40630). Sep 16 04:31:10.674523 systemd-logind[1521]: Removed session 3. Sep 16 04:31:10.738368 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 40630 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:31:10.740184 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:31:10.744479 systemd-logind[1521]: New session 4 of user core. Sep 16 04:31:10.751209 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 16 04:31:10.805587 sshd[1691]: Connection closed by 10.0.0.1 port 40630 Sep 16 04:31:10.805455 sshd-session[1688]: pam_unix(sshd:session): session closed for user core Sep 16 04:31:10.816477 systemd[1]: sshd@3-10.0.0.83:22-10.0.0.1:40630.service: Deactivated successfully. Sep 16 04:31:10.820074 systemd[1]: session-4.scope: Deactivated successfully. Sep 16 04:31:10.821031 systemd-logind[1521]: Session 4 logged out. Waiting for processes to exit. Sep 16 04:31:10.823897 systemd[1]: Started sshd@4-10.0.0.83:22-10.0.0.1:40638.service - OpenSSH per-connection server daemon (10.0.0.1:40638). Sep 16 04:31:10.824727 systemd-logind[1521]: Removed session 4. Sep 16 04:31:10.878862 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 40638 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:31:10.880109 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:31:10.884456 systemd-logind[1521]: New session 5 of user core. Sep 16 04:31:10.891210 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 16 04:31:10.948964 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 16 04:31:10.949238 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:31:10.964864 sudo[1701]: pam_unix(sudo:session): session closed for user root Sep 16 04:31:10.966359 sshd[1700]: Connection closed by 10.0.0.1 port 40638 Sep 16 04:31:10.966670 sshd-session[1697]: pam_unix(sshd:session): session closed for user core Sep 16 04:31:10.977314 systemd[1]: sshd@4-10.0.0.83:22-10.0.0.1:40638.service: Deactivated successfully. Sep 16 04:31:10.978804 systemd[1]: session-5.scope: Deactivated successfully. Sep 16 04:31:10.980620 systemd-logind[1521]: Session 5 logged out. Waiting for processes to exit. Sep 16 04:31:10.985260 systemd[1]: Started sshd@5-10.0.0.83:22-10.0.0.1:40650.service - OpenSSH per-connection server daemon (10.0.0.1:40650). Sep 16 04:31:10.985974 systemd-logind[1521]: Removed session 5. Sep 16 04:31:11.036706 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 40650 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:31:11.037977 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:31:11.042390 systemd-logind[1521]: New session 6 of user core. Sep 16 04:31:11.051193 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 16 04:31:11.104727 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 16 04:31:11.105040 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:31:11.123081 sudo[1712]: pam_unix(sudo:session): session closed for user root Sep 16 04:31:11.127976 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 16 04:31:11.128524 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:31:11.139345 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 04:31:11.170879 augenrules[1734]: No rules Sep 16 04:31:11.172068 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 04:31:11.173106 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 04:31:11.174261 sudo[1711]: pam_unix(sudo:session): session closed for user root Sep 16 04:31:11.176070 sshd[1710]: Connection closed by 10.0.0.1 port 40650 Sep 16 04:31:11.176119 sshd-session[1707]: pam_unix(sshd:session): session closed for user core Sep 16 04:31:11.184935 systemd[1]: sshd@5-10.0.0.83:22-10.0.0.1:40650.service: Deactivated successfully. Sep 16 04:31:11.187384 systemd[1]: session-6.scope: Deactivated successfully. Sep 16 04:31:11.188078 systemd-logind[1521]: Session 6 logged out. Waiting for processes to exit. Sep 16 04:31:11.190487 systemd[1]: Started sshd@6-10.0.0.83:22-10.0.0.1:40656.service - OpenSSH per-connection server daemon (10.0.0.1:40656). Sep 16 04:31:11.191020 systemd-logind[1521]: Removed session 6. Sep 16 04:31:11.250725 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 40656 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:31:11.252761 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:31:11.256805 systemd-logind[1521]: New session 7 of user core. Sep 16 04:31:11.272170 systemd[1]: Started session-7.scope - Session 7 of User core. 
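Note: the sudo sequence above removes the shipped SELinux and default audit rule files and restarts audit-rules, after which augenrules reports an empty ruleset ("No rules"). Confirming that state by hand uses the standard audit tooling (paths copied from the log; the commands themselves are a generic sketch):

    sudo ls /etc/audit/rules.d/   # remaining rule fragments, if any
    sudo augenrules --check       # does rules.d still match the compiled rules?
    sudo auditctl -l              # rules currently loaded in the kernel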
Sep 16 04:31:11.324069 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 16 04:31:11.324647 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:31:11.605812 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 16 04:31:11.628405 (dockerd)[1768]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 16 04:31:11.829625 dockerd[1768]: time="2025-09-16T04:31:11.829564373Z" level=info msg="Starting up" Sep 16 04:31:11.830441 dockerd[1768]: time="2025-09-16T04:31:11.830419733Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 16 04:31:11.840307 dockerd[1768]: time="2025-09-16T04:31:11.840260741Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 16 04:31:11.852761 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2558914954-merged.mount: Deactivated successfully. Sep 16 04:31:11.961035 dockerd[1768]: time="2025-09-16T04:31:11.960742735Z" level=info msg="Loading containers: start." Sep 16 04:31:11.969033 kernel: Initializing XFRM netlink socket Sep 16 04:31:12.145274 systemd-networkd[1444]: docker0: Link UP Sep 16 04:31:12.148482 dockerd[1768]: time="2025-09-16T04:31:12.148443565Z" level=info msg="Loading containers: done." Sep 16 04:31:12.160917 dockerd[1768]: time="2025-09-16T04:31:12.160870240Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 16 04:31:12.161075 dockerd[1768]: time="2025-09-16T04:31:12.160952469Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 16 04:31:12.161075 dockerd[1768]: time="2025-09-16T04:31:12.161056645Z" level=info msg="Initializing buildkit" Sep 16 04:31:12.181132 dockerd[1768]: time="2025-09-16T04:31:12.181093362Z" level=info msg="Completed buildkit initialization" Sep 16 04:31:12.185748 dockerd[1768]: time="2025-09-16T04:31:12.185710384Z" level=info msg="Daemon has completed initialization" Sep 16 04:31:12.185989 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 16 04:31:12.186793 dockerd[1768]: time="2025-09-16T04:31:12.185802438Z" level=info msg="API listen on /run/docker.sock" Sep 16 04:31:12.814014 containerd[1535]: time="2025-09-16T04:31:12.813959168Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 16 04:31:12.850204 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3957477567-merged.mount: Deactivated successfully. Sep 16 04:31:13.351807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3325222140.mount: Deactivated successfully. 
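Note: from this point the log is mostly containerd pulling the control-plane images into its k8s.io namespace, starting with kube-apiserver:v1.32.9 above. Reproducing or inspecting such a pull by hand would look roughly like this (image reference copied from the log; the commands are illustrative, not taken from this session):

    # directly against containerd
    sudo ctr -n k8s.io images pull registry.k8s.io/kube-apiserver:v1.32.9
    sudo ctr -n k8s.io images ls | grep kube-apiserver
    # or through the CRI API, which is the path the kubelet itself uses
    sudo crictl pull registry.k8s.io/kube-apiserver:v1.32.9
    sudo crictl images registry.k8s.io/kube-apiserver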
Sep 16 04:31:14.743991 containerd[1535]: time="2025-09-16T04:31:14.743348116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:31:14.743991 containerd[1535]: time="2025-09-16T04:31:14.743748043Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363687" Sep 16 04:31:14.744709 containerd[1535]: time="2025-09-16T04:31:14.744681823Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:31:14.747075 containerd[1535]: time="2025-09-16T04:31:14.747039814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:31:14.748070 containerd[1535]: time="2025-09-16T04:31:14.748035873Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.933962557s" Sep 16 04:31:14.748119 containerd[1535]: time="2025-09-16T04:31:14.748077325Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Sep 16 04:31:14.748826 containerd[1535]: time="2025-09-16T04:31:14.748802316Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 16 04:31:15.845339 containerd[1535]: time="2025-09-16T04:31:15.845277154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:31:15.845965 containerd[1535]: time="2025-09-16T04:31:15.845925500Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531202" Sep 16 04:31:15.846590 containerd[1535]: time="2025-09-16T04:31:15.846549537Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:31:15.849338 containerd[1535]: time="2025-09-16T04:31:15.849306292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:31:15.850219 containerd[1535]: time="2025-09-16T04:31:15.850187843Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.101356584s" Sep 16 04:31:15.850254 containerd[1535]: time="2025-09-16T04:31:15.850220108Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Sep 16 04:31:15.850950 
containerd[1535]: time="2025-09-16T04:31:15.850632557Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 16 04:31:17.096533 containerd[1535]: time="2025-09-16T04:31:17.096485405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:31:17.097404 containerd[1535]: time="2025-09-16T04:31:17.097373174Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484326" Sep 16 04:31:17.098503 containerd[1535]: time="2025-09-16T04:31:17.098048494Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:31:17.101061 containerd[1535]: time="2025-09-16T04:31:17.101029655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:31:17.102040 containerd[1535]: time="2025-09-16T04:31:17.102014839Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.251351433s" Sep 16 04:31:17.102131 containerd[1535]: time="2025-09-16T04:31:17.102117472Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Sep 16 04:31:17.102766 containerd[1535]: time="2025-09-16T04:31:17.102739890Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 16 04:31:18.086064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount982659210.mount: Deactivated successfully. Sep 16 04:31:18.282432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 16 04:31:18.284118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
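Note: "Scheduled restart job, restart counter is at 1" is systemd re-launching the failed kubelet unit under its Restart= policy; the attempt that follows fails for the same missing-config reason as before. Inspecting the policy and counter on such a node is straightforward (a generic sketch, not output captured from this system):

    systemctl show kubelet -p Restart -p RestartUSec -p NRestarts
    systemctl cat kubelet   # unit file plus drop-ins supplying KUBELET_* variables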
Sep 16 04:31:18.532039 containerd[1535]: time="2025-09-16T04:31:18.531804343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:31:18.532583 containerd[1535]: time="2025-09-16T04:31:18.532197405Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417819" Sep 16 04:31:18.533056 containerd[1535]: time="2025-09-16T04:31:18.532988665Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:31:18.535103 containerd[1535]: time="2025-09-16T04:31:18.535038870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:31:18.535465 containerd[1535]: time="2025-09-16T04:31:18.535436104Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.432663626s" Sep 16 04:31:18.535465 containerd[1535]: time="2025-09-16T04:31:18.535460658Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Sep 16 04:31:18.535962 containerd[1535]: time="2025-09-16T04:31:18.535935446Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 16 04:31:18.552522 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:31:18.556364 (kubelet)[2070]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:31:18.592767 kubelet[2070]: E0916 04:31:18.592699 2070 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:31:18.595772 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:31:18.595906 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:31:18.596416 systemd[1]: kubelet.service: Consumed 141ms CPU time, 108.1M memory peak. Sep 16 04:31:19.056626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3603097190.mount: Deactivated successfully. 
Sep 16 04:31:19.843538 containerd[1535]: time="2025-09-16T04:31:19.843491806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:31:19.844440 containerd[1535]: time="2025-09-16T04:31:19.844407656Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Sep 16 04:31:19.845116 containerd[1535]: time="2025-09-16T04:31:19.845072005Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:31:19.848091 containerd[1535]: time="2025-09-16T04:31:19.848044148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:31:19.851308 containerd[1535]: time="2025-09-16T04:31:19.851081262Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.31511204s" Sep 16 04:31:19.851308 containerd[1535]: time="2025-09-16T04:31:19.851127143Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 16 04:31:19.852016 containerd[1535]: time="2025-09-16T04:31:19.851599507Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 16 04:31:20.253020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2567773611.mount: Deactivated successfully. 
Sep 16 04:31:20.257439 containerd[1535]: time="2025-09-16T04:31:20.257374135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:31:20.258123 containerd[1535]: time="2025-09-16T04:31:20.258094194Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 16 04:31:20.258952 containerd[1535]: time="2025-09-16T04:31:20.258912118Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:31:20.261319 containerd[1535]: time="2025-09-16T04:31:20.261279492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:31:20.262284 containerd[1535]: time="2025-09-16T04:31:20.262248445Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 410.369566ms" Sep 16 04:31:20.262284 containerd[1535]: time="2025-09-16T04:31:20.262280438Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 16 04:31:20.262747 containerd[1535]: time="2025-09-16T04:31:20.262718247Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 16 04:31:20.725484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2696102518.mount: Deactivated successfully. 
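Note: pause:3.10 pulled above is the sandbox ("pod infra") container image; its ImageCreate events carry the io.cri-containerd.pinned label, so containerd will not garbage-collect it, which lines up with the kubelet's later remark that sandbox image information now comes from the CRI. A hedged way to look at it after the fact (exact output fields vary with the crictl and containerd versions in use):

    sudo ctr -n k8s.io images ls | grep pause
    sudo crictl inspecti registry.k8s.io/pause:3.10   # recent versions report a pinned flag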
Sep 16 04:31:22.741866 containerd[1535]: time="2025-09-16T04:31:22.741802747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:31:22.742988 containerd[1535]: time="2025-09-16T04:31:22.742945764Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167" Sep 16 04:31:22.743259 containerd[1535]: time="2025-09-16T04:31:22.743223334Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:31:22.746546 containerd[1535]: time="2025-09-16T04:31:22.746485011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:31:22.748495 containerd[1535]: time="2025-09-16T04:31:22.748456690Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.485604659s" Sep 16 04:31:22.748538 containerd[1535]: time="2025-09-16T04:31:22.748493956Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 16 04:31:27.211394 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:31:27.211534 systemd[1]: kubelet.service: Consumed 141ms CPU time, 108.1M memory peak. Sep 16 04:31:27.213250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:31:27.231102 systemd[1]: Reload requested from client PID 2220 ('systemctl') (unit session-7.scope)... Sep 16 04:31:27.231203 systemd[1]: Reloading... Sep 16 04:31:27.290067 zram_generator::config[2262]: No configuration found. Sep 16 04:31:27.456122 systemd[1]: Reloading finished in 224 ms. Sep 16 04:31:27.505368 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:31:27.507338 systemd[1]: kubelet.service: Deactivated successfully. Sep 16 04:31:27.507658 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:31:27.507765 systemd[1]: kubelet.service: Consumed 90ms CPU time, 95.3M memory peak. Sep 16 04:31:27.510121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:31:27.617566 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:31:27.620889 (kubelet)[2309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:31:27.651385 kubelet[2309]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:31:27.651385 kubelet[2309]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 16 04:31:27.651385 kubelet[2309]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:31:27.652694 kubelet[2309]: I0916 04:31:27.651978 2309 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:31:28.371026 kubelet[2309]: I0916 04:31:28.370174 2309 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 16 04:31:28.371026 kubelet[2309]: I0916 04:31:28.370203 2309 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:31:28.371026 kubelet[2309]: I0916 04:31:28.370478 2309 server.go:954] "Client rotation is on, will bootstrap in background" Sep 16 04:31:28.396513 kubelet[2309]: E0916 04:31:28.396480 2309 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.83:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:31:28.397419 kubelet[2309]: I0916 04:31:28.397391 2309 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:31:28.403581 kubelet[2309]: I0916 04:31:28.403560 2309 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:31:28.406197 kubelet[2309]: I0916 04:31:28.406163 2309 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 16 04:31:28.407309 kubelet[2309]: I0916 04:31:28.407260 2309 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:31:28.407484 kubelet[2309]: I0916 04:31:28.407301 2309 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 04:31:28.407575 kubelet[2309]: I0916 04:31:28.407548 2309 
topology_manager.go:138] "Creating topology manager with none policy" Sep 16 04:31:28.407575 kubelet[2309]: I0916 04:31:28.407558 2309 container_manager_linux.go:304] "Creating device plugin manager" Sep 16 04:31:28.407742 kubelet[2309]: I0916 04:31:28.407727 2309 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:31:28.410043 kubelet[2309]: I0916 04:31:28.410019 2309 kubelet.go:446] "Attempting to sync node with API server" Sep 16 04:31:28.410043 kubelet[2309]: I0916 04:31:28.410041 2309 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:31:28.410105 kubelet[2309]: I0916 04:31:28.410065 2309 kubelet.go:352] "Adding apiserver pod source" Sep 16 04:31:28.410105 kubelet[2309]: I0916 04:31:28.410087 2309 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:31:28.413661 kubelet[2309]: W0916 04:31:28.413523 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Sep 16 04:31:28.413661 kubelet[2309]: E0916 04:31:28.413580 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:31:28.413832 kubelet[2309]: I0916 04:31:28.413816 2309 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:31:28.414432 kubelet[2309]: I0916 04:31:28.414415 2309 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 16 04:31:28.414665 kubelet[2309]: W0916 04:31:28.414653 2309 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
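Note: the deprecation warnings above mean --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir are still being passed as command-line flags, typically via a systemd drop-in, rather than through the config file the kubelet now prefers. A hedged way to see where the flags come from, plus the config-file equivalent for the runtime endpoint (the other two flags are left alone here because their mapping is not shown in this log):

    systemctl cat kubelet | grep -E 'KUBELET_|container-runtime-endpoint'
    # KubeletConfiguration equivalent of --container-runtime-endpoint:
    #   containerRuntimeEndpoint: unix:///run/containerd/containerd.sock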
Sep 16 04:31:28.414830 kubelet[2309]: W0916 04:31:28.414770 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Sep 16 04:31:28.414872 kubelet[2309]: E0916 04:31:28.414850 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:31:28.415873 kubelet[2309]: I0916 04:31:28.415854 2309 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 16 04:31:28.416557 kubelet[2309]: I0916 04:31:28.416542 2309 server.go:1287] "Started kubelet" Sep 16 04:31:28.416902 kubelet[2309]: I0916 04:31:28.416860 2309 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 04:31:28.418495 kubelet[2309]: I0916 04:31:28.418474 2309 server.go:479] "Adding debug handlers to kubelet server" Sep 16 04:31:28.420623 kubelet[2309]: I0916 04:31:28.420535 2309 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 04:31:28.421797 kubelet[2309]: I0916 04:31:28.421776 2309 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:31:28.422050 kubelet[2309]: I0916 04:31:28.421984 2309 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:31:28.422219 kubelet[2309]: I0916 04:31:28.422200 2309 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:31:28.422452 kubelet[2309]: E0916 04:31:28.422357 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:31:28.422452 kubelet[2309]: I0916 04:31:28.422386 2309 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 16 04:31:28.422609 kubelet[2309]: E0916 04:31:28.422066 2309 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.83:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.83:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1865a909a33cdcdd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-16 04:31:28.416521437 +0000 UTC m=+0.793001918,LastTimestamp:2025-09-16 04:31:28.416521437 +0000 UTC m=+0.793001918,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 16 04:31:28.422609 kubelet[2309]: I0916 04:31:28.422546 2309 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 16 04:31:28.422609 kubelet[2309]: I0916 04:31:28.422596 2309 reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:31:28.423956 kubelet[2309]: W0916 04:31:28.422843 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Sep 16 04:31:28.423956 kubelet[2309]: E0916 04:31:28.422889 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:31:28.423956 kubelet[2309]: E0916 04:31:28.423199 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="200ms" Sep 16 04:31:28.423956 kubelet[2309]: I0916 04:31:28.423214 2309 factory.go:221] Registration of the systemd container factory successfully Sep 16 04:31:28.423956 kubelet[2309]: I0916 04:31:28.423290 2309 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 04:31:28.424370 kubelet[2309]: E0916 04:31:28.424350 2309 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 04:31:28.424421 kubelet[2309]: I0916 04:31:28.424397 2309 factory.go:221] Registration of the containerd container factory successfully Sep 16 04:31:28.435319 kubelet[2309]: I0916 04:31:28.434718 2309 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 16 04:31:28.435319 kubelet[2309]: I0916 04:31:28.435321 2309 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 16 04:31:28.435400 kubelet[2309]: I0916 04:31:28.435347 2309 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:31:28.436605 kubelet[2309]: I0916 04:31:28.436572 2309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 16 04:31:28.437729 kubelet[2309]: I0916 04:31:28.437698 2309 policy_none.go:49] "None policy: Start" Sep 16 04:31:28.437729 kubelet[2309]: I0916 04:31:28.437723 2309 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 16 04:31:28.437729 kubelet[2309]: I0916 04:31:28.437734 2309 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:31:28.437903 kubelet[2309]: I0916 04:31:28.437886 2309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 16 04:31:28.437975 kubelet[2309]: I0916 04:31:28.437965 2309 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 16 04:31:28.438123 kubelet[2309]: I0916 04:31:28.438108 2309 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 16 04:31:28.438224 kubelet[2309]: I0916 04:31:28.438212 2309 kubelet.go:2382] "Starting kubelet main sync loop" Sep 16 04:31:28.438309 kubelet[2309]: E0916 04:31:28.438294 2309 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 04:31:28.438853 kubelet[2309]: W0916 04:31:28.438818 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Sep 16 04:31:28.439027 kubelet[2309]: E0916 04:31:28.438989 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:31:28.442429 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 16 04:31:28.461138 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 16 04:31:28.463865 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 16 04:31:28.486707 kubelet[2309]: I0916 04:31:28.486684 2309 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 16 04:31:28.487024 kubelet[2309]: I0916 04:31:28.486870 2309 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:31:28.487024 kubelet[2309]: I0916 04:31:28.486886 2309 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:31:28.487173 kubelet[2309]: I0916 04:31:28.487151 2309 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:31:28.487847 kubelet[2309]: E0916 04:31:28.487810 2309 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 16 04:31:28.487903 kubelet[2309]: E0916 04:31:28.487871 2309 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 16 04:31:28.547373 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. Sep 16 04:31:28.583174 kubelet[2309]: E0916 04:31:28.583138 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 16 04:31:28.586364 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. 
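Note: the kubepods-burstable-pod*.slice units created above are the kubelet setting up cgroups for the static control-plane pods it found under its static pod path (/etc/kubernetes/manifests, per the "Adding static pod path" entry earlier); the "No need to create a mirror pod" errors are likewise transient until the node object exists in the API. Listing the manifests the kubelet is acting on would look like this (file names shown are the kubeadm defaults, an assumption rather than something read from this log):

    ls /etc/kubernetes/manifests/
    # typically: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml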
Sep 16 04:31:28.587847 kubelet[2309]: I0916 04:31:28.587821 2309 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 16 04:31:28.588256 kubelet[2309]: E0916 04:31:28.588231 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" Sep 16 04:31:28.599032 kubelet[2309]: E0916 04:31:28.598988 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 16 04:31:28.601220 systemd[1]: Created slice kubepods-burstable-pod762611bf5d28ac5b2b7a323ac37ffec4.slice - libcontainer container kubepods-burstable-pod762611bf5d28ac5b2b7a323ac37ffec4.slice. Sep 16 04:31:28.602840 kubelet[2309]: E0916 04:31:28.602820 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 16 04:31:28.624382 kubelet[2309]: I0916 04:31:28.624186 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:31:28.624382 kubelet[2309]: I0916 04:31:28.624216 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 16 04:31:28.624382 kubelet[2309]: I0916 04:31:28.624235 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/762611bf5d28ac5b2b7a323ac37ffec4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"762611bf5d28ac5b2b7a323ac37ffec4\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:31:28.624382 kubelet[2309]: I0916 04:31:28.624249 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/762611bf5d28ac5b2b7a323ac37ffec4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"762611bf5d28ac5b2b7a323ac37ffec4\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:31:28.624382 kubelet[2309]: I0916 04:31:28.624271 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:31:28.624527 kubelet[2309]: I0916 04:31:28.624286 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:31:28.624527 kubelet[2309]: E0916 04:31:28.624285 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="400ms" Sep 16 04:31:28.624527 kubelet[2309]: I0916 04:31:28.624306 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:31:28.624527 kubelet[2309]: I0916 04:31:28.624322 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:31:28.624527 kubelet[2309]: I0916 04:31:28.624336 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/762611bf5d28ac5b2b7a323ac37ffec4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"762611bf5d28ac5b2b7a323ac37ffec4\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:31:28.789957 kubelet[2309]: I0916 04:31:28.789926 2309 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 16 04:31:28.790389 kubelet[2309]: E0916 04:31:28.790284 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" Sep 16 04:31:28.884115 kubelet[2309]: E0916 04:31:28.884019 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:28.885097 containerd[1535]: time="2025-09-16T04:31:28.884685255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}" Sep 16 04:31:28.900104 kubelet[2309]: E0916 04:31:28.900004 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:28.900561 containerd[1535]: time="2025-09-16T04:31:28.900518808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}" Sep 16 04:31:28.900812 containerd[1535]: time="2025-09-16T04:31:28.900681257Z" level=info msg="connecting to shim 90f7a499c941fd14b1e17d12f96a5ed7b6b2e0b57e43c8eef0b19ffb824c444a" address="unix:///run/containerd/s/5b4b6e09b1d1c45acd86ad9560322be553e480ced9be96ffe8b2bce3a014d2b3" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:31:28.905872 kubelet[2309]: E0916 04:31:28.905849 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:28.906471 containerd[1535]: time="2025-09-16T04:31:28.906363082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:762611bf5d28ac5b2b7a323ac37ffec4,Namespace:kube-system,Attempt:0,}" Sep 16 
04:31:28.921131 systemd[1]: Started cri-containerd-90f7a499c941fd14b1e17d12f96a5ed7b6b2e0b57e43c8eef0b19ffb824c444a.scope - libcontainer container 90f7a499c941fd14b1e17d12f96a5ed7b6b2e0b57e43c8eef0b19ffb824c444a. Sep 16 04:31:28.925958 containerd[1535]: time="2025-09-16T04:31:28.925918226Z" level=info msg="connecting to shim d265ffcb5fe9b70c4c931e064931147c3ce9dca0f6aa8d8671789f6ad9699d60" address="unix:///run/containerd/s/d7047e7109641241a8f6f3c2339e357aa4fea29062d098f288fea1186c3e7b84" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:31:28.936201 containerd[1535]: time="2025-09-16T04:31:28.936166151Z" level=info msg="connecting to shim c161b23663cb919408697768a53c3fd5367644b16e426304e2c77e7cfe24cc3e" address="unix:///run/containerd/s/12a66b191d271051d71ba15128c31fd2940ffd9c2c3a7fc91f9b76223cff98d2" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:31:28.951136 systemd[1]: Started cri-containerd-d265ffcb5fe9b70c4c931e064931147c3ce9dca0f6aa8d8671789f6ad9699d60.scope - libcontainer container d265ffcb5fe9b70c4c931e064931147c3ce9dca0f6aa8d8671789f6ad9699d60. Sep 16 04:31:28.954518 systemd[1]: Started cri-containerd-c161b23663cb919408697768a53c3fd5367644b16e426304e2c77e7cfe24cc3e.scope - libcontainer container c161b23663cb919408697768a53c3fd5367644b16e426304e2c77e7cfe24cc3e. Sep 16 04:31:28.965597 containerd[1535]: time="2025-09-16T04:31:28.965564979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"90f7a499c941fd14b1e17d12f96a5ed7b6b2e0b57e43c8eef0b19ffb824c444a\"" Sep 16 04:31:28.968119 kubelet[2309]: E0916 04:31:28.968095 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:28.970215 containerd[1535]: time="2025-09-16T04:31:28.970191127Z" level=info msg="CreateContainer within sandbox \"90f7a499c941fd14b1e17d12f96a5ed7b6b2e0b57e43c8eef0b19ffb824c444a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 16 04:31:28.980312 containerd[1535]: time="2025-09-16T04:31:28.979960473Z" level=info msg="Container c0b47690e703149514723de5fbbe52cc2bf9aa23c885e816a5f03f2eaab0da2e: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:31:28.992916 containerd[1535]: time="2025-09-16T04:31:28.992864984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"d265ffcb5fe9b70c4c931e064931147c3ce9dca0f6aa8d8671789f6ad9699d60\"" Sep 16 04:31:28.993036 containerd[1535]: time="2025-09-16T04:31:28.992978073Z" level=info msg="CreateContainer within sandbox \"90f7a499c941fd14b1e17d12f96a5ed7b6b2e0b57e43c8eef0b19ffb824c444a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c0b47690e703149514723de5fbbe52cc2bf9aa23c885e816a5f03f2eaab0da2e\"" Sep 16 04:31:28.993590 containerd[1535]: time="2025-09-16T04:31:28.993559895Z" level=info msg="StartContainer for \"c0b47690e703149514723de5fbbe52cc2bf9aa23c885e816a5f03f2eaab0da2e\"" Sep 16 04:31:28.993959 kubelet[2309]: E0916 04:31:28.993793 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:28.994599 containerd[1535]: time="2025-09-16T04:31:28.994565532Z" level=info msg="connecting to 
shim c0b47690e703149514723de5fbbe52cc2bf9aa23c885e816a5f03f2eaab0da2e" address="unix:///run/containerd/s/5b4b6e09b1d1c45acd86ad9560322be553e480ced9be96ffe8b2bce3a014d2b3" protocol=ttrpc version=3 Sep 16 04:31:28.995135 containerd[1535]: time="2025-09-16T04:31:28.995108883Z" level=info msg="CreateContainer within sandbox \"d265ffcb5fe9b70c4c931e064931147c3ce9dca0f6aa8d8671789f6ad9699d60\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 16 04:31:29.007860 containerd[1535]: time="2025-09-16T04:31:29.007579278Z" level=info msg="Container 15c12d5e8c9abcfaf46f4c0842658ac9c1bad21f1c6dfb8dbb7d5f732c4785bc: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:31:29.008135 containerd[1535]: time="2025-09-16T04:31:29.008108885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:762611bf5d28ac5b2b7a323ac37ffec4,Namespace:kube-system,Attempt:0,} returns sandbox id \"c161b23663cb919408697768a53c3fd5367644b16e426304e2c77e7cfe24cc3e\"" Sep 16 04:31:29.009461 kubelet[2309]: E0916 04:31:29.009439 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:29.011461 containerd[1535]: time="2025-09-16T04:31:29.011178174Z" level=info msg="CreateContainer within sandbox \"c161b23663cb919408697768a53c3fd5367644b16e426304e2c77e7cfe24cc3e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 16 04:31:29.011271 systemd[1]: Started cri-containerd-c0b47690e703149514723de5fbbe52cc2bf9aa23c885e816a5f03f2eaab0da2e.scope - libcontainer container c0b47690e703149514723de5fbbe52cc2bf9aa23c885e816a5f03f2eaab0da2e. Sep 16 04:31:29.012592 containerd[1535]: time="2025-09-16T04:31:29.012563295Z" level=info msg="CreateContainer within sandbox \"d265ffcb5fe9b70c4c931e064931147c3ce9dca0f6aa8d8671789f6ad9699d60\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"15c12d5e8c9abcfaf46f4c0842658ac9c1bad21f1c6dfb8dbb7d5f732c4785bc\"" Sep 16 04:31:29.013070 containerd[1535]: time="2025-09-16T04:31:29.013048072Z" level=info msg="StartContainer for \"15c12d5e8c9abcfaf46f4c0842658ac9c1bad21f1c6dfb8dbb7d5f732c4785bc\"" Sep 16 04:31:29.014058 containerd[1535]: time="2025-09-16T04:31:29.013915994Z" level=info msg="connecting to shim 15c12d5e8c9abcfaf46f4c0842658ac9c1bad21f1c6dfb8dbb7d5f732c4785bc" address="unix:///run/containerd/s/d7047e7109641241a8f6f3c2339e357aa4fea29062d098f288fea1186c3e7b84" protocol=ttrpc version=3 Sep 16 04:31:29.018681 containerd[1535]: time="2025-09-16T04:31:29.018616495Z" level=info msg="Container fce5f45b9c6cb6e00d0eb9d59c2d0510d3719deb188caa04f17d3e30fb5b7ac0: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:31:29.025149 kubelet[2309]: E0916 04:31:29.024768 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="800ms" Sep 16 04:31:29.026822 containerd[1535]: time="2025-09-16T04:31:29.026785363Z" level=info msg="CreateContainer within sandbox \"c161b23663cb919408697768a53c3fd5367644b16e426304e2c77e7cfe24cc3e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fce5f45b9c6cb6e00d0eb9d59c2d0510d3719deb188caa04f17d3e30fb5b7ac0\"" Sep 16 04:31:29.027458 containerd[1535]: time="2025-09-16T04:31:29.027261133Z" level=info msg="StartContainer for 
\"fce5f45b9c6cb6e00d0eb9d59c2d0510d3719deb188caa04f17d3e30fb5b7ac0\"" Sep 16 04:31:29.028877 containerd[1535]: time="2025-09-16T04:31:29.028344244Z" level=info msg="connecting to shim fce5f45b9c6cb6e00d0eb9d59c2d0510d3719deb188caa04f17d3e30fb5b7ac0" address="unix:///run/containerd/s/12a66b191d271051d71ba15128c31fd2940ffd9c2c3a7fc91f9b76223cff98d2" protocol=ttrpc version=3 Sep 16 04:31:29.036160 systemd[1]: Started cri-containerd-15c12d5e8c9abcfaf46f4c0842658ac9c1bad21f1c6dfb8dbb7d5f732c4785bc.scope - libcontainer container 15c12d5e8c9abcfaf46f4c0842658ac9c1bad21f1c6dfb8dbb7d5f732c4785bc. Sep 16 04:31:29.049154 systemd[1]: Started cri-containerd-fce5f45b9c6cb6e00d0eb9d59c2d0510d3719deb188caa04f17d3e30fb5b7ac0.scope - libcontainer container fce5f45b9c6cb6e00d0eb9d59c2d0510d3719deb188caa04f17d3e30fb5b7ac0. Sep 16 04:31:29.057255 containerd[1535]: time="2025-09-16T04:31:29.057221720Z" level=info msg="StartContainer for \"c0b47690e703149514723de5fbbe52cc2bf9aa23c885e816a5f03f2eaab0da2e\" returns successfully" Sep 16 04:31:29.097985 containerd[1535]: time="2025-09-16T04:31:29.097892537Z" level=info msg="StartContainer for \"15c12d5e8c9abcfaf46f4c0842658ac9c1bad21f1c6dfb8dbb7d5f732c4785bc\" returns successfully" Sep 16 04:31:29.100083 containerd[1535]: time="2025-09-16T04:31:29.099899009Z" level=info msg="StartContainer for \"fce5f45b9c6cb6e00d0eb9d59c2d0510d3719deb188caa04f17d3e30fb5b7ac0\" returns successfully" Sep 16 04:31:29.192352 kubelet[2309]: I0916 04:31:29.191607 2309 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 16 04:31:29.444513 kubelet[2309]: E0916 04:31:29.444269 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 16 04:31:29.444513 kubelet[2309]: E0916 04:31:29.444398 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:29.446129 kubelet[2309]: E0916 04:31:29.446091 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 16 04:31:29.446451 kubelet[2309]: E0916 04:31:29.446431 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:29.449017 kubelet[2309]: E0916 04:31:29.448444 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 16 04:31:29.449017 kubelet[2309]: E0916 04:31:29.448560 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:30.451274 kubelet[2309]: E0916 04:31:30.451149 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 16 04:31:30.451674 kubelet[2309]: E0916 04:31:30.451571 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:30.451853 kubelet[2309]: E0916 04:31:30.451832 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"localhost\" not found" node="localhost" Sep 16 04:31:30.451970 kubelet[2309]: E0916 04:31:30.451956 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:30.626911 kubelet[2309]: E0916 04:31:30.626864 2309 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 16 04:31:30.721370 kubelet[2309]: I0916 04:31:30.721242 2309 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 16 04:31:30.723424 kubelet[2309]: I0916 04:31:30.723387 2309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 16 04:31:30.779475 kubelet[2309]: E0916 04:31:30.779440 2309 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 16 04:31:30.779475 kubelet[2309]: I0916 04:31:30.779472 2309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 16 04:31:30.781625 kubelet[2309]: E0916 04:31:30.781157 2309 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 16 04:31:30.781625 kubelet[2309]: I0916 04:31:30.781183 2309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 16 04:31:30.782664 kubelet[2309]: E0916 04:31:30.782603 2309 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 16 04:31:31.414280 kubelet[2309]: I0916 04:31:31.414208 2309 apiserver.go:52] "Watching apiserver" Sep 16 04:31:31.423510 kubelet[2309]: I0916 04:31:31.423458 2309 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 16 04:31:32.555865 systemd[1]: Reload requested from client PID 2587 ('systemctl') (unit session-7.scope)... Sep 16 04:31:32.555879 systemd[1]: Reloading... Sep 16 04:31:32.620035 zram_generator::config[2630]: No configuration found. Sep 16 04:31:32.836820 systemd[1]: Reloading finished in 280 ms. Sep 16 04:31:32.868240 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:31:32.884768 systemd[1]: kubelet.service: Deactivated successfully. Sep 16 04:31:32.884988 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:31:32.885077 systemd[1]: kubelet.service: Consumed 1.135s CPU time, 130.3M memory peak. Sep 16 04:31:32.886682 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:31:33.010920 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:31:33.014239 (kubelet)[2672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:31:33.057656 kubelet[2672]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 16 04:31:33.057656 kubelet[2672]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 16 04:31:33.057656 kubelet[2672]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:31:33.058046 kubelet[2672]: I0916 04:31:33.057719 2672 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:31:33.064092 kubelet[2672]: I0916 04:31:33.064058 2672 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 16 04:31:33.064092 kubelet[2672]: I0916 04:31:33.064086 2672 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:31:33.064373 kubelet[2672]: I0916 04:31:33.064355 2672 server.go:954] "Client rotation is on, will bootstrap in background" Sep 16 04:31:33.065552 kubelet[2672]: I0916 04:31:33.065528 2672 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 16 04:31:33.067879 kubelet[2672]: I0916 04:31:33.067783 2672 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:31:33.071672 kubelet[2672]: I0916 04:31:33.071654 2672 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:31:33.074380 kubelet[2672]: I0916 04:31:33.074356 2672 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 16 04:31:33.074571 kubelet[2672]: I0916 04:31:33.074544 2672 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:31:33.074817 kubelet[2672]: I0916 04:31:33.074570 2672 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 04:31:33.074883 kubelet[2672]: I0916 04:31:33.074827 2672 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 04:31:33.074883 kubelet[2672]: I0916 04:31:33.074836 2672 container_manager_linux.go:304] "Creating device plugin manager" Sep 16 04:31:33.074883 kubelet[2672]: I0916 04:31:33.074876 2672 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:31:33.075031 kubelet[2672]: I0916 04:31:33.075019 2672 kubelet.go:446] "Attempting to sync node with API server" Sep 16 04:31:33.075067 kubelet[2672]: I0916 04:31:33.075034 2672 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:31:33.075067 kubelet[2672]: I0916 04:31:33.075054 2672 kubelet.go:352] "Adding apiserver pod source" Sep 16 04:31:33.075067 kubelet[2672]: I0916 04:31:33.075063 2672 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:31:33.076830 kubelet[2672]: I0916 04:31:33.076086 2672 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:31:33.076830 kubelet[2672]: I0916 04:31:33.076649 2672 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 16 04:31:33.080149 kubelet[2672]: I0916 04:31:33.078833 2672 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 16 04:31:33.080149 kubelet[2672]: I0916 04:31:33.078871 2672 server.go:1287] "Started kubelet" Sep 16 04:31:33.080149 kubelet[2672]: I0916 04:31:33.079534 2672 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 04:31:33.080149 kubelet[2672]: I0916 04:31:33.079866 2672 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:31:33.083453 kubelet[2672]: I0916 04:31:33.083431 2672 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:31:33.084146 kubelet[2672]: I0916 04:31:33.084119 2672 fs_resource_analyzer.go:67] 
"Starting FS ResourceAnalyzer" Sep 16 04:31:33.084394 kubelet[2672]: I0916 04:31:33.084370 2672 server.go:479] "Adding debug handlers to kubelet server" Sep 16 04:31:33.085128 kubelet[2672]: I0916 04:31:33.085100 2672 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:31:33.091847 kubelet[2672]: E0916 04:31:33.085280 2672 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:31:33.091847 kubelet[2672]: I0916 04:31:33.085312 2672 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 16 04:31:33.091847 kubelet[2672]: I0916 04:31:33.085322 2672 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 16 04:31:33.091847 kubelet[2672]: I0916 04:31:33.091640 2672 factory.go:221] Registration of the systemd container factory successfully Sep 16 04:31:33.091847 kubelet[2672]: I0916 04:31:33.091723 2672 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 04:31:33.093046 kubelet[2672]: I0916 04:31:33.091731 2672 reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:31:33.098625 kubelet[2672]: E0916 04:31:33.098512 2672 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 04:31:33.098625 kubelet[2672]: I0916 04:31:33.098620 2672 factory.go:221] Registration of the containerd container factory successfully Sep 16 04:31:33.100567 kubelet[2672]: I0916 04:31:33.100531 2672 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 16 04:31:33.101763 kubelet[2672]: I0916 04:31:33.101734 2672 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 16 04:31:33.101763 kubelet[2672]: I0916 04:31:33.101756 2672 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 16 04:31:33.101846 kubelet[2672]: I0916 04:31:33.101774 2672 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 16 04:31:33.101846 kubelet[2672]: I0916 04:31:33.101782 2672 kubelet.go:2382] "Starting kubelet main sync loop" Sep 16 04:31:33.101846 kubelet[2672]: E0916 04:31:33.101821 2672 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 04:31:33.128831 kubelet[2672]: I0916 04:31:33.128803 2672 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 16 04:31:33.128831 kubelet[2672]: I0916 04:31:33.128823 2672 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 16 04:31:33.128959 kubelet[2672]: I0916 04:31:33.128843 2672 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:31:33.129024 kubelet[2672]: I0916 04:31:33.128990 2672 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 16 04:31:33.129053 kubelet[2672]: I0916 04:31:33.129022 2672 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 16 04:31:33.129053 kubelet[2672]: I0916 04:31:33.129038 2672 policy_none.go:49] "None policy: Start" Sep 16 04:31:33.129053 kubelet[2672]: I0916 04:31:33.129046 2672 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 16 04:31:33.129053 kubelet[2672]: I0916 04:31:33.129055 2672 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:31:33.129155 kubelet[2672]: I0916 04:31:33.129144 2672 state_mem.go:75] "Updated machine memory state" Sep 16 04:31:33.133157 kubelet[2672]: I0916 04:31:33.132502 2672 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 16 04:31:33.133157 kubelet[2672]: I0916 04:31:33.132653 2672 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:31:33.133157 kubelet[2672]: I0916 04:31:33.132665 2672 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:31:33.133157 kubelet[2672]: I0916 04:31:33.132835 2672 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:31:33.134054 kubelet[2672]: E0916 04:31:33.134034 2672 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 16 04:31:33.202863 kubelet[2672]: I0916 04:31:33.202816 2672 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 16 04:31:33.203152 kubelet[2672]: I0916 04:31:33.203123 2672 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 16 04:31:33.203508 kubelet[2672]: I0916 04:31:33.203493 2672 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 16 04:31:33.236124 kubelet[2672]: I0916 04:31:33.236101 2672 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 16 04:31:33.242206 kubelet[2672]: I0916 04:31:33.242170 2672 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 16 04:31:33.242312 kubelet[2672]: I0916 04:31:33.242247 2672 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 16 04:31:33.293536 kubelet[2672]: I0916 04:31:33.293488 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:31:33.293536 kubelet[2672]: I0916 04:31:33.293526 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:31:33.293677 kubelet[2672]: I0916 04:31:33.293549 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 16 04:31:33.293677 kubelet[2672]: I0916 04:31:33.293577 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/762611bf5d28ac5b2b7a323ac37ffec4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"762611bf5d28ac5b2b7a323ac37ffec4\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:31:33.293677 kubelet[2672]: I0916 04:31:33.293594 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/762611bf5d28ac5b2b7a323ac37ffec4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"762611bf5d28ac5b2b7a323ac37ffec4\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:31:33.293677 kubelet[2672]: I0916 04:31:33.293609 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/762611bf5d28ac5b2b7a323ac37ffec4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"762611bf5d28ac5b2b7a323ac37ffec4\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:31:33.293677 kubelet[2672]: I0916 04:31:33.293628 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:31:33.293798 kubelet[2672]: I0916 04:31:33.293655 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:31:33.293798 kubelet[2672]: I0916 04:31:33.293673 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:31:33.511647 kubelet[2672]: E0916 04:31:33.511451 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:33.511647 kubelet[2672]: E0916 04:31:33.511512 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:33.511647 kubelet[2672]: E0916 04:31:33.511535 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:33.552288 sudo[2707]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 16 04:31:33.552554 sudo[2707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 16 04:31:33.861978 sudo[2707]: pam_unix(sudo:session): session closed for user root Sep 16 04:31:34.076677 kubelet[2672]: I0916 04:31:34.076610 2672 apiserver.go:52] "Watching apiserver" Sep 16 04:31:34.092516 kubelet[2672]: I0916 04:31:34.092489 2672 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 16 04:31:34.117728 kubelet[2672]: E0916 04:31:34.117304 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:34.117728 kubelet[2672]: E0916 04:31:34.117395 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:34.117728 kubelet[2672]: I0916 04:31:34.117525 2672 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 16 04:31:34.126231 kubelet[2672]: E0916 04:31:34.126191 2672 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 16 04:31:34.126385 kubelet[2672]: E0916 04:31:34.126363 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:34.185618 kubelet[2672]: I0916 04:31:34.184840 2672 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.184825388 podStartE2EDuration="1.184825388s" podCreationTimestamp="2025-09-16 04:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:31:34.184556492 +0000 UTC m=+1.166837471" watchObservedRunningTime="2025-09-16 04:31:34.184825388 +0000 UTC m=+1.167106327" Sep 16 04:31:34.202450 kubelet[2672]: I0916 04:31:34.202159 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.2021424330000001 podStartE2EDuration="1.202142433s" podCreationTimestamp="2025-09-16 04:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:31:34.194389273 +0000 UTC m=+1.176670292" watchObservedRunningTime="2025-09-16 04:31:34.202142433 +0000 UTC m=+1.184423412" Sep 16 04:31:34.210373 kubelet[2672]: I0916 04:31:34.210325 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.21031026 podStartE2EDuration="1.21031026s" podCreationTimestamp="2025-09-16 04:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:31:34.202115343 +0000 UTC m=+1.184396322" watchObservedRunningTime="2025-09-16 04:31:34.21031026 +0000 UTC m=+1.192591239" Sep 16 04:31:35.118744 kubelet[2672]: E0916 04:31:35.118717 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:35.119108 kubelet[2672]: E0916 04:31:35.118792 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:35.690571 sudo[1747]: pam_unix(sudo:session): session closed for user root Sep 16 04:31:35.691756 sshd[1746]: Connection closed by 10.0.0.1 port 40656 Sep 16 04:31:35.692251 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Sep 16 04:31:35.695981 systemd[1]: sshd@6-10.0.0.83:22-10.0.0.1:40656.service: Deactivated successfully. Sep 16 04:31:35.698774 systemd[1]: session-7.scope: Deactivated successfully. Sep 16 04:31:35.699042 systemd[1]: session-7.scope: Consumed 6.699s CPU time, 260.3M memory peak. Sep 16 04:31:35.700231 systemd-logind[1521]: Session 7 logged out. Waiting for processes to exit. Sep 16 04:31:35.701572 systemd-logind[1521]: Removed session 7. 
Sep 16 04:31:36.035172 kubelet[2672]: E0916 04:31:36.035092 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:37.969730 kubelet[2672]: E0916 04:31:37.969687 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:39.596853 kubelet[2672]: I0916 04:31:39.596807 2672 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 16 04:31:39.597580 kubelet[2672]: I0916 04:31:39.597550 2672 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 16 04:31:39.597645 containerd[1535]: time="2025-09-16T04:31:39.597277830Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 16 04:31:40.475565 systemd[1]: Created slice kubepods-besteffort-podcb4c4846_ab94_47a0_b61d_b66093d62984.slice - libcontainer container kubepods-besteffort-podcb4c4846_ab94_47a0_b61d_b66093d62984.slice. Sep 16 04:31:40.501922 systemd[1]: Created slice kubepods-burstable-pod8c0c8707_79a9_4174_8420_7251c2494f83.slice - libcontainer container kubepods-burstable-pod8c0c8707_79a9_4174_8420_7251c2494f83.slice. Sep 16 04:31:40.545264 kubelet[2672]: I0916 04:31:40.545223 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-bpf-maps\") pod \"cilium-4c28d\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " pod="kube-system/cilium-4c28d" Sep 16 04:31:40.545264 kubelet[2672]: I0916 04:31:40.545260 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-lib-modules\") pod \"cilium-4c28d\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " pod="kube-system/cilium-4c28d" Sep 16 04:31:40.545424 kubelet[2672]: I0916 04:31:40.545284 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8c0c8707-79a9-4174-8420-7251c2494f83-clustermesh-secrets\") pod \"cilium-4c28d\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " pod="kube-system/cilium-4c28d" Sep 16 04:31:40.545424 kubelet[2672]: I0916 04:31:40.545299 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp2bb\" (UniqueName: \"kubernetes.io/projected/8c0c8707-79a9-4174-8420-7251c2494f83-kube-api-access-bp2bb\") pod \"cilium-4c28d\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " pod="kube-system/cilium-4c28d" Sep 16 04:31:40.545424 kubelet[2672]: I0916 04:31:40.545321 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb4c4846-ab94-47a0-b61d-b66093d62984-xtables-lock\") pod \"kube-proxy-9p45s\" (UID: \"cb4c4846-ab94-47a0-b61d-b66093d62984\") " pod="kube-system/kube-proxy-9p45s" Sep 16 04:31:40.545424 kubelet[2672]: I0916 04:31:40.545337 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqtxf\" (UniqueName: 
\"kubernetes.io/projected/cb4c4846-ab94-47a0-b61d-b66093d62984-kube-api-access-tqtxf\") pod \"kube-proxy-9p45s\" (UID: \"cb4c4846-ab94-47a0-b61d-b66093d62984\") " pod="kube-system/kube-proxy-9p45s" Sep 16 04:31:40.545424 kubelet[2672]: I0916 04:31:40.545359 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-etc-cni-netd\") pod \"cilium-4c28d\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " pod="kube-system/cilium-4c28d" Sep 16 04:31:40.545522 kubelet[2672]: I0916 04:31:40.545377 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-xtables-lock\") pod \"cilium-4c28d\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " pod="kube-system/cilium-4c28d" Sep 16 04:31:40.545522 kubelet[2672]: I0916 04:31:40.545391 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8c0c8707-79a9-4174-8420-7251c2494f83-hubble-tls\") pod \"cilium-4c28d\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " pod="kube-system/cilium-4c28d" Sep 16 04:31:40.545522 kubelet[2672]: I0916 04:31:40.545407 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb4c4846-ab94-47a0-b61d-b66093d62984-lib-modules\") pod \"kube-proxy-9p45s\" (UID: \"cb4c4846-ab94-47a0-b61d-b66093d62984\") " pod="kube-system/kube-proxy-9p45s" Sep 16 04:31:40.545522 kubelet[2672]: I0916 04:31:40.545421 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-cni-path\") pod \"cilium-4c28d\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " pod="kube-system/cilium-4c28d" Sep 16 04:31:40.545522 kubelet[2672]: I0916 04:31:40.545437 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-host-proc-sys-kernel\") pod \"cilium-4c28d\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " pod="kube-system/cilium-4c28d" Sep 16 04:31:40.545522 kubelet[2672]: I0916 04:31:40.545452 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-hostproc\") pod \"cilium-4c28d\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " pod="kube-system/cilium-4c28d" Sep 16 04:31:40.545631 kubelet[2672]: I0916 04:31:40.545468 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cb4c4846-ab94-47a0-b61d-b66093d62984-kube-proxy\") pod \"kube-proxy-9p45s\" (UID: \"cb4c4846-ab94-47a0-b61d-b66093d62984\") " pod="kube-system/kube-proxy-9p45s" Sep 16 04:31:40.545631 kubelet[2672]: I0916 04:31:40.545483 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-cilium-cgroup\") pod \"cilium-4c28d\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " pod="kube-system/cilium-4c28d" Sep 16 
04:31:40.545631 kubelet[2672]: I0916 04:31:40.545501 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-cilium-run\") pod \"cilium-4c28d\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " pod="kube-system/cilium-4c28d" Sep 16 04:31:40.545631 kubelet[2672]: I0916 04:31:40.545515 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c0c8707-79a9-4174-8420-7251c2494f83-cilium-config-path\") pod \"cilium-4c28d\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " pod="kube-system/cilium-4c28d" Sep 16 04:31:40.545631 kubelet[2672]: I0916 04:31:40.545534 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-host-proc-sys-net\") pod \"cilium-4c28d\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " pod="kube-system/cilium-4c28d" Sep 16 04:31:40.736305 systemd[1]: Created slice kubepods-besteffort-pod72cdc903_14e7_4d78_9259_398f38492d72.slice - libcontainer container kubepods-besteffort-pod72cdc903_14e7_4d78_9259_398f38492d72.slice. Sep 16 04:31:40.748020 kubelet[2672]: I0916 04:31:40.746751 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klblc\" (UniqueName: \"kubernetes.io/projected/72cdc903-14e7-4d78-9259-398f38492d72-kube-api-access-klblc\") pod \"cilium-operator-6c4d7847fc-kt7n4\" (UID: \"72cdc903-14e7-4d78-9259-398f38492d72\") " pod="kube-system/cilium-operator-6c4d7847fc-kt7n4" Sep 16 04:31:40.748020 kubelet[2672]: I0916 04:31:40.746813 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72cdc903-14e7-4d78-9259-398f38492d72-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-kt7n4\" (UID: \"72cdc903-14e7-4d78-9259-398f38492d72\") " pod="kube-system/cilium-operator-6c4d7847fc-kt7n4" Sep 16 04:31:40.800152 kubelet[2672]: E0916 04:31:40.800101 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:40.800888 containerd[1535]: time="2025-09-16T04:31:40.800854017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9p45s,Uid:cb4c4846-ab94-47a0-b61d-b66093d62984,Namespace:kube-system,Attempt:0,}" Sep 16 04:31:40.804895 kubelet[2672]: E0916 04:31:40.804870 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:40.806440 containerd[1535]: time="2025-09-16T04:31:40.806400537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4c28d,Uid:8c0c8707-79a9-4174-8420-7251c2494f83,Namespace:kube-system,Attempt:0,}" Sep 16 04:31:40.823772 containerd[1535]: time="2025-09-16T04:31:40.823714815Z" level=info msg="connecting to shim 74d092c0ba5817e7fc9f5507c5b1babe7bd492fe4267d91ee38d78c9a77cc158" address="unix:///run/containerd/s/52ceb526ab7318dddd50706504fe1fef0d6fcd71152652e34b89a11b8f5ddec0" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:31:40.829780 containerd[1535]: time="2025-09-16T04:31:40.829721135Z" level=info msg="connecting to shim 
5acc5ac80f1773e5dacdfa6c3d4d0515bc9bdb61093026e99923c58b24f50b2d" address="unix:///run/containerd/s/a589e71095ccd8e6f4a1bdbc5f51a36b0158691657c3d220568fa13503e67cd0" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:31:40.847153 systemd[1]: Started cri-containerd-74d092c0ba5817e7fc9f5507c5b1babe7bd492fe4267d91ee38d78c9a77cc158.scope - libcontainer container 74d092c0ba5817e7fc9f5507c5b1babe7bd492fe4267d91ee38d78c9a77cc158. Sep 16 04:31:40.851065 systemd[1]: Started cri-containerd-5acc5ac80f1773e5dacdfa6c3d4d0515bc9bdb61093026e99923c58b24f50b2d.scope - libcontainer container 5acc5ac80f1773e5dacdfa6c3d4d0515bc9bdb61093026e99923c58b24f50b2d. Sep 16 04:31:40.883105 containerd[1535]: time="2025-09-16T04:31:40.883065331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4c28d,Uid:8c0c8707-79a9-4174-8420-7251c2494f83,Namespace:kube-system,Attempt:0,} returns sandbox id \"5acc5ac80f1773e5dacdfa6c3d4d0515bc9bdb61093026e99923c58b24f50b2d\"" Sep 16 04:31:40.883832 kubelet[2672]: E0916 04:31:40.883811 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:40.885642 containerd[1535]: time="2025-09-16T04:31:40.885604011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9p45s,Uid:cb4c4846-ab94-47a0-b61d-b66093d62984,Namespace:kube-system,Attempt:0,} returns sandbox id \"74d092c0ba5817e7fc9f5507c5b1babe7bd492fe4267d91ee38d78c9a77cc158\"" Sep 16 04:31:40.886847 kubelet[2672]: E0916 04:31:40.886585 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:40.888522 containerd[1535]: time="2025-09-16T04:31:40.887963659Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 16 04:31:40.892410 containerd[1535]: time="2025-09-16T04:31:40.892369062Z" level=info msg="CreateContainer within sandbox \"74d092c0ba5817e7fc9f5507c5b1babe7bd492fe4267d91ee38d78c9a77cc158\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 16 04:31:40.900666 containerd[1535]: time="2025-09-16T04:31:40.900625971Z" level=info msg="Container 74131dcae409236c451ba1c1168d6365e2092e72277fd70bdddaa935d867e48c: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:31:40.907203 containerd[1535]: time="2025-09-16T04:31:40.907162743Z" level=info msg="CreateContainer within sandbox \"74d092c0ba5817e7fc9f5507c5b1babe7bd492fe4267d91ee38d78c9a77cc158\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"74131dcae409236c451ba1c1168d6365e2092e72277fd70bdddaa935d867e48c\"" Sep 16 04:31:40.909027 containerd[1535]: time="2025-09-16T04:31:40.908797826Z" level=info msg="StartContainer for \"74131dcae409236c451ba1c1168d6365e2092e72277fd70bdddaa935d867e48c\"" Sep 16 04:31:40.910668 containerd[1535]: time="2025-09-16T04:31:40.910638905Z" level=info msg="connecting to shim 74131dcae409236c451ba1c1168d6365e2092e72277fd70bdddaa935d867e48c" address="unix:///run/containerd/s/52ceb526ab7318dddd50706504fe1fef0d6fcd71152652e34b89a11b8f5ddec0" protocol=ttrpc version=3 Sep 16 04:31:40.938196 systemd[1]: Started cri-containerd-74131dcae409236c451ba1c1168d6365e2092e72277fd70bdddaa935d867e48c.scope - libcontainer container 74131dcae409236c451ba1c1168d6365e2092e72277fd70bdddaa935d867e48c. 
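The earlier "No cni config template is specified, wait for other system components to drop the config" message means pod networking stays unconfigured until the cilium agent being started here writes a CNI config file. A rough sketch of that wait, assuming the conventional /etc/cni/net.d directory used by containerd's CNI plugin:

// cni_config_wait.go - polls the CNI config directory until a network
// configuration file shows up.
package main

import (
	"fmt"
	"path/filepath"
	"time"
)

func main() {
	const dir = "/etc/cni/net.d" // assumed default; containerd accepts .conf, .conflist and .json files here
	for {
		var matches []string
		for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
			m, _ := filepath.Glob(filepath.Join(dir, pattern))
			matches = append(matches, m...)
		}
		if len(matches) > 0 {
			fmt.Println("CNI config present:", matches)
			return
		}
		fmt.Println("no CNI config yet, retrying")
		time.Sleep(5 * time.Second)
	}
}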
Sep 16 04:31:40.973043 containerd[1535]: time="2025-09-16T04:31:40.973006263Z" level=info msg="StartContainer for \"74131dcae409236c451ba1c1168d6365e2092e72277fd70bdddaa935d867e48c\" returns successfully" Sep 16 04:31:41.041799 kubelet[2672]: E0916 04:31:41.041758 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:41.043353 containerd[1535]: time="2025-09-16T04:31:41.043222020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-kt7n4,Uid:72cdc903-14e7-4d78-9259-398f38492d72,Namespace:kube-system,Attempt:0,}" Sep 16 04:31:41.058953 containerd[1535]: time="2025-09-16T04:31:41.058911148Z" level=info msg="connecting to shim c7b8dd33f968d30746313142669d78fd646704a71ffc8aa947b5d9e48fed97a3" address="unix:///run/containerd/s/c458824c38cdcee222435c6d207e737e6448c7913e37cd6f6e1d1a8c470c090d" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:31:41.091765 systemd[1]: Started cri-containerd-c7b8dd33f968d30746313142669d78fd646704a71ffc8aa947b5d9e48fed97a3.scope - libcontainer container c7b8dd33f968d30746313142669d78fd646704a71ffc8aa947b5d9e48fed97a3. Sep 16 04:31:41.135516 kubelet[2672]: E0916 04:31:41.135137 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:41.142980 containerd[1535]: time="2025-09-16T04:31:41.142935300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-kt7n4,Uid:72cdc903-14e7-4d78-9259-398f38492d72,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7b8dd33f968d30746313142669d78fd646704a71ffc8aa947b5d9e48fed97a3\"" Sep 16 04:31:41.143951 kubelet[2672]: E0916 04:31:41.143918 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:41.907846 kubelet[2672]: E0916 04:31:41.907807 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:41.934653 kubelet[2672]: I0916 04:31:41.934499 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9p45s" podStartSLOduration=1.9317604849999999 podStartE2EDuration="1.931760485s" podCreationTimestamp="2025-09-16 04:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:31:41.146853342 +0000 UTC m=+8.129134321" watchObservedRunningTime="2025-09-16 04:31:41.931760485 +0000 UTC m=+8.914041424" Sep 16 04:31:42.137018 kubelet[2672]: E0916 04:31:42.136943 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:43.138050 kubelet[2672]: E0916 04:31:43.138000 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:46.042053 kubelet[2672]: E0916 04:31:46.041979 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 
04:31:47.946186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1519433924.mount: Deactivated successfully. Sep 16 04:31:47.978984 kubelet[2672]: E0916 04:31:47.978951 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:48.145472 kubelet[2672]: E0916 04:31:48.145313 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:49.182929 containerd[1535]: time="2025-09-16T04:31:49.182855541Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:31:49.183388 containerd[1535]: time="2025-09-16T04:31:49.183305069Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 16 04:31:49.184162 containerd[1535]: time="2025-09-16T04:31:49.184129037Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:31:49.186188 containerd[1535]: time="2025-09-16T04:31:49.186153492Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.297744916s" Sep 16 04:31:49.186235 containerd[1535]: time="2025-09-16T04:31:49.186189176Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 16 04:31:49.192414 containerd[1535]: time="2025-09-16T04:31:49.192196375Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 16 04:31:49.202386 containerd[1535]: time="2025-09-16T04:31:49.202343334Z" level=info msg="CreateContainer within sandbox \"5acc5ac80f1773e5dacdfa6c3d4d0515bc9bdb61093026e99923c58b24f50b2d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 16 04:31:49.207671 containerd[1535]: time="2025-09-16T04:31:49.207621055Z" level=info msg="Container befe695a4ab6555f28b5959460782037f46db44a7e1941321309fbcee9eec98d: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:31:49.212484 containerd[1535]: time="2025-09-16T04:31:49.212448649Z" level=info msg="CreateContainer within sandbox \"5acc5ac80f1773e5dacdfa6c3d4d0515bc9bdb61093026e99923c58b24f50b2d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"befe695a4ab6555f28b5959460782037f46db44a7e1941321309fbcee9eec98d\"" Sep 16 04:31:49.217359 containerd[1535]: time="2025-09-16T04:31:49.216128520Z" level=info msg="StartContainer for \"befe695a4ab6555f28b5959460782037f46db44a7e1941321309fbcee9eec98d\"" Sep 16 04:31:49.217613 containerd[1535]: time="2025-09-16T04:31:49.217588596Z" level=info msg="connecting to shim befe695a4ab6555f28b5959460782037f46db44a7e1941321309fbcee9eec98d" 
address="unix:///run/containerd/s/a589e71095ccd8e6f4a1bdbc5f51a36b0158691657c3d220568fa13503e67cd0" protocol=ttrpc version=3 Sep 16 04:31:49.263218 systemd[1]: Started cri-containerd-befe695a4ab6555f28b5959460782037f46db44a7e1941321309fbcee9eec98d.scope - libcontainer container befe695a4ab6555f28b5959460782037f46db44a7e1941321309fbcee9eec98d. Sep 16 04:31:49.296356 containerd[1535]: time="2025-09-16T04:31:49.296317290Z" level=info msg="StartContainer for \"befe695a4ab6555f28b5959460782037f46db44a7e1941321309fbcee9eec98d\" returns successfully" Sep 16 04:31:49.307420 systemd[1]: cri-containerd-befe695a4ab6555f28b5959460782037f46db44a7e1941321309fbcee9eec98d.scope: Deactivated successfully. Sep 16 04:31:49.324313 containerd[1535]: time="2025-09-16T04:31:49.324277904Z" level=info msg="received exit event container_id:\"befe695a4ab6555f28b5959460782037f46db44a7e1941321309fbcee9eec98d\" id:\"befe695a4ab6555f28b5959460782037f46db44a7e1941321309fbcee9eec98d\" pid:3089 exited_at:{seconds:1757997109 nanos:319088752}" Sep 16 04:31:49.324701 containerd[1535]: time="2025-09-16T04:31:49.324355792Z" level=info msg="TaskExit event in podsandbox handler container_id:\"befe695a4ab6555f28b5959460782037f46db44a7e1941321309fbcee9eec98d\" id:\"befe695a4ab6555f28b5959460782037f46db44a7e1941321309fbcee9eec98d\" pid:3089 exited_at:{seconds:1757997109 nanos:319088752}" Sep 16 04:31:49.352342 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-befe695a4ab6555f28b5959460782037f46db44a7e1941321309fbcee9eec98d-rootfs.mount: Deactivated successfully. Sep 16 04:31:50.177905 kubelet[2672]: E0916 04:31:50.176435 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:50.182751 containerd[1535]: time="2025-09-16T04:31:50.182713807Z" level=info msg="CreateContainer within sandbox \"5acc5ac80f1773e5dacdfa6c3d4d0515bc9bdb61093026e99923c58b24f50b2d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 16 04:31:50.226029 containerd[1535]: time="2025-09-16T04:31:50.225885730Z" level=info msg="Container a987e2c123a8d4b3805d82d070a05c51d34fd11d69163399b9e91b2575977437: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:31:50.271005 containerd[1535]: time="2025-09-16T04:31:50.270938603Z" level=info msg="CreateContainer within sandbox \"5acc5ac80f1773e5dacdfa6c3d4d0515bc9bdb61093026e99923c58b24f50b2d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a987e2c123a8d4b3805d82d070a05c51d34fd11d69163399b9e91b2575977437\"" Sep 16 04:31:50.271697 containerd[1535]: time="2025-09-16T04:31:50.271663317Z" level=info msg="StartContainer for \"a987e2c123a8d4b3805d82d070a05c51d34fd11d69163399b9e91b2575977437\"" Sep 16 04:31:50.272707 containerd[1535]: time="2025-09-16T04:31:50.272681019Z" level=info msg="connecting to shim a987e2c123a8d4b3805d82d070a05c51d34fd11d69163399b9e91b2575977437" address="unix:///run/containerd/s/a589e71095ccd8e6f4a1bdbc5f51a36b0158691657c3d220568fa13503e67cd0" protocol=ttrpc version=3 Sep 16 04:31:50.295511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2972365446.mount: Deactivated successfully. Sep 16 04:31:50.316158 systemd[1]: Started cri-containerd-a987e2c123a8d4b3805d82d070a05c51d34fd11d69163399b9e91b2575977437.scope - libcontainer container a987e2c123a8d4b3805d82d070a05c51d34fd11d69163399b9e91b2575977437. 
Sep 16 04:31:50.351767 containerd[1535]: time="2025-09-16T04:31:50.351713407Z" level=info msg="StartContainer for \"a987e2c123a8d4b3805d82d070a05c51d34fd11d69163399b9e91b2575977437\" returns successfully" Sep 16 04:31:50.359888 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 16 04:31:50.360121 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:31:50.360754 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:31:50.362295 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:31:50.364910 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 16 04:31:50.367341 systemd[1]: cri-containerd-a987e2c123a8d4b3805d82d070a05c51d34fd11d69163399b9e91b2575977437.scope: Deactivated successfully. Sep 16 04:31:50.371271 containerd[1535]: time="2025-09-16T04:31:50.371184775Z" level=info msg="received exit event container_id:\"a987e2c123a8d4b3805d82d070a05c51d34fd11d69163399b9e91b2575977437\" id:\"a987e2c123a8d4b3805d82d070a05c51d34fd11d69163399b9e91b2575977437\" pid:3143 exited_at:{seconds:1757997110 nanos:370368492}" Sep 16 04:31:50.371399 containerd[1535]: time="2025-09-16T04:31:50.371367673Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a987e2c123a8d4b3805d82d070a05c51d34fd11d69163399b9e91b2575977437\" id:\"a987e2c123a8d4b3805d82d070a05c51d34fd11d69163399b9e91b2575977437\" pid:3143 exited_at:{seconds:1757997110 nanos:370368492}" Sep 16 04:31:50.413209 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:31:50.539261 update_engine[1526]: I20250916 04:31:50.539166 1526 update_attempter.cc:509] Updating boot flags... Sep 16 04:31:50.712911 containerd[1535]: time="2025-09-16T04:31:50.712689368Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:31:50.713772 containerd[1535]: time="2025-09-16T04:31:50.713746195Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 16 04:31:50.719354 containerd[1535]: time="2025-09-16T04:31:50.719304837Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:31:50.720895 containerd[1535]: time="2025-09-16T04:31:50.720835152Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.528606414s" Sep 16 04:31:50.720895 containerd[1535]: time="2025-09-16T04:31:50.720883637Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 16 04:31:50.723128 containerd[1535]: time="2025-09-16T04:31:50.722989769Z" level=info msg="CreateContainer within sandbox \"c7b8dd33f968d30746313142669d78fd646704a71ffc8aa947b5d9e48fed97a3\" for container 
&ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 16 04:31:50.734790 containerd[1535]: time="2025-09-16T04:31:50.734166939Z" level=info msg="Container 3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:31:50.740381 containerd[1535]: time="2025-09-16T04:31:50.740342043Z" level=info msg="CreateContainer within sandbox \"c7b8dd33f968d30746313142669d78fd646704a71ffc8aa947b5d9e48fed97a3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221\"" Sep 16 04:31:50.741026 containerd[1535]: time="2025-09-16T04:31:50.741001910Z" level=info msg="StartContainer for \"3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221\"" Sep 16 04:31:50.742082 containerd[1535]: time="2025-09-16T04:31:50.742029614Z" level=info msg="connecting to shim 3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221" address="unix:///run/containerd/s/c458824c38cdcee222435c6d207e737e6448c7913e37cd6f6e1d1a8c470c090d" protocol=ttrpc version=3 Sep 16 04:31:50.765208 systemd[1]: Started cri-containerd-3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221.scope - libcontainer container 3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221. Sep 16 04:31:50.791136 containerd[1535]: time="2025-09-16T04:31:50.791027646Z" level=info msg="StartContainer for \"3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221\" returns successfully" Sep 16 04:31:51.189674 kubelet[2672]: E0916 04:31:51.189193 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:51.195976 kubelet[2672]: E0916 04:31:51.195928 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:51.200190 containerd[1535]: time="2025-09-16T04:31:51.200146443Z" level=info msg="CreateContainer within sandbox \"5acc5ac80f1773e5dacdfa6c3d4d0515bc9bdb61093026e99923c58b24f50b2d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 16 04:31:51.211169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a987e2c123a8d4b3805d82d070a05c51d34fd11d69163399b9e91b2575977437-rootfs.mount: Deactivated successfully. Sep 16 04:31:51.218022 kubelet[2672]: I0916 04:31:51.217904 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-kt7n4" podStartSLOduration=1.6417680369999998 podStartE2EDuration="11.217885508s" podCreationTimestamp="2025-09-16 04:31:40 +0000 UTC" firstStartedPulling="2025-09-16 04:31:41.14549856 +0000 UTC m=+8.127779539" lastFinishedPulling="2025-09-16 04:31:50.721616031 +0000 UTC m=+17.703897010" observedRunningTime="2025-09-16 04:31:51.215627891 +0000 UTC m=+18.197908870" watchObservedRunningTime="2025-09-16 04:31:51.217885508 +0000 UTC m=+18.200166487" Sep 16 04:31:51.221064 containerd[1535]: time="2025-09-16T04:31:51.219460179Z" level=info msg="Container 0ff36b3a86d252c52317ba9d61d8452092e118df54f11e73b724b130b47e77d0: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:31:51.220978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3728560959.mount: Deactivated successfully. 
Sep 16 04:31:51.234606 containerd[1535]: time="2025-09-16T04:31:51.234541388Z" level=info msg="CreateContainer within sandbox \"5acc5ac80f1773e5dacdfa6c3d4d0515bc9bdb61093026e99923c58b24f50b2d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0ff36b3a86d252c52317ba9d61d8452092e118df54f11e73b724b130b47e77d0\"" Sep 16 04:31:51.236330 containerd[1535]: time="2025-09-16T04:31:51.236180226Z" level=info msg="StartContainer for \"0ff36b3a86d252c52317ba9d61d8452092e118df54f11e73b724b130b47e77d0\"" Sep 16 04:31:51.239006 containerd[1535]: time="2025-09-16T04:31:51.238641702Z" level=info msg="connecting to shim 0ff36b3a86d252c52317ba9d61d8452092e118df54f11e73b724b130b47e77d0" address="unix:///run/containerd/s/a589e71095ccd8e6f4a1bdbc5f51a36b0158691657c3d220568fa13503e67cd0" protocol=ttrpc version=3 Sep 16 04:31:51.261175 systemd[1]: Started cri-containerd-0ff36b3a86d252c52317ba9d61d8452092e118df54f11e73b724b130b47e77d0.scope - libcontainer container 0ff36b3a86d252c52317ba9d61d8452092e118df54f11e73b724b130b47e77d0. Sep 16 04:31:51.326014 containerd[1535]: time="2025-09-16T04:31:51.324182483Z" level=info msg="StartContainer for \"0ff36b3a86d252c52317ba9d61d8452092e118df54f11e73b724b130b47e77d0\" returns successfully" Sep 16 04:31:51.327500 systemd[1]: cri-containerd-0ff36b3a86d252c52317ba9d61d8452092e118df54f11e73b724b130b47e77d0.scope: Deactivated successfully. Sep 16 04:31:51.327790 systemd[1]: cri-containerd-0ff36b3a86d252c52317ba9d61d8452092e118df54f11e73b724b130b47e77d0.scope: Consumed 40ms CPU time, 7M memory peak, 8.2M read from disk. Sep 16 04:31:51.331232 containerd[1535]: time="2025-09-16T04:31:51.331128510Z" level=info msg="received exit event container_id:\"0ff36b3a86d252c52317ba9d61d8452092e118df54f11e73b724b130b47e77d0\" id:\"0ff36b3a86d252c52317ba9d61d8452092e118df54f11e73b724b130b47e77d0\" pid:3249 exited_at:{seconds:1757997111 nanos:330861684}" Sep 16 04:31:51.331232 containerd[1535]: time="2025-09-16T04:31:51.331195957Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0ff36b3a86d252c52317ba9d61d8452092e118df54f11e73b724b130b47e77d0\" id:\"0ff36b3a86d252c52317ba9d61d8452092e118df54f11e73b724b130b47e77d0\" pid:3249 exited_at:{seconds:1757997111 nanos:330861684}" Sep 16 04:31:51.371033 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ff36b3a86d252c52317ba9d61d8452092e118df54f11e73b724b130b47e77d0-rootfs.mount: Deactivated successfully. 
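Unit names such as var-lib-containerd-tmpmounts-containerd\x2dmount3728560959.mount and the long run-containerd-...-rootfs.mount units above are systemd-escaped mount paths: the leading "/" is dropped, path separators become "-", and any byte inside a component that is not alphanumeric, ":", "_" or "." (including a literal "-") becomes a \xNN hex escape. A rough Go sketch of that escaping, approximating the behaviour of systemd-escape --path rather than reusing systemd's own code:

    package main

    import (
        "fmt"
        "strings"
    )

    // escapePath roughly mimics `systemd-escape --path`: strip the leading "/",
    // hex-escape disallowed bytes within each component, join components with "-".
    func escapePath(p string) string {
        parts := strings.Split(strings.Trim(p, "/"), "/")
        esc := func(s string) string {
            var b strings.Builder
            for i := 0; i < len(s); i++ {
                c := s[i]
                ok := c >= 'a' && c <= 'z' || c >= 'A' && c <= 'Z' ||
                    c >= '0' && c <= '9' || c == ':' || c == '_' || c == '.'
                if ok {
                    b.WriteByte(c)
                } else {
                    fmt.Fprintf(&b, `\x%02x`, c)
                }
            }
            return b.String()
        }
        for i, part := range parts {
            parts[i] = esc(part)
        }
        return strings.Join(parts, "-")
    }

    func main() {
        // /var/lib/containerd/tmpmounts/containerd-mount3728560959
        // -> var-lib-containerd-tmpmounts-containerd\x2dmount3728560959.mount
        fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount3728560959") + ".mount")
    }
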
Sep 16 04:31:52.200807 kubelet[2672]: E0916 04:31:52.200772 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:52.202284 kubelet[2672]: E0916 04:31:52.200870 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:52.204037 containerd[1535]: time="2025-09-16T04:31:52.203258296Z" level=info msg="CreateContainer within sandbox \"5acc5ac80f1773e5dacdfa6c3d4d0515bc9bdb61093026e99923c58b24f50b2d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 16 04:31:52.217589 containerd[1535]: time="2025-09-16T04:31:52.217548962Z" level=info msg="Container 4828b3e9919d1ac89ae4197951fac7fd5739e3f33372a1d98016b93818372bc0: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:31:52.225447 containerd[1535]: time="2025-09-16T04:31:52.225405841Z" level=info msg="CreateContainer within sandbox \"5acc5ac80f1773e5dacdfa6c3d4d0515bc9bdb61093026e99923c58b24f50b2d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4828b3e9919d1ac89ae4197951fac7fd5739e3f33372a1d98016b93818372bc0\"" Sep 16 04:31:52.226154 containerd[1535]: time="2025-09-16T04:31:52.226122946Z" level=info msg="StartContainer for \"4828b3e9919d1ac89ae4197951fac7fd5739e3f33372a1d98016b93818372bc0\"" Sep 16 04:31:52.227007 containerd[1535]: time="2025-09-16T04:31:52.226969384Z" level=info msg="connecting to shim 4828b3e9919d1ac89ae4197951fac7fd5739e3f33372a1d98016b93818372bc0" address="unix:///run/containerd/s/a589e71095ccd8e6f4a1bdbc5f51a36b0158691657c3d220568fa13503e67cd0" protocol=ttrpc version=3 Sep 16 04:31:52.254161 systemd[1]: Started cri-containerd-4828b3e9919d1ac89ae4197951fac7fd5739e3f33372a1d98016b93818372bc0.scope - libcontainer container 4828b3e9919d1ac89ae4197951fac7fd5739e3f33372a1d98016b93818372bc0. Sep 16 04:31:52.279544 systemd[1]: cri-containerd-4828b3e9919d1ac89ae4197951fac7fd5739e3f33372a1d98016b93818372bc0.scope: Deactivated successfully. Sep 16 04:31:52.280166 containerd[1535]: time="2025-09-16T04:31:52.280135685Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4828b3e9919d1ac89ae4197951fac7fd5739e3f33372a1d98016b93818372bc0\" id:\"4828b3e9919d1ac89ae4197951fac7fd5739e3f33372a1d98016b93818372bc0\" pid:3287 exited_at:{seconds:1757997112 nanos:279824337}" Sep 16 04:31:52.281213 containerd[1535]: time="2025-09-16T04:31:52.281176780Z" level=info msg="received exit event container_id:\"4828b3e9919d1ac89ae4197951fac7fd5739e3f33372a1d98016b93818372bc0\" id:\"4828b3e9919d1ac89ae4197951fac7fd5739e3f33372a1d98016b93818372bc0\" pid:3287 exited_at:{seconds:1757997112 nanos:279824337}" Sep 16 04:31:52.282873 containerd[1535]: time="2025-09-16T04:31:52.282785727Z" level=info msg="StartContainer for \"4828b3e9919d1ac89ae4197951fac7fd5739e3f33372a1d98016b93818372bc0\" returns successfully" Sep 16 04:31:52.299831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4828b3e9919d1ac89ae4197951fac7fd5739e3f33372a1d98016b93818372bc0-rootfs.mount: Deactivated successfully. 
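Each "connecting to shim ... address=unix:///run/containerd/s/..." entry records the CRI plugin dialing a per-sandbox ttrpc endpoint; the containers themselves live in containerd's k8s.io namespace. For inspecting the same state by hand, a hedged sketch using the containerd Go client (assumes the github.com/containerd/containerd v1 client API and the default /run/containerd/containerd.sock socket):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to the containerd instance that produced these log lines.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Kubernetes-managed containers are kept in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        containers, err := client.Containers(ctx)
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range containers {
            fmt.Println(c.ID()) // e.g. 4828b3e9919d1ac89ae4197951fac7fd5739e3f33372a1d98016b93818372bc0
        }
    }
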
Sep 16 04:31:53.208053 kubelet[2672]: E0916 04:31:53.207198 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:53.211279 containerd[1535]: time="2025-09-16T04:31:53.211150777Z" level=info msg="CreateContainer within sandbox \"5acc5ac80f1773e5dacdfa6c3d4d0515bc9bdb61093026e99923c58b24f50b2d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 16 04:31:53.226563 containerd[1535]: time="2025-09-16T04:31:53.226490633Z" level=info msg="Container 300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:31:53.233183 containerd[1535]: time="2025-09-16T04:31:53.233134731Z" level=info msg="CreateContainer within sandbox \"5acc5ac80f1773e5dacdfa6c3d4d0515bc9bdb61093026e99923c58b24f50b2d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82\"" Sep 16 04:31:53.233622 containerd[1535]: time="2025-09-16T04:31:53.233597971Z" level=info msg="StartContainer for \"300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82\"" Sep 16 04:31:53.243143 containerd[1535]: time="2025-09-16T04:31:53.243073676Z" level=info msg="connecting to shim 300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82" address="unix:///run/containerd/s/a589e71095ccd8e6f4a1bdbc5f51a36b0158691657c3d220568fa13503e67cd0" protocol=ttrpc version=3 Sep 16 04:31:53.271156 systemd[1]: Started cri-containerd-300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82.scope - libcontainer container 300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82. Sep 16 04:31:53.304330 containerd[1535]: time="2025-09-16T04:31:53.304293487Z" level=info msg="StartContainer for \"300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82\" returns successfully" Sep 16 04:31:53.392069 containerd[1535]: time="2025-09-16T04:31:53.392024926Z" level=info msg="TaskExit event in podsandbox handler container_id:\"300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82\" id:\"a02d96f37cebfc37530680bbe0c177e564fe6e7b6ff0d85940b11c3236421bb7\" pid:3358 exited_at:{seconds:1757997113 nanos:391702298}" Sep 16 04:31:53.434448 kubelet[2672]: I0916 04:31:53.434302 2672 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 16 04:31:53.471942 systemd[1]: Created slice kubepods-burstable-poda8eb9202_8bfe_49a3_bdc5_3b3723a2beb8.slice - libcontainer container kubepods-burstable-poda8eb9202_8bfe_49a3_bdc5_3b3723a2beb8.slice. Sep 16 04:31:53.479611 systemd[1]: Created slice kubepods-burstable-podb879316d_145a_4c2e_95e4_40c44b398fb6.slice - libcontainer container kubepods-burstable-podb879316d_145a_4c2e_95e4_40c44b398fb6.slice. 
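The "Created slice kubepods-burstable-pod....slice" entries show the systemd cgroup driver's per-pod slice naming: the QoS class plus the pod UID with dashes mapped to underscores (the same UIDs appear with dashes in the volume entries that follow). A small Go sketch of the observed mapping, not kubelet's actual implementation:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName reproduces the naming visible in the log:
    // kubepods-<qos>-pod<uid with dashes replaced by underscores>.slice
    func podSliceName(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSliceName("burstable", "a8eb9202-8bfe-49a3-bdc5-3b3723a2beb8"))
        // kubepods-burstable-poda8eb9202_8bfe_49a3_bdc5_3b3723a2beb8.slice
    }
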
Sep 16 04:31:53.537582 kubelet[2672]: I0916 04:31:53.537541 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49g5l\" (UniqueName: \"kubernetes.io/projected/a8eb9202-8bfe-49a3-bdc5-3b3723a2beb8-kube-api-access-49g5l\") pod \"coredns-668d6bf9bc-6lfrk\" (UID: \"a8eb9202-8bfe-49a3-bdc5-3b3723a2beb8\") " pod="kube-system/coredns-668d6bf9bc-6lfrk" Sep 16 04:31:53.537582 kubelet[2672]: I0916 04:31:53.537582 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr6ql\" (UniqueName: \"kubernetes.io/projected/b879316d-145a-4c2e-95e4-40c44b398fb6-kube-api-access-zr6ql\") pod \"coredns-668d6bf9bc-w2bjc\" (UID: \"b879316d-145a-4c2e-95e4-40c44b398fb6\") " pod="kube-system/coredns-668d6bf9bc-w2bjc" Sep 16 04:31:53.537742 kubelet[2672]: I0916 04:31:53.537614 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b879316d-145a-4c2e-95e4-40c44b398fb6-config-volume\") pod \"coredns-668d6bf9bc-w2bjc\" (UID: \"b879316d-145a-4c2e-95e4-40c44b398fb6\") " pod="kube-system/coredns-668d6bf9bc-w2bjc" Sep 16 04:31:53.537742 kubelet[2672]: I0916 04:31:53.537644 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8eb9202-8bfe-49a3-bdc5-3b3723a2beb8-config-volume\") pod \"coredns-668d6bf9bc-6lfrk\" (UID: \"a8eb9202-8bfe-49a3-bdc5-3b3723a2beb8\") " pod="kube-system/coredns-668d6bf9bc-6lfrk" Sep 16 04:31:53.778370 kubelet[2672]: E0916 04:31:53.778254 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:53.779472 containerd[1535]: time="2025-09-16T04:31:53.779376133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6lfrk,Uid:a8eb9202-8bfe-49a3-bdc5-3b3723a2beb8,Namespace:kube-system,Attempt:0,}" Sep 16 04:31:53.782598 kubelet[2672]: E0916 04:31:53.782570 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:53.783369 containerd[1535]: time="2025-09-16T04:31:53.783340238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w2bjc,Uid:b879316d-145a-4c2e-95e4-40c44b398fb6,Namespace:kube-system,Attempt:0,}" Sep 16 04:31:54.215513 kubelet[2672]: E0916 04:31:54.215488 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:55.217060 kubelet[2672]: E0916 04:31:55.217030 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:55.345043 systemd-networkd[1444]: cilium_host: Link UP Sep 16 04:31:55.345547 systemd-networkd[1444]: cilium_net: Link UP Sep 16 04:31:55.345690 systemd-networkd[1444]: cilium_host: Gained carrier Sep 16 04:31:55.345812 systemd-networkd[1444]: cilium_net: Gained carrier Sep 16 04:31:55.368174 systemd-networkd[1444]: cilium_host: Gained IPv6LL Sep 16 04:31:55.418632 systemd-networkd[1444]: cilium_vxlan: Link UP Sep 16 04:31:55.418638 systemd-networkd[1444]: cilium_vxlan: Gained carrier Sep 
16 04:31:55.570171 systemd-networkd[1444]: cilium_net: Gained IPv6LL Sep 16 04:31:55.677047 kernel: NET: Registered PF_ALG protocol family Sep 16 04:31:56.226215 kubelet[2672]: E0916 04:31:56.226087 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:56.234614 systemd-networkd[1444]: lxc_health: Link UP Sep 16 04:31:56.241286 systemd-networkd[1444]: lxc_health: Gained carrier Sep 16 04:31:56.832061 kernel: eth0: renamed from tmp2d77f Sep 16 04:31:56.833855 systemd-networkd[1444]: lxc59f13c86518b: Link UP Sep 16 04:31:56.847636 kernel: eth0: renamed from tmp54097 Sep 16 04:31:56.849320 systemd-networkd[1444]: lxccfa0c9e81496: Link UP Sep 16 04:31:56.849776 systemd-networkd[1444]: lxc59f13c86518b: Gained carrier Sep 16 04:31:56.851865 systemd-networkd[1444]: lxccfa0c9e81496: Gained carrier Sep 16 04:31:56.856227 kubelet[2672]: I0916 04:31:56.855645 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4c28d" podStartSLOduration=8.548705788 podStartE2EDuration="16.855624862s" podCreationTimestamp="2025-09-16 04:31:40 +0000 UTC" firstStartedPulling="2025-09-16 04:31:40.885086241 +0000 UTC m=+7.867367180" lastFinishedPulling="2025-09-16 04:31:49.192005275 +0000 UTC m=+16.174286254" observedRunningTime="2025-09-16 04:31:54.23237419 +0000 UTC m=+21.214655169" watchObservedRunningTime="2025-09-16 04:31:56.855624862 +0000 UTC m=+23.837905841" Sep 16 04:31:57.221460 kubelet[2672]: E0916 04:31:57.220985 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:31:57.394228 systemd-networkd[1444]: cilium_vxlan: Gained IPv6LL Sep 16 04:31:57.586169 systemd-networkd[1444]: lxc_health: Gained IPv6LL Sep 16 04:31:58.354168 systemd-networkd[1444]: lxccfa0c9e81496: Gained IPv6LL Sep 16 04:31:58.866539 systemd-networkd[1444]: lxc59f13c86518b: Gained IPv6LL Sep 16 04:32:00.260599 kubelet[2672]: I0916 04:32:00.260549 2672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:32:00.261396 kubelet[2672]: E0916 04:32:00.261373 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:32:00.425691 containerd[1535]: time="2025-09-16T04:32:00.425174926Z" level=info msg="connecting to shim 540978cc4b5e44d4a35172dff1b92bc5def846e031c1f3ea65b2872396a609a5" address="unix:///run/containerd/s/fce06fb48b47d2b241a644e07de77a6583d8c0461b01454bb31da4a7b649db8f" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:32:00.429573 containerd[1535]: time="2025-09-16T04:32:00.429536202Z" level=info msg="connecting to shim 2d77f669ca18ec18681b0e2fa9e1f21889ec327329b0c570cfbc4de022e5db2b" address="unix:///run/containerd/s/af109ace00e644b8109f11e0b2342731fbc80bea2e6b46ae55061855950ee3bd" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:32:00.451167 systemd[1]: Started cri-containerd-540978cc4b5e44d4a35172dff1b92bc5def846e031c1f3ea65b2872396a609a5.scope - libcontainer container 540978cc4b5e44d4a35172dff1b92bc5def846e031c1f3ea65b2872396a609a5. Sep 16 04:32:00.454032 systemd[1]: Started cri-containerd-2d77f669ca18ec18681b0e2fa9e1f21889ec327329b0c570cfbc4de022e5db2b.scope - libcontainer container 2d77f669ca18ec18681b0e2fa9e1f21889ec327329b0c570cfbc4de022e5db2b. 
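The kubelet dns.go:153 warning that recurs throughout this log means the node's resolv.conf lists more nameservers than the limit of three that resolvers honour; kubelet keeps the first three, which is why every occurrence reports the applied line "1.1.1.1 1.0.0.1 8.8.8.8". A simplified Go sketch of that clipping (the resolv.conf contents below are hypothetical, and kubelet's real parser in dns.go also handles search domains and options):

    package main

    import (
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // resolv.conf nameserver limit enforced by kubelet

    func applyNameserverLimit(resolvConf string) []string {
        var ns []string
        for _, line := range strings.Split(resolvConf, "\n") {
            fields := strings.Fields(line)
            if len(fields) >= 2 && fields[0] == "nameserver" {
                ns = append(ns, fields[1])
            }
        }
        if len(ns) > maxNameservers {
            ns = ns[:maxNameservers] // extras are dropped and the warning above is logged
        }
        return ns
    }

    func main() {
        // Hypothetical node resolv.conf with one nameserver too many.
        conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
        fmt.Println(applyNameserverLimit(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8]
    }
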
Sep 16 04:32:00.465043 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:32:00.465670 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:32:00.489189 containerd[1535]: time="2025-09-16T04:32:00.489148213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w2bjc,Uid:b879316d-145a-4c2e-95e4-40c44b398fb6,Namespace:kube-system,Attempt:0,} returns sandbox id \"540978cc4b5e44d4a35172dff1b92bc5def846e031c1f3ea65b2872396a609a5\"" Sep 16 04:32:00.492257 kubelet[2672]: E0916 04:32:00.492194 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:32:00.493132 containerd[1535]: time="2025-09-16T04:32:00.493096862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6lfrk,Uid:a8eb9202-8bfe-49a3-bdc5-3b3723a2beb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d77f669ca18ec18681b0e2fa9e1f21889ec327329b0c570cfbc4de022e5db2b\"" Sep 16 04:32:00.493644 kubelet[2672]: E0916 04:32:00.493622 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:32:00.495160 containerd[1535]: time="2025-09-16T04:32:00.495125911Z" level=info msg="CreateContainer within sandbox \"540978cc4b5e44d4a35172dff1b92bc5def846e031c1f3ea65b2872396a609a5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:32:00.496224 containerd[1535]: time="2025-09-16T04:32:00.496187898Z" level=info msg="CreateContainer within sandbox \"2d77f669ca18ec18681b0e2fa9e1f21889ec327329b0c570cfbc4de022e5db2b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:32:00.504014 containerd[1535]: time="2025-09-16T04:32:00.503862583Z" level=info msg="Container c5c94bfd0ed6823276ab490f70c73a805feabd2c9d322f003b9cf5428e8a4404: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:32:00.507231 containerd[1535]: time="2025-09-16T04:32:00.507200875Z" level=info msg="Container 21088bda7d2e127ee1f4b3dfa0a9000c03bc07e57313ecf8558488d67b32d737: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:32:00.511744 containerd[1535]: time="2025-09-16T04:32:00.511627395Z" level=info msg="CreateContainer within sandbox \"540978cc4b5e44d4a35172dff1b92bc5def846e031c1f3ea65b2872396a609a5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c5c94bfd0ed6823276ab490f70c73a805feabd2c9d322f003b9cf5428e8a4404\"" Sep 16 04:32:00.512282 containerd[1535]: time="2025-09-16T04:32:00.512245234Z" level=info msg="StartContainer for \"c5c94bfd0ed6823276ab490f70c73a805feabd2c9d322f003b9cf5428e8a4404\"" Sep 16 04:32:00.513350 containerd[1535]: time="2025-09-16T04:32:00.513325862Z" level=info msg="connecting to shim c5c94bfd0ed6823276ab490f70c73a805feabd2c9d322f003b9cf5428e8a4404" address="unix:///run/containerd/s/fce06fb48b47d2b241a644e07de77a6583d8c0461b01454bb31da4a7b649db8f" protocol=ttrpc version=3 Sep 16 04:32:00.515223 containerd[1535]: time="2025-09-16T04:32:00.515186980Z" level=info msg="CreateContainer within sandbox \"2d77f669ca18ec18681b0e2fa9e1f21889ec327329b0c570cfbc4de022e5db2b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"21088bda7d2e127ee1f4b3dfa0a9000c03bc07e57313ecf8558488d67b32d737\"" Sep 16 04:32:00.516642 containerd[1535]: 
time="2025-09-16T04:32:00.516603229Z" level=info msg="StartContainer for \"21088bda7d2e127ee1f4b3dfa0a9000c03bc07e57313ecf8558488d67b32d737\"" Sep 16 04:32:00.519032 containerd[1535]: time="2025-09-16T04:32:00.519004341Z" level=info msg="connecting to shim 21088bda7d2e127ee1f4b3dfa0a9000c03bc07e57313ecf8558488d67b32d737" address="unix:///run/containerd/s/af109ace00e644b8109f11e0b2342731fbc80bea2e6b46ae55061855950ee3bd" protocol=ttrpc version=3 Sep 16 04:32:00.537175 systemd[1]: Started cri-containerd-c5c94bfd0ed6823276ab490f70c73a805feabd2c9d322f003b9cf5428e8a4404.scope - libcontainer container c5c94bfd0ed6823276ab490f70c73a805feabd2c9d322f003b9cf5428e8a4404. Sep 16 04:32:00.540170 systemd[1]: Started cri-containerd-21088bda7d2e127ee1f4b3dfa0a9000c03bc07e57313ecf8558488d67b32d737.scope - libcontainer container 21088bda7d2e127ee1f4b3dfa0a9000c03bc07e57313ecf8558488d67b32d737. Sep 16 04:32:00.570732 containerd[1535]: time="2025-09-16T04:32:00.570693611Z" level=info msg="StartContainer for \"c5c94bfd0ed6823276ab490f70c73a805feabd2c9d322f003b9cf5428e8a4404\" returns successfully" Sep 16 04:32:00.574393 containerd[1535]: time="2025-09-16T04:32:00.573980539Z" level=info msg="StartContainer for \"21088bda7d2e127ee1f4b3dfa0a9000c03bc07e57313ecf8558488d67b32d737\" returns successfully" Sep 16 04:32:01.232841 kubelet[2672]: E0916 04:32:01.232813 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:32:01.234428 kubelet[2672]: E0916 04:32:01.234347 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:32:01.234428 kubelet[2672]: E0916 04:32:01.234376 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:32:01.245018 kubelet[2672]: I0916 04:32:01.244634 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-w2bjc" podStartSLOduration=21.244617608 podStartE2EDuration="21.244617608s" podCreationTimestamp="2025-09-16 04:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:32:01.243427176 +0000 UTC m=+28.225708155" watchObservedRunningTime="2025-09-16 04:32:01.244617608 +0000 UTC m=+28.226898587" Sep 16 04:32:01.256106 kubelet[2672]: I0916 04:32:01.255837 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6lfrk" podStartSLOduration=21.255817807 podStartE2EDuration="21.255817807s" podCreationTimestamp="2025-09-16 04:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:32:01.255064482 +0000 UTC m=+28.237345501" watchObservedRunningTime="2025-09-16 04:32:01.255817807 +0000 UTC m=+28.238098786" Sep 16 04:32:01.422533 systemd[1]: Started sshd@7-10.0.0.83:22-10.0.0.1:32774.service - OpenSSH per-connection server daemon (10.0.0.1:32774). 
Sep 16 04:32:01.485327 sshd[4010]: Accepted publickey for core from 10.0.0.1 port 32774 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:32:01.486212 sshd-session[4010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:32:01.490070 systemd-logind[1521]: New session 8 of user core. Sep 16 04:32:01.500117 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 16 04:32:01.620443 sshd[4013]: Connection closed by 10.0.0.1 port 32774 Sep 16 04:32:01.620917 sshd-session[4010]: pam_unix(sshd:session): session closed for user core Sep 16 04:32:01.624193 systemd[1]: sshd@7-10.0.0.83:22-10.0.0.1:32774.service: Deactivated successfully. Sep 16 04:32:01.625821 systemd[1]: session-8.scope: Deactivated successfully. Sep 16 04:32:01.626524 systemd-logind[1521]: Session 8 logged out. Waiting for processes to exit. Sep 16 04:32:01.627666 systemd-logind[1521]: Removed session 8. Sep 16 04:32:02.235523 kubelet[2672]: E0916 04:32:02.235495 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:32:03.237657 kubelet[2672]: E0916 04:32:03.237629 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:32:06.642604 systemd[1]: Started sshd@8-10.0.0.83:22-10.0.0.1:32778.service - OpenSSH per-connection server daemon (10.0.0.1:32778). Sep 16 04:32:06.708665 sshd[4030]: Accepted publickey for core from 10.0.0.1 port 32778 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:32:06.709692 sshd-session[4030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:32:06.713962 systemd-logind[1521]: New session 9 of user core. Sep 16 04:32:06.720233 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 16 04:32:06.832916 sshd[4033]: Connection closed by 10.0.0.1 port 32778 Sep 16 04:32:06.833255 sshd-session[4030]: pam_unix(sshd:session): session closed for user core Sep 16 04:32:06.837248 systemd[1]: sshd@8-10.0.0.83:22-10.0.0.1:32778.service: Deactivated successfully. Sep 16 04:32:06.839812 systemd[1]: session-9.scope: Deactivated successfully. Sep 16 04:32:06.841315 systemd-logind[1521]: Session 9 logged out. Waiting for processes to exit. Sep 16 04:32:06.842714 systemd-logind[1521]: Removed session 9. Sep 16 04:32:11.233818 kubelet[2672]: E0916 04:32:11.233744 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:32:11.265074 kubelet[2672]: E0916 04:32:11.264953 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:32:11.846768 systemd[1]: Started sshd@9-10.0.0.83:22-10.0.0.1:33280.service - OpenSSH per-connection server daemon (10.0.0.1:33280). Sep 16 04:32:11.891177 sshd[4054]: Accepted publickey for core from 10.0.0.1 port 33280 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:32:11.892531 sshd-session[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:32:11.896874 systemd-logind[1521]: New session 10 of user core. Sep 16 04:32:11.911197 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 16 04:32:12.033757 sshd[4057]: Connection closed by 10.0.0.1 port 33280 Sep 16 04:32:12.033261 sshd-session[4054]: pam_unix(sshd:session): session closed for user core Sep 16 04:32:12.041578 systemd[1]: sshd@9-10.0.0.83:22-10.0.0.1:33280.service: Deactivated successfully. Sep 16 04:32:12.043703 systemd[1]: session-10.scope: Deactivated successfully. Sep 16 04:32:12.044646 systemd-logind[1521]: Session 10 logged out. Waiting for processes to exit. Sep 16 04:32:12.047868 systemd[1]: Started sshd@10-10.0.0.83:22-10.0.0.1:33288.service - OpenSSH per-connection server daemon (10.0.0.1:33288). Sep 16 04:32:12.055529 systemd-logind[1521]: Removed session 10. Sep 16 04:32:12.100950 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 33288 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:32:12.102228 sshd-session[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:32:12.106220 systemd-logind[1521]: New session 11 of user core. Sep 16 04:32:12.119178 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 16 04:32:12.296364 sshd[4074]: Connection closed by 10.0.0.1 port 33288 Sep 16 04:32:12.297112 sshd-session[4071]: pam_unix(sshd:session): session closed for user core Sep 16 04:32:12.309406 systemd[1]: sshd@10-10.0.0.83:22-10.0.0.1:33288.service: Deactivated successfully. Sep 16 04:32:12.313166 systemd[1]: session-11.scope: Deactivated successfully. Sep 16 04:32:12.315867 systemd-logind[1521]: Session 11 logged out. Waiting for processes to exit. Sep 16 04:32:12.320563 systemd[1]: Started sshd@11-10.0.0.83:22-10.0.0.1:33300.service - OpenSSH per-connection server daemon (10.0.0.1:33300). Sep 16 04:32:12.322301 systemd-logind[1521]: Removed session 11. Sep 16 04:32:12.374978 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 33300 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:32:12.375956 sshd-session[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:32:12.379942 systemd-logind[1521]: New session 12 of user core. Sep 16 04:32:12.390200 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 16 04:32:12.507466 sshd[4089]: Connection closed by 10.0.0.1 port 33300 Sep 16 04:32:12.507823 sshd-session[4086]: pam_unix(sshd:session): session closed for user core Sep 16 04:32:12.511781 systemd[1]: sshd@11-10.0.0.83:22-10.0.0.1:33300.service: Deactivated successfully. Sep 16 04:32:12.513597 systemd[1]: session-12.scope: Deactivated successfully. Sep 16 04:32:12.514333 systemd-logind[1521]: Session 12 logged out. Waiting for processes to exit. Sep 16 04:32:12.515449 systemd-logind[1521]: Removed session 12. Sep 16 04:32:17.523793 systemd[1]: Started sshd@12-10.0.0.83:22-10.0.0.1:33312.service - OpenSSH per-connection server daemon (10.0.0.1:33312). Sep 16 04:32:17.592466 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 33312 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:32:17.594576 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:32:17.600117 systemd-logind[1521]: New session 13 of user core. Sep 16 04:32:17.611190 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 16 04:32:17.749897 sshd[4108]: Connection closed by 10.0.0.1 port 33312 Sep 16 04:32:17.751943 sshd-session[4105]: pam_unix(sshd:session): session closed for user core Sep 16 04:32:17.757854 systemd-logind[1521]: Session 13 logged out. Waiting for processes to exit. 
Sep 16 04:32:17.758229 systemd[1]: sshd@12-10.0.0.83:22-10.0.0.1:33312.service: Deactivated successfully. Sep 16 04:32:17.762779 systemd[1]: session-13.scope: Deactivated successfully. Sep 16 04:32:17.767641 systemd-logind[1521]: Removed session 13. Sep 16 04:32:22.762824 systemd[1]: Started sshd@13-10.0.0.83:22-10.0.0.1:59444.service - OpenSSH per-connection server daemon (10.0.0.1:59444). Sep 16 04:32:22.840807 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 59444 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:32:22.841714 sshd-session[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:32:22.846845 systemd-logind[1521]: New session 14 of user core. Sep 16 04:32:22.856278 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 16 04:32:23.002311 sshd[4124]: Connection closed by 10.0.0.1 port 59444 Sep 16 04:32:23.003371 sshd-session[4121]: pam_unix(sshd:session): session closed for user core Sep 16 04:32:23.015435 systemd[1]: sshd@13-10.0.0.83:22-10.0.0.1:59444.service: Deactivated successfully. Sep 16 04:32:23.018537 systemd[1]: session-14.scope: Deactivated successfully. Sep 16 04:32:23.019837 systemd-logind[1521]: Session 14 logged out. Waiting for processes to exit. Sep 16 04:32:23.023960 systemd[1]: Started sshd@14-10.0.0.83:22-10.0.0.1:59450.service - OpenSSH per-connection server daemon (10.0.0.1:59450). Sep 16 04:32:23.025105 systemd-logind[1521]: Removed session 14. Sep 16 04:32:23.076229 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 59450 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:32:23.077558 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:32:23.083083 systemd-logind[1521]: New session 15 of user core. Sep 16 04:32:23.093242 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 16 04:32:23.295830 sshd[4140]: Connection closed by 10.0.0.1 port 59450 Sep 16 04:32:23.296521 sshd-session[4137]: pam_unix(sshd:session): session closed for user core Sep 16 04:32:23.312107 systemd[1]: sshd@14-10.0.0.83:22-10.0.0.1:59450.service: Deactivated successfully. Sep 16 04:32:23.314390 systemd[1]: session-15.scope: Deactivated successfully. Sep 16 04:32:23.315458 systemd-logind[1521]: Session 15 logged out. Waiting for processes to exit. Sep 16 04:32:23.317580 systemd-logind[1521]: Removed session 15. Sep 16 04:32:23.318700 systemd[1]: Started sshd@15-10.0.0.83:22-10.0.0.1:59466.service - OpenSSH per-connection server daemon (10.0.0.1:59466). Sep 16 04:32:23.379517 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 59466 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:32:23.380680 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:32:23.384811 systemd-logind[1521]: New session 16 of user core. Sep 16 04:32:23.402186 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 16 04:32:24.035757 sshd[4154]: Connection closed by 10.0.0.1 port 59466 Sep 16 04:32:24.036386 sshd-session[4151]: pam_unix(sshd:session): session closed for user core Sep 16 04:32:24.043902 systemd[1]: sshd@15-10.0.0.83:22-10.0.0.1:59466.service: Deactivated successfully. Sep 16 04:32:24.051126 systemd[1]: session-16.scope: Deactivated successfully. Sep 16 04:32:24.054636 systemd-logind[1521]: Session 16 logged out. Waiting for processes to exit. 
Sep 16 04:32:24.059728 systemd[1]: Started sshd@16-10.0.0.83:22-10.0.0.1:59472.service - OpenSSH per-connection server daemon (10.0.0.1:59472). Sep 16 04:32:24.060437 systemd-logind[1521]: Removed session 16. Sep 16 04:32:24.114271 sshd[4173]: Accepted publickey for core from 10.0.0.1 port 59472 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:32:24.115445 sshd-session[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:32:24.123540 systemd-logind[1521]: New session 17 of user core. Sep 16 04:32:24.138174 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 16 04:32:24.367669 sshd[4176]: Connection closed by 10.0.0.1 port 59472 Sep 16 04:32:24.365875 sshd-session[4173]: pam_unix(sshd:session): session closed for user core Sep 16 04:32:24.377066 systemd[1]: sshd@16-10.0.0.83:22-10.0.0.1:59472.service: Deactivated successfully. Sep 16 04:32:24.380516 systemd[1]: session-17.scope: Deactivated successfully. Sep 16 04:32:24.381998 systemd-logind[1521]: Session 17 logged out. Waiting for processes to exit. Sep 16 04:32:24.384939 systemd[1]: Started sshd@17-10.0.0.83:22-10.0.0.1:59482.service - OpenSSH per-connection server daemon (10.0.0.1:59482). Sep 16 04:32:24.385788 systemd-logind[1521]: Removed session 17. Sep 16 04:32:24.442619 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 59482 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:32:24.444365 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:32:24.449043 systemd-logind[1521]: New session 18 of user core. Sep 16 04:32:24.459236 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 16 04:32:24.570525 sshd[4192]: Connection closed by 10.0.0.1 port 59482 Sep 16 04:32:24.570823 sshd-session[4188]: pam_unix(sshd:session): session closed for user core Sep 16 04:32:24.574212 systemd-logind[1521]: Session 18 logged out. Waiting for processes to exit. Sep 16 04:32:24.574432 systemd[1]: sshd@17-10.0.0.83:22-10.0.0.1:59482.service: Deactivated successfully. Sep 16 04:32:24.576109 systemd[1]: session-18.scope: Deactivated successfully. Sep 16 04:32:24.578479 systemd-logind[1521]: Removed session 18. Sep 16 04:32:29.594387 systemd[1]: Started sshd@18-10.0.0.83:22-10.0.0.1:59486.service - OpenSSH per-connection server daemon (10.0.0.1:59486). Sep 16 04:32:29.646770 sshd[4207]: Accepted publickey for core from 10.0.0.1 port 59486 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:32:29.647843 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:32:29.651832 systemd-logind[1521]: New session 19 of user core. Sep 16 04:32:29.656132 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 16 04:32:29.777846 kernel: hrtimer: interrupt took 12382507 ns Sep 16 04:32:29.784984 sshd[4210]: Connection closed by 10.0.0.1 port 59486 Sep 16 04:32:29.784907 sshd-session[4207]: pam_unix(sshd:session): session closed for user core Sep 16 04:32:29.788693 systemd-logind[1521]: Session 19 logged out. Waiting for processes to exit. Sep 16 04:32:29.788793 systemd[1]: sshd@18-10.0.0.83:22-10.0.0.1:59486.service: Deactivated successfully. Sep 16 04:32:29.790683 systemd[1]: session-19.scope: Deactivated successfully. Sep 16 04:32:29.792528 systemd-logind[1521]: Removed session 19. 
Sep 16 04:32:34.800945 systemd[1]: Started sshd@19-10.0.0.83:22-10.0.0.1:55068.service - OpenSSH per-connection server daemon (10.0.0.1:55068). Sep 16 04:32:34.885334 sshd[4225]: Accepted publickey for core from 10.0.0.1 port 55068 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:32:34.887178 sshd-session[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:32:34.891291 systemd-logind[1521]: New session 20 of user core. Sep 16 04:32:34.900172 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 16 04:32:35.021858 sshd[4228]: Connection closed by 10.0.0.1 port 55068 Sep 16 04:32:35.022194 sshd-session[4225]: pam_unix(sshd:session): session closed for user core Sep 16 04:32:35.026916 systemd[1]: sshd@19-10.0.0.83:22-10.0.0.1:55068.service: Deactivated successfully. Sep 16 04:32:35.028515 systemd[1]: session-20.scope: Deactivated successfully. Sep 16 04:32:35.031735 systemd-logind[1521]: Session 20 logged out. Waiting for processes to exit. Sep 16 04:32:35.032679 systemd-logind[1521]: Removed session 20. Sep 16 04:32:40.035124 systemd[1]: Started sshd@20-10.0.0.83:22-10.0.0.1:59398.service - OpenSSH per-connection server daemon (10.0.0.1:59398). Sep 16 04:32:40.102393 sshd[4241]: Accepted publickey for core from 10.0.0.1 port 59398 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:32:40.105092 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:32:40.111461 systemd-logind[1521]: New session 21 of user core. Sep 16 04:32:40.124705 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 16 04:32:40.243952 sshd[4244]: Connection closed by 10.0.0.1 port 59398 Sep 16 04:32:40.244459 sshd-session[4241]: pam_unix(sshd:session): session closed for user core Sep 16 04:32:40.252986 systemd[1]: sshd@20-10.0.0.83:22-10.0.0.1:59398.service: Deactivated successfully. Sep 16 04:32:40.256108 systemd[1]: session-21.scope: Deactivated successfully. Sep 16 04:32:40.257622 systemd-logind[1521]: Session 21 logged out. Waiting for processes to exit. Sep 16 04:32:40.262002 systemd[1]: Started sshd@21-10.0.0.83:22-10.0.0.1:59408.service - OpenSSH per-connection server daemon (10.0.0.1:59408). Sep 16 04:32:40.263949 systemd-logind[1521]: Removed session 21. Sep 16 04:32:40.309984 sshd[4258]: Accepted publickey for core from 10.0.0.1 port 59408 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:32:40.312884 sshd-session[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:32:40.318073 systemd-logind[1521]: New session 22 of user core. Sep 16 04:32:40.328155 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 16 04:32:42.476001 containerd[1535]: time="2025-09-16T04:32:42.475957170Z" level=info msg="StopContainer for \"3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221\" with timeout 30 (s)" Sep 16 04:32:42.476869 containerd[1535]: time="2025-09-16T04:32:42.476743154Z" level=info msg="Stop container \"3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221\" with signal terminated" Sep 16 04:32:42.490252 systemd[1]: cri-containerd-3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221.scope: Deactivated successfully. 
Sep 16 04:32:42.493077 containerd[1535]: time="2025-09-16T04:32:42.493022422Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221\" id:\"3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221\" pid:3217 exited_at:{seconds:1757997162 nanos:492240798}" Sep 16 04:32:42.493077 containerd[1535]: time="2025-09-16T04:32:42.493066821Z" level=info msg="received exit event container_id:\"3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221\" id:\"3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221\" pid:3217 exited_at:{seconds:1757997162 nanos:492240798}" Sep 16 04:32:42.504909 containerd[1535]: time="2025-09-16T04:32:42.504867140Z" level=info msg="TaskExit event in podsandbox handler container_id:\"300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82\" id:\"01cb05aa9d1821db6831d78b85e6d45308ac0e728226608e1fdffa8782140df2\" pid:4292 exited_at:{seconds:1757997162 nanos:504620545}" Sep 16 04:32:42.510947 containerd[1535]: time="2025-09-16T04:32:42.510911697Z" level=info msg="StopContainer for \"300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82\" with timeout 2 (s)" Sep 16 04:32:42.513022 containerd[1535]: time="2025-09-16T04:32:42.512962855Z" level=info msg="Stop container \"300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82\" with signal terminated" Sep 16 04:32:42.518382 containerd[1535]: time="2025-09-16T04:32:42.518260067Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 04:32:42.520510 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221-rootfs.mount: Deactivated successfully. Sep 16 04:32:42.522602 systemd-networkd[1444]: lxc_health: Link DOWN Sep 16 04:32:42.522950 systemd-networkd[1444]: lxc_health: Lost carrier Sep 16 04:32:42.530859 containerd[1535]: time="2025-09-16T04:32:42.530811091Z" level=info msg="StopContainer for \"3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221\" returns successfully" Sep 16 04:32:42.533910 containerd[1535]: time="2025-09-16T04:32:42.533861669Z" level=info msg="StopPodSandbox for \"c7b8dd33f968d30746313142669d78fd646704a71ffc8aa947b5d9e48fed97a3\"" Sep 16 04:32:42.533979 containerd[1535]: time="2025-09-16T04:32:42.533946627Z" level=info msg="Container to stop \"3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:32:42.541400 systemd[1]: cri-containerd-c7b8dd33f968d30746313142669d78fd646704a71ffc8aa947b5d9e48fed97a3.scope: Deactivated successfully. Sep 16 04:32:42.543104 containerd[1535]: time="2025-09-16T04:32:42.543065561Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c7b8dd33f968d30746313142669d78fd646704a71ffc8aa947b5d9e48fed97a3\" id:\"c7b8dd33f968d30746313142669d78fd646704a71ffc8aa947b5d9e48fed97a3\" pid:2921 exit_status:137 exited_at:{seconds:1757997162 nanos:542786767}" Sep 16 04:32:42.544414 systemd[1]: cri-containerd-300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82.scope: Deactivated successfully. 
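"StopContainer ... with timeout 30 (s)" followed by "Stop container ... with signal terminated" is the usual CRI stop sequence: send SIGTERM, wait up to the timeout, escalate to SIGKILL if the container is still running. A generic Go sketch of that escalation against an ordinary child process, as an illustration of the pattern rather than containerd's implementation:

    package main

    import (
        "log"
        "os/exec"
        "syscall"
        "time"
    )

    // stopGracefully sends SIGTERM, waits up to timeout for the process to exit,
    // then falls back to SIGKILL, mirroring the escalation CRI StopContainer performs.
    func stopGracefully(cmd *exec.Cmd, timeout time.Duration) error {
        if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
            return err
        }
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        select {
        case err := <-done:
            return err
        case <-time.After(timeout):
            return cmd.Process.Kill() // timeout expired: escalate to SIGKILL
        }
    }

    func main() {
        cmd := exec.Command("sleep", "60")
        if err := cmd.Start(); err != nil {
            log.Fatal(err)
        }
        if err := stopGracefully(cmd, 2*time.Second); err != nil {
            log.Printf("process stopped: %v", err) // "signal: terminated" is expected here
        }
    }
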
Sep 16 04:32:42.546126 systemd[1]: cri-containerd-300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82.scope: Consumed 6.186s CPU time, 121.4M memory peak, 128K read from disk, 14.2M written to disk. Sep 16 04:32:42.547705 containerd[1535]: time="2025-09-16T04:32:42.547627988Z" level=info msg="received exit event container_id:\"300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82\" id:\"300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82\" pid:3326 exited_at:{seconds:1757997162 nanos:547154638}" Sep 16 04:32:42.567872 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82-rootfs.mount: Deactivated successfully. Sep 16 04:32:42.574864 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7b8dd33f968d30746313142669d78fd646704a71ffc8aa947b5d9e48fed97a3-rootfs.mount: Deactivated successfully. Sep 16 04:32:42.578215 containerd[1535]: time="2025-09-16T04:32:42.578180645Z" level=info msg="shim disconnected" id=c7b8dd33f968d30746313142669d78fd646704a71ffc8aa947b5d9e48fed97a3 namespace=k8s.io Sep 16 04:32:42.578325 containerd[1535]: time="2025-09-16T04:32:42.578211444Z" level=warning msg="cleaning up after shim disconnected" id=c7b8dd33f968d30746313142669d78fd646704a71ffc8aa947b5d9e48fed97a3 namespace=k8s.io Sep 16 04:32:42.578325 containerd[1535]: time="2025-09-16T04:32:42.578244083Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 16 04:32:42.580268 containerd[1535]: time="2025-09-16T04:32:42.580234123Z" level=info msg="StopContainer for \"300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82\" returns successfully" Sep 16 04:32:42.580825 containerd[1535]: time="2025-09-16T04:32:42.580759792Z" level=info msg="StopPodSandbox for \"5acc5ac80f1773e5dacdfa6c3d4d0515bc9bdb61093026e99923c58b24f50b2d\"" Sep 16 04:32:42.581131 containerd[1535]: time="2025-09-16T04:32:42.580832551Z" level=info msg="Container to stop \"befe695a4ab6555f28b5959460782037f46db44a7e1941321309fbcee9eec98d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:32:42.581131 containerd[1535]: time="2025-09-16T04:32:42.580847510Z" level=info msg="Container to stop \"a987e2c123a8d4b3805d82d070a05c51d34fd11d69163399b9e91b2575977437\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:32:42.581131 containerd[1535]: time="2025-09-16T04:32:42.580856470Z" level=info msg="Container to stop \"0ff36b3a86d252c52317ba9d61d8452092e118df54f11e73b724b130b47e77d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:32:42.581131 containerd[1535]: time="2025-09-16T04:32:42.580865230Z" level=info msg="Container to stop \"4828b3e9919d1ac89ae4197951fac7fd5739e3f33372a1d98016b93818372bc0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:32:42.581131 containerd[1535]: time="2025-09-16T04:32:42.580873030Z" level=info msg="Container to stop \"300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:32:42.586175 systemd[1]: cri-containerd-5acc5ac80f1773e5dacdfa6c3d4d0515bc9bdb61093026e99923c58b24f50b2d.scope: Deactivated successfully. 
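The scope teardown above is summarised as "Consumed 6.186s CPU time, 121.4M memory peak, 128K read from disk, 14.2M written to disk"; systemd derives those figures from the unit's cgroup v2 controllers. A hedged Go sketch that dumps the equivalent raw counters for a given cgroup directory (assumes cgroup v2 mounted at /sys/fs/cgroup, memory.peak needs a reasonably recent kernel, and the path in main is only a placeholder):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // readCgroupStats prints the raw counters systemd summarises when a scope
    // such as cri-containerd-<id>.scope is torn down.
    func readCgroupStats(cgroupPath string) {
        for _, f := range []string{"cpu.stat", "memory.peak", "io.stat"} {
            data, err := os.ReadFile(filepath.Join(cgroupPath, f))
            if err != nil {
                fmt.Printf("%s: %v\n", f, err)
                continue
            }
            fmt.Printf("--- %s ---\n%s", f, data)
        }
    }

    func main() {
        // Placeholder path; the real scope lives under the pod's kubepods slice.
        readCgroupStats("/sys/fs/cgroup/system.slice")
    }
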
Sep 16 04:32:42.597019 containerd[1535]: time="2025-09-16T04:32:42.596957142Z" level=info msg="received exit event sandbox_id:\"c7b8dd33f968d30746313142669d78fd646704a71ffc8aa947b5d9e48fed97a3\" exit_status:137 exited_at:{seconds:1757997162 nanos:542786767}" Sep 16 04:32:42.598817 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c7b8dd33f968d30746313142669d78fd646704a71ffc8aa947b5d9e48fed97a3-shm.mount: Deactivated successfully. Sep 16 04:32:42.599512 containerd[1535]: time="2025-09-16T04:32:42.598063959Z" level=error msg="Failed to handle event container_id:\"c7b8dd33f968d30746313142669d78fd646704a71ffc8aa947b5d9e48fed97a3\" id:\"c7b8dd33f968d30746313142669d78fd646704a71ffc8aa947b5d9e48fed97a3\" pid:2921 exit_status:137 exited_at:{seconds:1757997162 nanos:542786767} for c7b8dd33f968d30746313142669d78fd646704a71ffc8aa947b5d9e48fed97a3" error="failed to handle container TaskExit event: failed to stop sandbox: failed to delete task: ttrpc: closed" Sep 16 04:32:42.599512 containerd[1535]: time="2025-09-16T04:32:42.599128577Z" level=info msg="TaskExit event in podsandbox handler container_id:\"300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82\" id:\"300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82\" pid:3326 exited_at:{seconds:1757997162 nanos:547154638}" Sep 16 04:32:42.599512 containerd[1535]: time="2025-09-16T04:32:42.599185816Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5acc5ac80f1773e5dacdfa6c3d4d0515bc9bdb61093026e99923c58b24f50b2d\" id:\"5acc5ac80f1773e5dacdfa6c3d4d0515bc9bdb61093026e99923c58b24f50b2d\" pid:2824 exit_status:137 exited_at:{seconds:1757997162 nanos:588027884}" Sep 16 04:32:42.599512 containerd[1535]: time="2025-09-16T04:32:42.599302734Z" level=info msg="TearDown network for sandbox \"c7b8dd33f968d30746313142669d78fd646704a71ffc8aa947b5d9e48fed97a3\" successfully" Sep 16 04:32:42.599512 containerd[1535]: time="2025-09-16T04:32:42.599335533Z" level=info msg="StopPodSandbox for \"c7b8dd33f968d30746313142669d78fd646704a71ffc8aa947b5d9e48fed97a3\" returns successfully" Sep 16 04:32:42.617923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5acc5ac80f1773e5dacdfa6c3d4d0515bc9bdb61093026e99923c58b24f50b2d-rootfs.mount: Deactivated successfully. 
Sep 16 04:32:42.634269 containerd[1535]: time="2025-09-16T04:32:42.634195262Z" level=info msg="received exit event sandbox_id:\"5acc5ac80f1773e5dacdfa6c3d4d0515bc9bdb61093026e99923c58b24f50b2d\" exit_status:137 exited_at:{seconds:1757997162 nanos:588027884}" Sep 16 04:32:42.634434 containerd[1535]: time="2025-09-16T04:32:42.634407178Z" level=info msg="TearDown network for sandbox \"5acc5ac80f1773e5dacdfa6c3d4d0515bc9bdb61093026e99923c58b24f50b2d\" successfully" Sep 16 04:32:42.634463 containerd[1535]: time="2025-09-16T04:32:42.634432737Z" level=info msg="StopPodSandbox for \"5acc5ac80f1773e5dacdfa6c3d4d0515bc9bdb61093026e99923c58b24f50b2d\" returns successfully" Sep 16 04:32:42.635865 containerd[1535]: time="2025-09-16T04:32:42.635805069Z" level=info msg="shim disconnected" id=5acc5ac80f1773e5dacdfa6c3d4d0515bc9bdb61093026e99923c58b24f50b2d namespace=k8s.io Sep 16 04:32:42.636204 containerd[1535]: time="2025-09-16T04:32:42.636055384Z" level=warning msg="cleaning up after shim disconnected" id=5acc5ac80f1773e5dacdfa6c3d4d0515bc9bdb61093026e99923c58b24f50b2d namespace=k8s.io Sep 16 04:32:42.636204 containerd[1535]: time="2025-09-16T04:32:42.636130183Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 16 04:32:42.743498 kubelet[2672]: I0916 04:32:42.743379 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-bpf-maps\") pod \"8c0c8707-79a9-4174-8420-7251c2494f83\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " Sep 16 04:32:42.744305 kubelet[2672]: I0916 04:32:42.743896 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-cni-path\") pod \"8c0c8707-79a9-4174-8420-7251c2494f83\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " Sep 16 04:32:42.744305 kubelet[2672]: I0916 04:32:42.743923 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-cilium-run\") pod \"8c0c8707-79a9-4174-8420-7251c2494f83\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " Sep 16 04:32:42.744305 kubelet[2672]: I0916 04:32:42.743941 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-lib-modules\") pod \"8c0c8707-79a9-4174-8420-7251c2494f83\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " Sep 16 04:32:42.744305 kubelet[2672]: I0916 04:32:42.743964 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klblc\" (UniqueName: \"kubernetes.io/projected/72cdc903-14e7-4d78-9259-398f38492d72-kube-api-access-klblc\") pod \"72cdc903-14e7-4d78-9259-398f38492d72\" (UID: \"72cdc903-14e7-4d78-9259-398f38492d72\") " Sep 16 04:32:42.744305 kubelet[2672]: I0916 04:32:42.743986 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-etc-cni-netd\") pod \"8c0c8707-79a9-4174-8420-7251c2494f83\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " Sep 16 04:32:42.744305 kubelet[2672]: I0916 04:32:42.744034 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-xtables-lock\") 
pod \"8c0c8707-79a9-4174-8420-7251c2494f83\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " Sep 16 04:32:42.745162 kubelet[2672]: I0916 04:32:42.744052 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-host-proc-sys-kernel\") pod \"8c0c8707-79a9-4174-8420-7251c2494f83\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " Sep 16 04:32:42.745162 kubelet[2672]: I0916 04:32:42.744067 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-cilium-cgroup\") pod \"8c0c8707-79a9-4174-8420-7251c2494f83\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " Sep 16 04:32:42.745162 kubelet[2672]: I0916 04:32:42.744083 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c0c8707-79a9-4174-8420-7251c2494f83-cilium-config-path\") pod \"8c0c8707-79a9-4174-8420-7251c2494f83\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " Sep 16 04:32:42.745162 kubelet[2672]: I0916 04:32:42.744103 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8c0c8707-79a9-4174-8420-7251c2494f83-hubble-tls\") pod \"8c0c8707-79a9-4174-8420-7251c2494f83\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " Sep 16 04:32:42.745162 kubelet[2672]: I0916 04:32:42.744123 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp2bb\" (UniqueName: \"kubernetes.io/projected/8c0c8707-79a9-4174-8420-7251c2494f83-kube-api-access-bp2bb\") pod \"8c0c8707-79a9-4174-8420-7251c2494f83\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " Sep 16 04:32:42.745162 kubelet[2672]: I0916 04:32:42.744139 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8c0c8707-79a9-4174-8420-7251c2494f83-clustermesh-secrets\") pod \"8c0c8707-79a9-4174-8420-7251c2494f83\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " Sep 16 04:32:42.745299 kubelet[2672]: I0916 04:32:42.744153 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-hostproc\") pod \"8c0c8707-79a9-4174-8420-7251c2494f83\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " Sep 16 04:32:42.745299 kubelet[2672]: I0916 04:32:42.744172 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72cdc903-14e7-4d78-9259-398f38492d72-cilium-config-path\") pod \"72cdc903-14e7-4d78-9259-398f38492d72\" (UID: \"72cdc903-14e7-4d78-9259-398f38492d72\") " Sep 16 04:32:42.745299 kubelet[2672]: I0916 04:32:42.744187 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-host-proc-sys-net\") pod \"8c0c8707-79a9-4174-8420-7251c2494f83\" (UID: \"8c0c8707-79a9-4174-8420-7251c2494f83\") " Sep 16 04:32:42.745977 kubelet[2672]: I0916 04:32:42.745676 2672 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod 
"8c0c8707-79a9-4174-8420-7251c2494f83" (UID: "8c0c8707-79a9-4174-8420-7251c2494f83"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:32:42.745977 kubelet[2672]: I0916 04:32:42.745684 2672 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8c0c8707-79a9-4174-8420-7251c2494f83" (UID: "8c0c8707-79a9-4174-8420-7251c2494f83"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:32:42.745977 kubelet[2672]: I0916 04:32:42.745688 2672 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8c0c8707-79a9-4174-8420-7251c2494f83" (UID: "8c0c8707-79a9-4174-8420-7251c2494f83"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:32:42.745977 kubelet[2672]: I0916 04:32:42.745727 2672 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-cni-path" (OuterVolumeSpecName: "cni-path") pod "8c0c8707-79a9-4174-8420-7251c2494f83" (UID: "8c0c8707-79a9-4174-8420-7251c2494f83"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:32:42.745977 kubelet[2672]: I0916 04:32:42.745744 2672 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8c0c8707-79a9-4174-8420-7251c2494f83" (UID: "8c0c8707-79a9-4174-8420-7251c2494f83"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:32:42.746153 kubelet[2672]: I0916 04:32:42.745749 2672 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8c0c8707-79a9-4174-8420-7251c2494f83" (UID: "8c0c8707-79a9-4174-8420-7251c2494f83"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:32:42.746153 kubelet[2672]: I0916 04:32:42.745765 2672 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8c0c8707-79a9-4174-8420-7251c2494f83" (UID: "8c0c8707-79a9-4174-8420-7251c2494f83"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:32:42.746153 kubelet[2672]: I0916 04:32:42.745768 2672 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8c0c8707-79a9-4174-8420-7251c2494f83" (UID: "8c0c8707-79a9-4174-8420-7251c2494f83"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:32:42.746153 kubelet[2672]: I0916 04:32:42.745684 2672 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8c0c8707-79a9-4174-8420-7251c2494f83" (UID: "8c0c8707-79a9-4174-8420-7251c2494f83"). 
InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:32:42.746153 kubelet[2672]: I0916 04:32:42.745789 2672 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-hostproc" (OuterVolumeSpecName: "hostproc") pod "8c0c8707-79a9-4174-8420-7251c2494f83" (UID: "8c0c8707-79a9-4174-8420-7251c2494f83"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:32:42.747456 kubelet[2672]: I0916 04:32:42.747423 2672 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c0c8707-79a9-4174-8420-7251c2494f83-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8c0c8707-79a9-4174-8420-7251c2494f83" (UID: "8c0c8707-79a9-4174-8420-7251c2494f83"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 16 04:32:42.757037 kubelet[2672]: I0916 04:32:42.756979 2672 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72cdc903-14e7-4d78-9259-398f38492d72-kube-api-access-klblc" (OuterVolumeSpecName: "kube-api-access-klblc") pod "72cdc903-14e7-4d78-9259-398f38492d72" (UID: "72cdc903-14e7-4d78-9259-398f38492d72"). InnerVolumeSpecName "kube-api-access-klblc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 16 04:32:42.757133 kubelet[2672]: I0916 04:32:42.757041 2672 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c0c8707-79a9-4174-8420-7251c2494f83-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8c0c8707-79a9-4174-8420-7251c2494f83" (UID: "8c0c8707-79a9-4174-8420-7251c2494f83"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 16 04:32:42.758116 kubelet[2672]: I0916 04:32:42.758079 2672 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c0c8707-79a9-4174-8420-7251c2494f83-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8c0c8707-79a9-4174-8420-7251c2494f83" (UID: "8c0c8707-79a9-4174-8420-7251c2494f83"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 16 04:32:42.758517 kubelet[2672]: I0916 04:32:42.758466 2672 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72cdc903-14e7-4d78-9259-398f38492d72-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "72cdc903-14e7-4d78-9259-398f38492d72" (UID: "72cdc903-14e7-4d78-9259-398f38492d72"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 16 04:32:42.758916 kubelet[2672]: I0916 04:32:42.758897 2672 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c0c8707-79a9-4174-8420-7251c2494f83-kube-api-access-bp2bb" (OuterVolumeSpecName: "kube-api-access-bp2bb") pod "8c0c8707-79a9-4174-8420-7251c2494f83" (UID: "8c0c8707-79a9-4174-8420-7251c2494f83"). InnerVolumeSpecName "kube-api-access-bp2bb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 16 04:32:42.845013 kubelet[2672]: I0916 04:32:42.844953 2672 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 16 04:32:42.845013 kubelet[2672]: I0916 04:32:42.845019 2672 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 16 04:32:42.845211 kubelet[2672]: I0916 04:32:42.845033 2672 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c0c8707-79a9-4174-8420-7251c2494f83-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 16 04:32:42.845211 kubelet[2672]: I0916 04:32:42.845044 2672 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 16 04:32:42.845211 kubelet[2672]: I0916 04:32:42.845052 2672 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 16 04:32:42.845211 kubelet[2672]: I0916 04:32:42.845065 2672 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bp2bb\" (UniqueName: \"kubernetes.io/projected/8c0c8707-79a9-4174-8420-7251c2494f83-kube-api-access-bp2bb\") on node \"localhost\" DevicePath \"\"" Sep 16 04:32:42.845211 kubelet[2672]: I0916 04:32:42.845073 2672 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8c0c8707-79a9-4174-8420-7251c2494f83-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 16 04:32:42.845211 kubelet[2672]: I0916 04:32:42.845081 2672 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72cdc903-14e7-4d78-9259-398f38492d72-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 16 04:32:42.845211 kubelet[2672]: I0916 04:32:42.845088 2672 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 16 04:32:42.845211 kubelet[2672]: I0916 04:32:42.845096 2672 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8c0c8707-79a9-4174-8420-7251c2494f83-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 16 04:32:42.845392 kubelet[2672]: I0916 04:32:42.845104 2672 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 16 04:32:42.845392 kubelet[2672]: I0916 04:32:42.845112 2672 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 16 04:32:42.845392 kubelet[2672]: I0916 04:32:42.845119 2672 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-cni-path\") on node 
\"localhost\" DevicePath \"\"" Sep 16 04:32:42.845392 kubelet[2672]: I0916 04:32:42.845127 2672 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 16 04:32:42.845392 kubelet[2672]: I0916 04:32:42.845134 2672 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c0c8707-79a9-4174-8420-7251c2494f83-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 16 04:32:42.845392 kubelet[2672]: I0916 04:32:42.845142 2672 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-klblc\" (UniqueName: \"kubernetes.io/projected/72cdc903-14e7-4d78-9259-398f38492d72-kube-api-access-klblc\") on node \"localhost\" DevicePath \"\"" Sep 16 04:32:43.111497 systemd[1]: Removed slice kubepods-burstable-pod8c0c8707_79a9_4174_8420_7251c2494f83.slice - libcontainer container kubepods-burstable-pod8c0c8707_79a9_4174_8420_7251c2494f83.slice. Sep 16 04:32:43.111614 systemd[1]: kubepods-burstable-pod8c0c8707_79a9_4174_8420_7251c2494f83.slice: Consumed 6.289s CPU time, 121.8M memory peak, 8.3M read from disk, 14.2M written to disk. Sep 16 04:32:43.113257 systemd[1]: Removed slice kubepods-besteffort-pod72cdc903_14e7_4d78_9259_398f38492d72.slice - libcontainer container kubepods-besteffort-pod72cdc903_14e7_4d78_9259_398f38492d72.slice. Sep 16 04:32:43.152603 kubelet[2672]: E0916 04:32:43.152545 2672 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 16 04:32:43.339825 kubelet[2672]: I0916 04:32:43.339716 2672 scope.go:117] "RemoveContainer" containerID="300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82" Sep 16 04:32:43.343177 containerd[1535]: time="2025-09-16T04:32:43.343139280Z" level=info msg="RemoveContainer for \"300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82\"" Sep 16 04:32:43.350551 containerd[1535]: time="2025-09-16T04:32:43.350438859Z" level=info msg="RemoveContainer for \"300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82\" returns successfully" Sep 16 04:32:43.350874 kubelet[2672]: I0916 04:32:43.350840 2672 scope.go:117] "RemoveContainer" containerID="4828b3e9919d1ac89ae4197951fac7fd5739e3f33372a1d98016b93818372bc0" Sep 16 04:32:43.354327 containerd[1535]: time="2025-09-16T04:32:43.354292745Z" level=info msg="RemoveContainer for \"4828b3e9919d1ac89ae4197951fac7fd5739e3f33372a1d98016b93818372bc0\"" Sep 16 04:32:43.358497 containerd[1535]: time="2025-09-16T04:32:43.358464905Z" level=info msg="RemoveContainer for \"4828b3e9919d1ac89ae4197951fac7fd5739e3f33372a1d98016b93818372bc0\" returns successfully" Sep 16 04:32:43.358681 kubelet[2672]: I0916 04:32:43.358657 2672 scope.go:117] "RemoveContainer" containerID="0ff36b3a86d252c52317ba9d61d8452092e118df54f11e73b724b130b47e77d0" Sep 16 04:32:43.363378 containerd[1535]: time="2025-09-16T04:32:43.363304052Z" level=info msg="RemoveContainer for \"0ff36b3a86d252c52317ba9d61d8452092e118df54f11e73b724b130b47e77d0\"" Sep 16 04:32:43.368768 containerd[1535]: time="2025-09-16T04:32:43.368720388Z" level=info msg="RemoveContainer for \"0ff36b3a86d252c52317ba9d61d8452092e118df54f11e73b724b130b47e77d0\" returns successfully" Sep 16 04:32:43.369035 kubelet[2672]: I0916 04:32:43.368911 2672 scope.go:117] "RemoveContainer" 
containerID="a987e2c123a8d4b3805d82d070a05c51d34fd11d69163399b9e91b2575977437" Sep 16 04:32:43.370223 containerd[1535]: time="2025-09-16T04:32:43.370196079Z" level=info msg="RemoveContainer for \"a987e2c123a8d4b3805d82d070a05c51d34fd11d69163399b9e91b2575977437\"" Sep 16 04:32:43.372987 containerd[1535]: time="2025-09-16T04:32:43.372918507Z" level=info msg="RemoveContainer for \"a987e2c123a8d4b3805d82d070a05c51d34fd11d69163399b9e91b2575977437\" returns successfully" Sep 16 04:32:43.373169 kubelet[2672]: I0916 04:32:43.373144 2672 scope.go:117] "RemoveContainer" containerID="befe695a4ab6555f28b5959460782037f46db44a7e1941321309fbcee9eec98d" Sep 16 04:32:43.374513 containerd[1535]: time="2025-09-16T04:32:43.374489797Z" level=info msg="RemoveContainer for \"befe695a4ab6555f28b5959460782037f46db44a7e1941321309fbcee9eec98d\"" Sep 16 04:32:43.377203 containerd[1535]: time="2025-09-16T04:32:43.377172505Z" level=info msg="RemoveContainer for \"befe695a4ab6555f28b5959460782037f46db44a7e1941321309fbcee9eec98d\" returns successfully" Sep 16 04:32:43.377346 kubelet[2672]: I0916 04:32:43.377321 2672 scope.go:117] "RemoveContainer" containerID="300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82" Sep 16 04:32:43.377620 containerd[1535]: time="2025-09-16T04:32:43.377586817Z" level=error msg="ContainerStatus for \"300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82\": not found" Sep 16 04:32:43.377744 kubelet[2672]: E0916 04:32:43.377722 2672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82\": not found" containerID="300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82" Sep 16 04:32:43.382141 kubelet[2672]: I0916 04:32:43.382029 2672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82"} err="failed to get container status \"300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82\": rpc error: code = NotFound desc = an error occurred when try to find container \"300044dc853613b0fe18353b6c20a92f8c2ed4fcffec277cb0226c6ebe684e82\": not found" Sep 16 04:32:43.382185 kubelet[2672]: I0916 04:32:43.382147 2672 scope.go:117] "RemoveContainer" containerID="4828b3e9919d1ac89ae4197951fac7fd5739e3f33372a1d98016b93818372bc0" Sep 16 04:32:43.382396 containerd[1535]: time="2025-09-16T04:32:43.382362725Z" level=error msg="ContainerStatus for \"4828b3e9919d1ac89ae4197951fac7fd5739e3f33372a1d98016b93818372bc0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4828b3e9919d1ac89ae4197951fac7fd5739e3f33372a1d98016b93818372bc0\": not found" Sep 16 04:32:43.382550 kubelet[2672]: E0916 04:32:43.382516 2672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4828b3e9919d1ac89ae4197951fac7fd5739e3f33372a1d98016b93818372bc0\": not found" containerID="4828b3e9919d1ac89ae4197951fac7fd5739e3f33372a1d98016b93818372bc0" Sep 16 04:32:43.382597 kubelet[2672]: I0916 04:32:43.382548 2672 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"4828b3e9919d1ac89ae4197951fac7fd5739e3f33372a1d98016b93818372bc0"} err="failed to get container status \"4828b3e9919d1ac89ae4197951fac7fd5739e3f33372a1d98016b93818372bc0\": rpc error: code = NotFound desc = an error occurred when try to find container \"4828b3e9919d1ac89ae4197951fac7fd5739e3f33372a1d98016b93818372bc0\": not found" Sep 16 04:32:43.382597 kubelet[2672]: I0916 04:32:43.382566 2672 scope.go:117] "RemoveContainer" containerID="0ff36b3a86d252c52317ba9d61d8452092e118df54f11e73b724b130b47e77d0" Sep 16 04:32:43.382753 containerd[1535]: time="2025-09-16T04:32:43.382725798Z" level=error msg="ContainerStatus for \"0ff36b3a86d252c52317ba9d61d8452092e118df54f11e73b724b130b47e77d0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ff36b3a86d252c52317ba9d61d8452092e118df54f11e73b724b130b47e77d0\": not found" Sep 16 04:32:43.382898 kubelet[2672]: E0916 04:32:43.382873 2672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ff36b3a86d252c52317ba9d61d8452092e118df54f11e73b724b130b47e77d0\": not found" containerID="0ff36b3a86d252c52317ba9d61d8452092e118df54f11e73b724b130b47e77d0" Sep 16 04:32:43.382975 kubelet[2672]: I0916 04:32:43.382954 2672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ff36b3a86d252c52317ba9d61d8452092e118df54f11e73b724b130b47e77d0"} err="failed to get container status \"0ff36b3a86d252c52317ba9d61d8452092e118df54f11e73b724b130b47e77d0\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ff36b3a86d252c52317ba9d61d8452092e118df54f11e73b724b130b47e77d0\": not found" Sep 16 04:32:43.383134 kubelet[2672]: I0916 04:32:43.383018 2672 scope.go:117] "RemoveContainer" containerID="a987e2c123a8d4b3805d82d070a05c51d34fd11d69163399b9e91b2575977437" Sep 16 04:32:43.383223 containerd[1535]: time="2025-09-16T04:32:43.383187790Z" level=error msg="ContainerStatus for \"a987e2c123a8d4b3805d82d070a05c51d34fd11d69163399b9e91b2575977437\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a987e2c123a8d4b3805d82d070a05c51d34fd11d69163399b9e91b2575977437\": not found" Sep 16 04:32:43.383328 kubelet[2672]: E0916 04:32:43.383309 2672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a987e2c123a8d4b3805d82d070a05c51d34fd11d69163399b9e91b2575977437\": not found" containerID="a987e2c123a8d4b3805d82d070a05c51d34fd11d69163399b9e91b2575977437" Sep 16 04:32:43.383366 kubelet[2672]: I0916 04:32:43.383334 2672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a987e2c123a8d4b3805d82d070a05c51d34fd11d69163399b9e91b2575977437"} err="failed to get container status \"a987e2c123a8d4b3805d82d070a05c51d34fd11d69163399b9e91b2575977437\": rpc error: code = NotFound desc = an error occurred when try to find container \"a987e2c123a8d4b3805d82d070a05c51d34fd11d69163399b9e91b2575977437\": not found" Sep 16 04:32:43.383366 kubelet[2672]: I0916 04:32:43.383351 2672 scope.go:117] "RemoveContainer" containerID="befe695a4ab6555f28b5959460782037f46db44a7e1941321309fbcee9eec98d" Sep 16 04:32:43.383515 containerd[1535]: time="2025-09-16T04:32:43.383489024Z" level=error msg="ContainerStatus for \"befe695a4ab6555f28b5959460782037f46db44a7e1941321309fbcee9eec98d\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"befe695a4ab6555f28b5959460782037f46db44a7e1941321309fbcee9eec98d\": not found" Sep 16 04:32:43.383629 kubelet[2672]: E0916 04:32:43.383610 2672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"befe695a4ab6555f28b5959460782037f46db44a7e1941321309fbcee9eec98d\": not found" containerID="befe695a4ab6555f28b5959460782037f46db44a7e1941321309fbcee9eec98d" Sep 16 04:32:43.383668 kubelet[2672]: I0916 04:32:43.383633 2672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"befe695a4ab6555f28b5959460782037f46db44a7e1941321309fbcee9eec98d"} err="failed to get container status \"befe695a4ab6555f28b5959460782037f46db44a7e1941321309fbcee9eec98d\": rpc error: code = NotFound desc = an error occurred when try to find container \"befe695a4ab6555f28b5959460782037f46db44a7e1941321309fbcee9eec98d\": not found" Sep 16 04:32:43.383668 kubelet[2672]: I0916 04:32:43.383649 2672 scope.go:117] "RemoveContainer" containerID="3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221" Sep 16 04:32:43.385065 containerd[1535]: time="2025-09-16T04:32:43.385011234Z" level=info msg="RemoveContainer for \"3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221\"" Sep 16 04:32:43.387680 containerd[1535]: time="2025-09-16T04:32:43.387653744Z" level=info msg="RemoveContainer for \"3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221\" returns successfully" Sep 16 04:32:43.387937 kubelet[2672]: I0916 04:32:43.387866 2672 scope.go:117] "RemoveContainer" containerID="3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221" Sep 16 04:32:43.388162 containerd[1535]: time="2025-09-16T04:32:43.388132614Z" level=error msg="ContainerStatus for \"3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221\": not found" Sep 16 04:32:43.388334 kubelet[2672]: E0916 04:32:43.388278 2672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221\": not found" containerID="3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221" Sep 16 04:32:43.388375 kubelet[2672]: I0916 04:32:43.388340 2672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221"} err="failed to get container status \"3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221\": rpc error: code = NotFound desc = an error occurred when try to find container \"3223cf85a7075258de793ff7982bd5d53418b994579fd61f8302aeaa0256b221\": not found" Sep 16 04:32:43.519540 systemd[1]: var-lib-kubelet-pods-72cdc903\x2d14e7\x2d4d78\x2d9259\x2d398f38492d72-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dklblc.mount: Deactivated successfully. Sep 16 04:32:43.519644 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5acc5ac80f1773e5dacdfa6c3d4d0515bc9bdb61093026e99923c58b24f50b2d-shm.mount: Deactivated successfully. 
Sep 16 04:32:43.519697 systemd[1]: var-lib-kubelet-pods-8c0c8707\x2d79a9\x2d4174\x2d8420\x2d7251c2494f83-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbp2bb.mount: Deactivated successfully. Sep 16 04:32:43.519750 systemd[1]: var-lib-kubelet-pods-8c0c8707\x2d79a9\x2d4174\x2d8420\x2d7251c2494f83-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 16 04:32:43.519800 systemd[1]: var-lib-kubelet-pods-8c0c8707\x2d79a9\x2d4174\x2d8420\x2d7251c2494f83-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 16 04:32:44.424895 sshd[4261]: Connection closed by 10.0.0.1 port 59408 Sep 16 04:32:44.425458 sshd-session[4258]: pam_unix(sshd:session): session closed for user core Sep 16 04:32:44.434095 systemd[1]: sshd@21-10.0.0.83:22-10.0.0.1:59408.service: Deactivated successfully. Sep 16 04:32:44.435740 systemd[1]: session-22.scope: Deactivated successfully. Sep 16 04:32:44.437047 systemd[1]: session-22.scope: Consumed 1.456s CPU time, 23.6M memory peak. Sep 16 04:32:44.437482 systemd-logind[1521]: Session 22 logged out. Waiting for processes to exit. Sep 16 04:32:44.440049 systemd[1]: Started sshd@22-10.0.0.83:22-10.0.0.1:59420.service - OpenSSH per-connection server daemon (10.0.0.1:59420). Sep 16 04:32:44.440592 systemd-logind[1521]: Removed session 22. Sep 16 04:32:44.504549 sshd[4415]: Accepted publickey for core from 10.0.0.1 port 59420 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:32:44.505850 sshd-session[4415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:32:44.510072 systemd-logind[1521]: New session 23 of user core. Sep 16 04:32:44.516157 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 16 04:32:44.558864 containerd[1535]: time="2025-09-16T04:32:44.558788410Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c7b8dd33f968d30746313142669d78fd646704a71ffc8aa947b5d9e48fed97a3\" id:\"c7b8dd33f968d30746313142669d78fd646704a71ffc8aa947b5d9e48fed97a3\" pid:2921 exit_status:137 exited_at:{seconds:1757997162 nanos:542786767}" Sep 16 04:32:44.652019 kubelet[2672]: I0916 04:32:44.650924 2672 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-16T04:32:44Z","lastTransitionTime":"2025-09-16T04:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 16 04:32:45.107835 kubelet[2672]: I0916 04:32:45.107566 2672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72cdc903-14e7-4d78-9259-398f38492d72" path="/var/lib/kubelet/pods/72cdc903-14e7-4d78-9259-398f38492d72/volumes" Sep 16 04:32:45.108086 kubelet[2672]: I0916 04:32:45.107979 2672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c0c8707-79a9-4174-8420-7251c2494f83" path="/var/lib/kubelet/pods/8c0c8707-79a9-4174-8420-7251c2494f83/volumes" Sep 16 04:32:45.615630 sshd[4418]: Connection closed by 10.0.0.1 port 59420 Sep 16 04:32:45.616090 sshd-session[4415]: pam_unix(sshd:session): session closed for user core Sep 16 04:32:45.638289 systemd[1]: sshd@22-10.0.0.83:22-10.0.0.1:59420.service: Deactivated successfully. Sep 16 04:32:45.642976 systemd[1]: session-23.scope: Deactivated successfully. Sep 16 04:32:45.644226 systemd[1]: session-23.scope: Consumed 1.015s CPU time, 26.2M memory peak. 
Sep 16 04:32:45.645625 systemd-logind[1521]: Session 23 logged out. Waiting for processes to exit. Sep 16 04:32:45.647033 kubelet[2672]: I0916 04:32:45.646549 2672 memory_manager.go:355] "RemoveStaleState removing state" podUID="72cdc903-14e7-4d78-9259-398f38492d72" containerName="cilium-operator" Sep 16 04:32:45.647741 kubelet[2672]: I0916 04:32:45.647132 2672 memory_manager.go:355] "RemoveStaleState removing state" podUID="8c0c8707-79a9-4174-8420-7251c2494f83" containerName="cilium-agent" Sep 16 04:32:45.651798 systemd[1]: Started sshd@23-10.0.0.83:22-10.0.0.1:59422.service - OpenSSH per-connection server daemon (10.0.0.1:59422). Sep 16 04:32:45.655323 systemd-logind[1521]: Removed session 23. Sep 16 04:32:45.665679 systemd[1]: Created slice kubepods-burstable-pod8a855ce0_1495_466e_8269_4240495b3479.slice - libcontainer container kubepods-burstable-pod8a855ce0_1495_466e_8269_4240495b3479.slice. Sep 16 04:32:45.707524 sshd[4430]: Accepted publickey for core from 10.0.0.1 port 59422 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:32:45.708647 sshd-session[4430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:32:45.712376 systemd-logind[1521]: New session 24 of user core. Sep 16 04:32:45.722199 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 16 04:32:45.761310 kubelet[2672]: I0916 04:32:45.761265 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a855ce0-1495-466e-8269-4240495b3479-lib-modules\") pod \"cilium-2kzmb\" (UID: \"8a855ce0-1495-466e-8269-4240495b3479\") " pod="kube-system/cilium-2kzmb" Sep 16 04:32:45.761310 kubelet[2672]: I0916 04:32:45.761307 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8a855ce0-1495-466e-8269-4240495b3479-host-proc-sys-net\") pod \"cilium-2kzmb\" (UID: \"8a855ce0-1495-466e-8269-4240495b3479\") " pod="kube-system/cilium-2kzmb" Sep 16 04:32:45.761698 kubelet[2672]: I0916 04:32:45.761334 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a855ce0-1495-466e-8269-4240495b3479-xtables-lock\") pod \"cilium-2kzmb\" (UID: \"8a855ce0-1495-466e-8269-4240495b3479\") " pod="kube-system/cilium-2kzmb" Sep 16 04:32:45.761698 kubelet[2672]: I0916 04:32:45.761377 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8a855ce0-1495-466e-8269-4240495b3479-cilium-ipsec-secrets\") pod \"cilium-2kzmb\" (UID: \"8a855ce0-1495-466e-8269-4240495b3479\") " pod="kube-system/cilium-2kzmb" Sep 16 04:32:45.761698 kubelet[2672]: I0916 04:32:45.761416 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8a855ce0-1495-466e-8269-4240495b3479-bpf-maps\") pod \"cilium-2kzmb\" (UID: \"8a855ce0-1495-466e-8269-4240495b3479\") " pod="kube-system/cilium-2kzmb" Sep 16 04:32:45.761698 kubelet[2672]: I0916 04:32:45.761436 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a855ce0-1495-466e-8269-4240495b3479-cilium-config-path\") pod \"cilium-2kzmb\" (UID: 
\"8a855ce0-1495-466e-8269-4240495b3479\") " pod="kube-system/cilium-2kzmb" Sep 16 04:32:45.761698 kubelet[2672]: I0916 04:32:45.761454 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8a855ce0-1495-466e-8269-4240495b3479-cni-path\") pod \"cilium-2kzmb\" (UID: \"8a855ce0-1495-466e-8269-4240495b3479\") " pod="kube-system/cilium-2kzmb" Sep 16 04:32:45.761698 kubelet[2672]: I0916 04:32:45.761477 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8a855ce0-1495-466e-8269-4240495b3479-clustermesh-secrets\") pod \"cilium-2kzmb\" (UID: \"8a855ce0-1495-466e-8269-4240495b3479\") " pod="kube-system/cilium-2kzmb" Sep 16 04:32:45.761824 kubelet[2672]: I0916 04:32:45.761499 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8a855ce0-1495-466e-8269-4240495b3479-hubble-tls\") pod \"cilium-2kzmb\" (UID: \"8a855ce0-1495-466e-8269-4240495b3479\") " pod="kube-system/cilium-2kzmb" Sep 16 04:32:45.761824 kubelet[2672]: I0916 04:32:45.761533 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8a855ce0-1495-466e-8269-4240495b3479-cilium-run\") pod \"cilium-2kzmb\" (UID: \"8a855ce0-1495-466e-8269-4240495b3479\") " pod="kube-system/cilium-2kzmb" Sep 16 04:32:45.761824 kubelet[2672]: I0916 04:32:45.761548 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8a855ce0-1495-466e-8269-4240495b3479-hostproc\") pod \"cilium-2kzmb\" (UID: \"8a855ce0-1495-466e-8269-4240495b3479\") " pod="kube-system/cilium-2kzmb" Sep 16 04:32:45.761824 kubelet[2672]: I0916 04:32:45.761564 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8a855ce0-1495-466e-8269-4240495b3479-host-proc-sys-kernel\") pod \"cilium-2kzmb\" (UID: \"8a855ce0-1495-466e-8269-4240495b3479\") " pod="kube-system/cilium-2kzmb" Sep 16 04:32:45.761824 kubelet[2672]: I0916 04:32:45.761578 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8a855ce0-1495-466e-8269-4240495b3479-cilium-cgroup\") pod \"cilium-2kzmb\" (UID: \"8a855ce0-1495-466e-8269-4240495b3479\") " pod="kube-system/cilium-2kzmb" Sep 16 04:32:45.761824 kubelet[2672]: I0916 04:32:45.761591 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a855ce0-1495-466e-8269-4240495b3479-etc-cni-netd\") pod \"cilium-2kzmb\" (UID: \"8a855ce0-1495-466e-8269-4240495b3479\") " pod="kube-system/cilium-2kzmb" Sep 16 04:32:45.761934 kubelet[2672]: I0916 04:32:45.761610 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nkks\" (UniqueName: \"kubernetes.io/projected/8a855ce0-1495-466e-8269-4240495b3479-kube-api-access-2nkks\") pod \"cilium-2kzmb\" (UID: \"8a855ce0-1495-466e-8269-4240495b3479\") " pod="kube-system/cilium-2kzmb" Sep 16 04:32:45.772733 sshd[4433]: Connection closed by 10.0.0.1 port 59422 Sep 16 04:32:45.773180 
sshd-session[4430]: pam_unix(sshd:session): session closed for user core Sep 16 04:32:45.785117 systemd[1]: sshd@23-10.0.0.83:22-10.0.0.1:59422.service: Deactivated successfully. Sep 16 04:32:45.786672 systemd[1]: session-24.scope: Deactivated successfully. Sep 16 04:32:45.787810 systemd-logind[1521]: Session 24 logged out. Waiting for processes to exit. Sep 16 04:32:45.789534 systemd[1]: Started sshd@24-10.0.0.83:22-10.0.0.1:59430.service - OpenSSH per-connection server daemon (10.0.0.1:59430). Sep 16 04:32:45.790751 systemd-logind[1521]: Removed session 24. Sep 16 04:32:45.840770 sshd[4440]: Accepted publickey for core from 10.0.0.1 port 59430 ssh2: RSA SHA256:UjijsmXvpGlRsfqUQE5UeTvJUwF4O48LgTuQN4JDfoQ Sep 16 04:32:45.841912 sshd-session[4440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:32:45.847549 systemd-logind[1521]: New session 25 of user core. Sep 16 04:32:45.863189 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 16 04:32:45.971330 kubelet[2672]: E0916 04:32:45.970874 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:32:45.972403 containerd[1535]: time="2025-09-16T04:32:45.971676133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2kzmb,Uid:8a855ce0-1495-466e-8269-4240495b3479,Namespace:kube-system,Attempt:0,}" Sep 16 04:32:45.996531 containerd[1535]: time="2025-09-16T04:32:45.996469752Z" level=info msg="connecting to shim 4ae1e4fa7f1c94412acbce73942ee705fca248949648a3d9ee6bdc5d69327641" address="unix:///run/containerd/s/87d04914bcd1d72b2575cdad5259e4f4969461d08bf52eaf69f70215de8a0ef5" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:32:46.020193 systemd[1]: Started cri-containerd-4ae1e4fa7f1c94412acbce73942ee705fca248949648a3d9ee6bdc5d69327641.scope - libcontainer container 4ae1e4fa7f1c94412acbce73942ee705fca248949648a3d9ee6bdc5d69327641. 
Sep 16 04:32:46.042711 containerd[1535]: time="2025-09-16T04:32:46.042649530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2kzmb,Uid:8a855ce0-1495-466e-8269-4240495b3479,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ae1e4fa7f1c94412acbce73942ee705fca248949648a3d9ee6bdc5d69327641\"" Sep 16 04:32:46.043520 kubelet[2672]: E0916 04:32:46.043494 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:32:46.046382 containerd[1535]: time="2025-09-16T04:32:46.046340511Z" level=info msg="CreateContainer within sandbox \"4ae1e4fa7f1c94412acbce73942ee705fca248949648a3d9ee6bdc5d69327641\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 16 04:32:46.052162 containerd[1535]: time="2025-09-16T04:32:46.052123859Z" level=info msg="Container d4c6281a80c7ec65de26694369b3c503446712fa763783b5d4de25a74b50ee56: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:32:46.058325 containerd[1535]: time="2025-09-16T04:32:46.058284721Z" level=info msg="CreateContainer within sandbox \"4ae1e4fa7f1c94412acbce73942ee705fca248949648a3d9ee6bdc5d69327641\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d4c6281a80c7ec65de26694369b3c503446712fa763783b5d4de25a74b50ee56\"" Sep 16 04:32:46.058777 containerd[1535]: time="2025-09-16T04:32:46.058750754Z" level=info msg="StartContainer for \"d4c6281a80c7ec65de26694369b3c503446712fa763783b5d4de25a74b50ee56\"" Sep 16 04:32:46.059590 containerd[1535]: time="2025-09-16T04:32:46.059540501Z" level=info msg="connecting to shim d4c6281a80c7ec65de26694369b3c503446712fa763783b5d4de25a74b50ee56" address="unix:///run/containerd/s/87d04914bcd1d72b2575cdad5259e4f4969461d08bf52eaf69f70215de8a0ef5" protocol=ttrpc version=3 Sep 16 04:32:46.080153 systemd[1]: Started cri-containerd-d4c6281a80c7ec65de26694369b3c503446712fa763783b5d4de25a74b50ee56.scope - libcontainer container d4c6281a80c7ec65de26694369b3c503446712fa763783b5d4de25a74b50ee56. Sep 16 04:32:46.106245 containerd[1535]: time="2025-09-16T04:32:46.106196117Z" level=info msg="StartContainer for \"d4c6281a80c7ec65de26694369b3c503446712fa763783b5d4de25a74b50ee56\" returns successfully" Sep 16 04:32:46.114528 systemd[1]: cri-containerd-d4c6281a80c7ec65de26694369b3c503446712fa763783b5d4de25a74b50ee56.scope: Deactivated successfully. 
Sep 16 04:32:46.115929 containerd[1535]: time="2025-09-16T04:32:46.115890802Z" level=info msg="received exit event container_id:\"d4c6281a80c7ec65de26694369b3c503446712fa763783b5d4de25a74b50ee56\" id:\"d4c6281a80c7ec65de26694369b3c503446712fa763783b5d4de25a74b50ee56\" pid:4512 exited_at:{seconds:1757997166 nanos:115603647}" Sep 16 04:32:46.116132 containerd[1535]: time="2025-09-16T04:32:46.116106559Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4c6281a80c7ec65de26694369b3c503446712fa763783b5d4de25a74b50ee56\" id:\"d4c6281a80c7ec65de26694369b3c503446712fa763783b5d4de25a74b50ee56\" pid:4512 exited_at:{seconds:1757997166 nanos:115603647}" Sep 16 04:32:46.347943 kubelet[2672]: E0916 04:32:46.347916 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:32:46.350707 containerd[1535]: time="2025-09-16T04:32:46.350671057Z" level=info msg="CreateContainer within sandbox \"4ae1e4fa7f1c94412acbce73942ee705fca248949648a3d9ee6bdc5d69327641\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 16 04:32:46.357052 containerd[1535]: time="2025-09-16T04:32:46.356527884Z" level=info msg="Container 05f943e9490b79bb79c7934fc8cfa5f6d0a478887e759868425d36b55aa415b4: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:32:46.361467 containerd[1535]: time="2025-09-16T04:32:46.361419486Z" level=info msg="CreateContainer within sandbox \"4ae1e4fa7f1c94412acbce73942ee705fca248949648a3d9ee6bdc5d69327641\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"05f943e9490b79bb79c7934fc8cfa5f6d0a478887e759868425d36b55aa415b4\"" Sep 16 04:32:46.362080 containerd[1535]: time="2025-09-16T04:32:46.361952957Z" level=info msg="StartContainer for \"05f943e9490b79bb79c7934fc8cfa5f6d0a478887e759868425d36b55aa415b4\"" Sep 16 04:32:46.362949 containerd[1535]: time="2025-09-16T04:32:46.362921862Z" level=info msg="connecting to shim 05f943e9490b79bb79c7934fc8cfa5f6d0a478887e759868425d36b55aa415b4" address="unix:///run/containerd/s/87d04914bcd1d72b2575cdad5259e4f4969461d08bf52eaf69f70215de8a0ef5" protocol=ttrpc version=3 Sep 16 04:32:46.395215 systemd[1]: Started cri-containerd-05f943e9490b79bb79c7934fc8cfa5f6d0a478887e759868425d36b55aa415b4.scope - libcontainer container 05f943e9490b79bb79c7934fc8cfa5f6d0a478887e759868425d36b55aa415b4. Sep 16 04:32:46.420424 containerd[1535]: time="2025-09-16T04:32:46.420377106Z" level=info msg="StartContainer for \"05f943e9490b79bb79c7934fc8cfa5f6d0a478887e759868425d36b55aa415b4\" returns successfully" Sep 16 04:32:46.428053 systemd[1]: cri-containerd-05f943e9490b79bb79c7934fc8cfa5f6d0a478887e759868425d36b55aa415b4.scope: Deactivated successfully. 
Sep 16 04:32:46.429325 containerd[1535]: time="2025-09-16T04:32:46.429295243Z" level=info msg="received exit event container_id:\"05f943e9490b79bb79c7934fc8cfa5f6d0a478887e759868425d36b55aa415b4\" id:\"05f943e9490b79bb79c7934fc8cfa5f6d0a478887e759868425d36b55aa415b4\" pid:4560 exited_at:{seconds:1757997166 nanos:428342979}" Sep 16 04:32:46.429621 containerd[1535]: time="2025-09-16T04:32:46.429580119Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05f943e9490b79bb79c7934fc8cfa5f6d0a478887e759868425d36b55aa415b4\" id:\"05f943e9490b79bb79c7934fc8cfa5f6d0a478887e759868425d36b55aa415b4\" pid:4560 exited_at:{seconds:1757997166 nanos:428342979}" Sep 16 04:32:47.351978 kubelet[2672]: E0916 04:32:47.351946 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:32:47.354500 containerd[1535]: time="2025-09-16T04:32:47.354467449Z" level=info msg="CreateContainer within sandbox \"4ae1e4fa7f1c94412acbce73942ee705fca248949648a3d9ee6bdc5d69327641\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 16 04:32:47.400666 containerd[1535]: time="2025-09-16T04:32:47.400622921Z" level=info msg="Container 22299a0283a5e5de8bc36a94c514aee0fc330b5c2f289bfe8cf0442074e36a4b: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:32:47.407141 containerd[1535]: time="2025-09-16T04:32:47.407102024Z" level=info msg="CreateContainer within sandbox \"4ae1e4fa7f1c94412acbce73942ee705fca248949648a3d9ee6bdc5d69327641\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"22299a0283a5e5de8bc36a94c514aee0fc330b5c2f289bfe8cf0442074e36a4b\"" Sep 16 04:32:47.408014 containerd[1535]: time="2025-09-16T04:32:47.407970251Z" level=info msg="StartContainer for \"22299a0283a5e5de8bc36a94c514aee0fc330b5c2f289bfe8cf0442074e36a4b\"" Sep 16 04:32:47.409705 containerd[1535]: time="2025-09-16T04:32:47.409672706Z" level=info msg="connecting to shim 22299a0283a5e5de8bc36a94c514aee0fc330b5c2f289bfe8cf0442074e36a4b" address="unix:///run/containerd/s/87d04914bcd1d72b2575cdad5259e4f4969461d08bf52eaf69f70215de8a0ef5" protocol=ttrpc version=3 Sep 16 04:32:47.429139 systemd[1]: Started cri-containerd-22299a0283a5e5de8bc36a94c514aee0fc330b5c2f289bfe8cf0442074e36a4b.scope - libcontainer container 22299a0283a5e5de8bc36a94c514aee0fc330b5c2f289bfe8cf0442074e36a4b. Sep 16 04:32:47.488870 systemd[1]: cri-containerd-22299a0283a5e5de8bc36a94c514aee0fc330b5c2f289bfe8cf0442074e36a4b.scope: Deactivated successfully. Sep 16 04:32:47.489225 systemd[1]: cri-containerd-22299a0283a5e5de8bc36a94c514aee0fc330b5c2f289bfe8cf0442074e36a4b.scope: Consumed 28ms CPU time, 4.4M memory peak, 1.6M read from disk. 
Sep 16 04:32:47.489672 containerd[1535]: time="2025-09-16T04:32:47.489639752Z" level=info msg="StartContainer for \"22299a0283a5e5de8bc36a94c514aee0fc330b5c2f289bfe8cf0442074e36a4b\" returns successfully" Sep 16 04:32:47.491610 containerd[1535]: time="2025-09-16T04:32:47.491574803Z" level=info msg="received exit event container_id:\"22299a0283a5e5de8bc36a94c514aee0fc330b5c2f289bfe8cf0442074e36a4b\" id:\"22299a0283a5e5de8bc36a94c514aee0fc330b5c2f289bfe8cf0442074e36a4b\" pid:4604 exited_at:{seconds:1757997167 nanos:489924508}" Sep 16 04:32:47.491963 containerd[1535]: time="2025-09-16T04:32:47.491931598Z" level=info msg="TaskExit event in podsandbox handler container_id:\"22299a0283a5e5de8bc36a94c514aee0fc330b5c2f289bfe8cf0442074e36a4b\" id:\"22299a0283a5e5de8bc36a94c514aee0fc330b5c2f289bfe8cf0442074e36a4b\" pid:4604 exited_at:{seconds:1757997167 nanos:489924508}" Sep 16 04:32:47.512311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22299a0283a5e5de8bc36a94c514aee0fc330b5c2f289bfe8cf0442074e36a4b-rootfs.mount: Deactivated successfully. Sep 16 04:32:48.153692 kubelet[2672]: E0916 04:32:48.153651 2672 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 16 04:32:48.359046 kubelet[2672]: E0916 04:32:48.359010 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:32:48.363159 containerd[1535]: time="2025-09-16T04:32:48.363121877Z" level=info msg="CreateContainer within sandbox \"4ae1e4fa7f1c94412acbce73942ee705fca248949648a3d9ee6bdc5d69327641\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 16 04:32:48.369410 containerd[1535]: time="2025-09-16T04:32:48.369373830Z" level=info msg="Container bbfc2f339102b1d40e91ae53f82255df0da00aabb05d1d90e1437168a68eabe4: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:32:48.377501 containerd[1535]: time="2025-09-16T04:32:48.377450238Z" level=info msg="CreateContainer within sandbox \"4ae1e4fa7f1c94412acbce73942ee705fca248949648a3d9ee6bdc5d69327641\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bbfc2f339102b1d40e91ae53f82255df0da00aabb05d1d90e1437168a68eabe4\"" Sep 16 04:32:48.378680 containerd[1535]: time="2025-09-16T04:32:48.378622941Z" level=info msg="StartContainer for \"bbfc2f339102b1d40e91ae53f82255df0da00aabb05d1d90e1437168a68eabe4\"" Sep 16 04:32:48.380028 containerd[1535]: time="2025-09-16T04:32:48.379979362Z" level=info msg="connecting to shim bbfc2f339102b1d40e91ae53f82255df0da00aabb05d1d90e1437168a68eabe4" address="unix:///run/containerd/s/87d04914bcd1d72b2575cdad5259e4f4969461d08bf52eaf69f70215de8a0ef5" protocol=ttrpc version=3 Sep 16 04:32:48.412185 systemd[1]: Started cri-containerd-bbfc2f339102b1d40e91ae53f82255df0da00aabb05d1d90e1437168a68eabe4.scope - libcontainer container bbfc2f339102b1d40e91ae53f82255df0da00aabb05d1d90e1437168a68eabe4. Sep 16 04:32:48.435261 systemd[1]: cri-containerd-bbfc2f339102b1d40e91ae53f82255df0da00aabb05d1d90e1437168a68eabe4.scope: Deactivated successfully. 
Sep 16 04:32:48.438297 containerd[1535]: time="2025-09-16T04:32:48.438240071Z" level=info msg="received exit event container_id:\"bbfc2f339102b1d40e91ae53f82255df0da00aabb05d1d90e1437168a68eabe4\" id:\"bbfc2f339102b1d40e91ae53f82255df0da00aabb05d1d90e1437168a68eabe4\" pid:4645 exited_at:{seconds:1757997168 nanos:438020794}" Sep 16 04:32:48.438376 containerd[1535]: time="2025-09-16T04:32:48.438259311Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bbfc2f339102b1d40e91ae53f82255df0da00aabb05d1d90e1437168a68eabe4\" id:\"bbfc2f339102b1d40e91ae53f82255df0da00aabb05d1d90e1437168a68eabe4\" pid:4645 exited_at:{seconds:1757997168 nanos:438020794}" Sep 16 04:32:48.439064 containerd[1535]: time="2025-09-16T04:32:48.439030260Z" level=info msg="StartContainer for \"bbfc2f339102b1d40e91ae53f82255df0da00aabb05d1d90e1437168a68eabe4\" returns successfully" Sep 16 04:32:48.457448 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbfc2f339102b1d40e91ae53f82255df0da00aabb05d1d90e1437168a68eabe4-rootfs.mount: Deactivated successfully. Sep 16 04:32:49.364649 kubelet[2672]: E0916 04:32:49.364612 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:32:49.367730 containerd[1535]: time="2025-09-16T04:32:49.367403002Z" level=info msg="CreateContainer within sandbox \"4ae1e4fa7f1c94412acbce73942ee705fca248949648a3d9ee6bdc5d69327641\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 16 04:32:49.399347 containerd[1535]: time="2025-09-16T04:32:49.397367493Z" level=info msg="Container 89f2bf361192d93899ebed26056639aa816ad916a33b144766ff51f1567bbbce: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:32:49.407343 containerd[1535]: time="2025-09-16T04:32:49.407300404Z" level=info msg="CreateContainer within sandbox \"4ae1e4fa7f1c94412acbce73942ee705fca248949648a3d9ee6bdc5d69327641\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"89f2bf361192d93899ebed26056639aa816ad916a33b144766ff51f1567bbbce\"" Sep 16 04:32:49.409475 containerd[1535]: time="2025-09-16T04:32:49.409436417Z" level=info msg="StartContainer for \"89f2bf361192d93899ebed26056639aa816ad916a33b144766ff51f1567bbbce\"" Sep 16 04:32:49.412839 containerd[1535]: time="2025-09-16T04:32:49.412803093Z" level=info msg="connecting to shim 89f2bf361192d93899ebed26056639aa816ad916a33b144766ff51f1567bbbce" address="unix:///run/containerd/s/87d04914bcd1d72b2575cdad5259e4f4969461d08bf52eaf69f70215de8a0ef5" protocol=ttrpc version=3 Sep 16 04:32:49.438175 systemd[1]: Started cri-containerd-89f2bf361192d93899ebed26056639aa816ad916a33b144766ff51f1567bbbce.scope - libcontainer container 89f2bf361192d93899ebed26056639aa816ad916a33b144766ff51f1567bbbce. 
Sep 16 04:32:49.468256 containerd[1535]: time="2025-09-16T04:32:49.468222015Z" level=info msg="StartContainer for \"89f2bf361192d93899ebed26056639aa816ad916a33b144766ff51f1567bbbce\" returns successfully"
Sep 16 04:32:49.523032 containerd[1535]: time="2025-09-16T04:32:49.522978705Z" level=info msg="TaskExit event in podsandbox handler container_id:\"89f2bf361192d93899ebed26056639aa816ad916a33b144766ff51f1567bbbce\" id:\"8aebba6d3b93dd48f920ad90c7cc17d6c0df9cf322bd91cd660bfbc73ee8ec48\" pid:4715 exited_at:{seconds:1757997169 nanos:522643909}"
Sep 16 04:32:49.737014 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 16 04:32:50.103304 kubelet[2672]: E0916 04:32:50.103263 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:32:50.374887 kubelet[2672]: E0916 04:32:50.374787 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:32:51.971920 kubelet[2672]: E0916 04:32:51.971572 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:32:52.102396 kubelet[2672]: E0916 04:32:52.102346 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:32:52.210411 containerd[1535]: time="2025-09-16T04:32:52.210368281Z" level=info msg="TaskExit event in podsandbox handler container_id:\"89f2bf361192d93899ebed26056639aa816ad916a33b144766ff51f1567bbbce\" id:\"effc7a797b2a118145bf80b37b804087835e63b494839db6e03714da54b76ca2\" pid:5121 exit_status:1 exited_at:{seconds:1757997172 nanos:209884566}"
Sep 16 04:32:52.498061 systemd-networkd[1444]: lxc_health: Link UP
Sep 16 04:32:52.499815 systemd-networkd[1444]: lxc_health: Gained carrier
Sep 16 04:32:53.972771 kubelet[2672]: E0916 04:32:53.972700 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:32:54.000944 kubelet[2672]: I0916 04:32:54.000575 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2kzmb" podStartSLOduration=9.000558056 podStartE2EDuration="9.000558056s" podCreationTimestamp="2025-09-16 04:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:32:50.388483484 +0000 UTC m=+77.370764463" watchObservedRunningTime="2025-09-16 04:32:54.000558056 +0000 UTC m=+80.982839035"
Sep 16 04:32:54.326473 containerd[1535]: time="2025-09-16T04:32:54.326412019Z" level=info msg="TaskExit event in podsandbox handler container_id:\"89f2bf361192d93899ebed26056639aa816ad916a33b144766ff51f1567bbbce\" id:\"9c28b9f484dcd046e4271dae6c41c75c93d672282a764b7d1d45fc82d52a3e9b\" pid:5250 exited_at:{seconds:1757997174 nanos:326069942}"
Sep 16 04:32:54.381643 kubelet[2672]: E0916 04:32:54.381615 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:32:54.483156 systemd-networkd[1444]: lxc_health: Gained IPv6LL
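The pod_startup_latency_tracker entry for kube-system/cilium-2kzmb reports podStartSLOduration=9.000558056, which is simply the observed running time minus the pod creation timestamp, both of which appear in that log line. A minimal Go sketch of that arithmetic, using only values copied from the entry above:

```go
// Hedged sketch: recompute podStartSLOduration for cilium-2kzmb from the
// two timestamps printed in the pod_startup_latency_tracker log entry.
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-09-16 04:32:45 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	running, err := time.Parse(layout, "2025-09-16 04:32:54.000558056 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}

	// Prints 9.000558056s, matching podStartSLOduration in the journal.
	fmt.Println(running.Sub(created))
}
```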
Sep 16 04:32:55.384077 kubelet[2672]: E0916 04:32:55.384023 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:32:56.103025 kubelet[2672]: E0916 04:32:56.102910 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:32:56.437975 containerd[1535]: time="2025-09-16T04:32:56.437776461Z" level=info msg="TaskExit event in podsandbox handler container_id:\"89f2bf361192d93899ebed26056639aa816ad916a33b144766ff51f1567bbbce\" id:\"9e00600571f2fe06d09785efa288c8ad1c75232abe0c12a56a9875868ae6b113\" pid:5283 exited_at:{seconds:1757997176 nanos:437126266}"
Sep 16 04:32:58.548762 containerd[1535]: time="2025-09-16T04:32:58.548709209Z" level=info msg="TaskExit event in podsandbox handler container_id:\"89f2bf361192d93899ebed26056639aa816ad916a33b144766ff51f1567bbbce\" id:\"f3fe1947fee1fbf9cee0cbdd456ba10aac5b49979c0f988c7afa5661a8ef86b5\" pid:5307 exited_at:{seconds:1757997178 nanos:547925053}"
Sep 16 04:32:58.557680 sshd[4446]: Connection closed by 10.0.0.1 port 59430
Sep 16 04:32:58.557734 sshd-session[4440]: pam_unix(sshd:session): session closed for user core
Sep 16 04:32:58.561284 systemd[1]: sshd@24-10.0.0.83:22-10.0.0.1:59430.service: Deactivated successfully.
Sep 16 04:32:58.564215 systemd[1]: session-25.scope: Deactivated successfully.
Sep 16 04:32:58.565047 systemd-logind[1521]: Session 25 logged out. Waiting for processes to exit.
Sep 16 04:32:58.566695 systemd-logind[1521]: Removed session 25.
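The TaskExit entries above carry their exit times as epoch seconds and nanoseconds (for example exited_at:{seconds:1757997178 nanos:547925053} on the last f3fe1947… event). A minimal Go sketch converting that pair back to the wall-clock time shown in the journal, using only the values from that entry:

```go
// Hedged sketch: convert the exited_at {seconds, nanos} pair from the last
// TaskExit entry into an RFC 3339 UTC timestamp.
package main

import (
	"fmt"
	"time"
)

func main() {
	exitedAt := time.Unix(1757997178, 547925053).UTC()
	// Prints 2025-09-16T04:32:58.547925053Z, just under a millisecond before
	// containerd logged the event at 04:32:58.548709209Z.
	fmt.Println(exitedAt.Format(time.RFC3339Nano))
}
```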