Sep 8 23:48:46.776218 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 8 23:48:46.776244 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Mon Sep 8 22:10:01 -00 2025
Sep 8 23:48:46.776255 kernel: KASLR enabled
Sep 8 23:48:46.776260 kernel: efi: EFI v2.7 by EDK II
Sep 8 23:48:46.776266 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 8 23:48:46.776271 kernel: random: crng init done
Sep 8 23:48:46.776278 kernel: secureboot: Secure boot disabled
Sep 8 23:48:46.776284 kernel: ACPI: Early table checksum verification disabled
Sep 8 23:48:46.776290 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 8 23:48:46.776297 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 8 23:48:46.776303 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:48:46.776309 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:48:46.776326 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:48:46.776333 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:48:46.776341 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:48:46.776349 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:48:46.776355 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:48:46.776362 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:48:46.776368 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:48:46.776374 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 8 23:48:46.776380 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 8 23:48:46.776387 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 8 23:48:46.776393 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 8 23:48:46.776399 kernel: Zone ranges:
Sep 8 23:48:46.776405 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 8 23:48:46.776413 kernel: DMA32 empty
Sep 8 23:48:46.776419 kernel: Normal empty
Sep 8 23:48:46.776425 kernel: Device empty
Sep 8 23:48:46.776431 kernel: Movable zone start for each node
Sep 8 23:48:46.776437 kernel: Early memory node ranges
Sep 8 23:48:46.776443 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 8 23:48:46.776449 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 8 23:48:46.776455 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 8 23:48:46.776461 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 8 23:48:46.776485 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 8 23:48:46.776491 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 8 23:48:46.776498 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 8 23:48:46.776506 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 8 23:48:46.776512 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 8 23:48:46.776518 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 8 23:48:46.776527 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 8 23:48:46.776533 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 8 23:48:46.776540 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 8 23:48:46.776547 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 8 23:48:46.776554 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 8 23:48:46.776560 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 8 23:48:46.776567 kernel: psci: probing for conduit method from ACPI.
Sep 8 23:48:46.776573 kernel: psci: PSCIv1.1 detected in firmware.
Sep 8 23:48:46.776580 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 8 23:48:46.776587 kernel: psci: Trusted OS migration not required
Sep 8 23:48:46.776594 kernel: psci: SMC Calling Convention v1.1
Sep 8 23:48:46.776600 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 8 23:48:46.776607 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 8 23:48:46.776615 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 8 23:48:46.776622 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 8 23:48:46.776628 kernel: Detected PIPT I-cache on CPU0
Sep 8 23:48:46.776635 kernel: CPU features: detected: GIC system register CPU interface
Sep 8 23:48:46.776641 kernel: CPU features: detected: Spectre-v4
Sep 8 23:48:46.776648 kernel: CPU features: detected: Spectre-BHB
Sep 8 23:48:46.776655 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 8 23:48:46.776661 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 8 23:48:46.776668 kernel: CPU features: detected: ARM erratum 1418040
Sep 8 23:48:46.776674 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 8 23:48:46.776681 kernel: alternatives: applying boot alternatives
Sep 8 23:48:46.776688 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=56d35272d6799b20efe64172ddb761aa9d752bf4ee92cd36e6693ce5e7a3b83d
Sep 8 23:48:46.776697 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 8 23:48:46.776703 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 8 23:48:46.776710 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 8 23:48:46.776716 kernel: Fallback order for Node 0: 0
Sep 8 23:48:46.776723 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 8 23:48:46.776729 kernel: Policy zone: DMA
Sep 8 23:48:46.776736 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 8 23:48:46.776742 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 8 23:48:46.776749 kernel: software IO TLB: area num 4.
Sep 8 23:48:46.776755 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 8 23:48:46.776761 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 8 23:48:46.776769 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 8 23:48:46.776776 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 8 23:48:46.776783 kernel: rcu: RCU event tracing is enabled.
Sep 8 23:48:46.776790 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 8 23:48:46.776796 kernel: Trampoline variant of Tasks RCU enabled.
Sep 8 23:48:46.776802 kernel: Tracing variant of Tasks RCU enabled.
Sep 8 23:48:46.776809 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 8 23:48:46.776815 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 8 23:48:46.776822 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 8 23:48:46.776828 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 8 23:48:46.776835 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 8 23:48:46.776842 kernel: GICv3: 256 SPIs implemented
Sep 8 23:48:46.776849 kernel: GICv3: 0 Extended SPIs implemented
Sep 8 23:48:46.776929 kernel: Root IRQ handler: gic_handle_irq
Sep 8 23:48:46.776939 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 8 23:48:46.776946 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 8 23:48:46.776952 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 8 23:48:46.776959 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 8 23:48:46.776965 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 8 23:48:46.776972 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 8 23:48:46.776978 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 8 23:48:46.776985 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 8 23:48:46.776991 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 8 23:48:46.777001 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 8 23:48:46.777008 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 8 23:48:46.777015 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 8 23:48:46.777021 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 8 23:48:46.777027 kernel: arm-pv: using stolen time PV
Sep 8 23:48:46.777034 kernel: Console: colour dummy device 80x25
Sep 8 23:48:46.777041 kernel: ACPI: Core revision 20240827
Sep 8 23:48:46.777047 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 8 23:48:46.777054 kernel: pid_max: default: 32768 minimum: 301
Sep 8 23:48:46.777061 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 8 23:48:46.777069 kernel: landlock: Up and running.
Sep 8 23:48:46.777076 kernel: SELinux: Initializing.
Sep 8 23:48:46.777083 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 8 23:48:46.777090 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 8 23:48:46.777097 kernel: rcu: Hierarchical SRCU implementation.
Sep 8 23:48:46.777104 kernel: rcu: Max phase no-delay instances is 400.
Sep 8 23:48:46.777111 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 8 23:48:46.777117 kernel: Remapping and enabling EFI services.
Sep 8 23:48:46.777124 kernel: smp: Bringing up secondary CPUs ...
Sep 8 23:48:46.777138 kernel: Detected PIPT I-cache on CPU1
Sep 8 23:48:46.777145 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 8 23:48:46.777153 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 8 23:48:46.777161 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 8 23:48:46.777168 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 8 23:48:46.777175 kernel: Detected PIPT I-cache on CPU2
Sep 8 23:48:46.777182 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 8 23:48:46.777189 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 8 23:48:46.777197 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 8 23:48:46.777203 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 8 23:48:46.777211 kernel: Detected PIPT I-cache on CPU3
Sep 8 23:48:46.777218 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 8 23:48:46.777224 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 8 23:48:46.777231 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 8 23:48:46.777238 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 8 23:48:46.777245 kernel: smp: Brought up 1 node, 4 CPUs
Sep 8 23:48:46.777252 kernel: SMP: Total of 4 processors activated.
Sep 8 23:48:46.777260 kernel: CPU: All CPU(s) started at EL1
Sep 8 23:48:46.777267 kernel: CPU features: detected: 32-bit EL0 Support
Sep 8 23:48:46.777274 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 8 23:48:46.777281 kernel: CPU features: detected: Common not Private translations
Sep 8 23:48:46.777288 kernel: CPU features: detected: CRC32 instructions
Sep 8 23:48:46.777295 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 8 23:48:46.777302 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 8 23:48:46.777308 kernel: CPU features: detected: LSE atomic instructions
Sep 8 23:48:46.778080 kernel: CPU features: detected: Privileged Access Never
Sep 8 23:48:46.778101 kernel: CPU features: detected: RAS Extension Support
Sep 8 23:48:46.778109 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 8 23:48:46.778117 kernel: alternatives: applying system-wide alternatives
Sep 8 23:48:46.778125 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 8 23:48:46.778132 kernel: Memory: 2424544K/2572288K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38912K init, 1038K bss, 125408K reserved, 16384K cma-reserved)
Sep 8 23:48:46.778140 kernel: devtmpfs: initialized
Sep 8 23:48:46.778147 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 8 23:48:46.778154 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 8 23:48:46.778169 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 8 23:48:46.778176 kernel: 0 pages in range for non-PLT usage
Sep 8 23:48:46.778184 kernel: 508576 pages in range for PLT usage
Sep 8 23:48:46.778191 kernel: pinctrl core: initialized pinctrl subsystem
Sep 8 23:48:46.778198 kernel: SMBIOS 3.0.0 present.
Sep 8 23:48:46.778205 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 8 23:48:46.778213 kernel: DMI: Memory slots populated: 1/1
Sep 8 23:48:46.778220 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 8 23:48:46.778227 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 8 23:48:46.778235 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 8 23:48:46.778242 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 8 23:48:46.778249 kernel: audit: initializing netlink subsys (disabled)
Sep 8 23:48:46.778257 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Sep 8 23:48:46.778264 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 8 23:48:46.778271 kernel: cpuidle: using governor menu
Sep 8 23:48:46.778278 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 8 23:48:46.778284 kernel: ASID allocator initialised with 32768 entries
Sep 8 23:48:46.778291 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 8 23:48:46.778300 kernel: Serial: AMBA PL011 UART driver
Sep 8 23:48:46.778307 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 8 23:48:46.778323 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 8 23:48:46.778331 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 8 23:48:46.778338 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 8 23:48:46.778345 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 8 23:48:46.778352 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 8 23:48:46.778359 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 8 23:48:46.778366 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 8 23:48:46.778375 kernel: ACPI: Added _OSI(Module Device)
Sep 8 23:48:46.778382 kernel: ACPI: Added _OSI(Processor Device)
Sep 8 23:48:46.778389 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 8 23:48:46.778396 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 8 23:48:46.778403 kernel: ACPI: Interpreter enabled
Sep 8 23:48:46.778410 kernel: ACPI: Using GIC for interrupt routing
Sep 8 23:48:46.778417 kernel: ACPI: MCFG table detected, 1 entries
Sep 8 23:48:46.778424 kernel: ACPI: CPU0 has been hot-added
Sep 8 23:48:46.778431 kernel: ACPI: CPU1 has been hot-added
Sep 8 23:48:46.778438 kernel: ACPI: CPU2 has been hot-added
Sep 8 23:48:46.778447 kernel: ACPI: CPU3 has been hot-added
Sep 8 23:48:46.778454 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 8 23:48:46.778461 kernel: printk: legacy console [ttyAMA0] enabled
Sep 8 23:48:46.778627 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 8 23:48:46.778692 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 8 23:48:46.778750 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 8 23:48:46.778806 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 8 23:48:46.778939 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 8 23:48:46.778952 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 8 23:48:46.778960 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 8 23:48:46.779041 kernel: PCI host bridge to bus 0000:00
Sep 8 23:48:46.779098 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 8 23:48:46.779152 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 8 23:48:46.779990 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 8 23:48:46.780103 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 8 23:48:46.780178 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 8 23:48:46.780258 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 8 23:48:46.780335 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 8 23:48:46.780396 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 8 23:48:46.780455 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 8 23:48:46.780540 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 8 23:48:46.780606 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 8 23:48:46.780662 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 8 23:48:46.780723 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 8 23:48:46.780788 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 8 23:48:46.780800 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 8 23:48:46.780808 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 8 23:48:46.780815 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 8 23:48:46.780824 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 8 23:48:46.780832 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 8 23:48:46.780840 kernel: iommu: Default domain type: Translated
Sep 8 23:48:46.780847 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 8 23:48:46.780857 kernel: efivars: Registered efivars operations
Sep 8 23:48:46.780864 kernel: vgaarb: loaded
Sep 8 23:48:46.780871 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 8 23:48:46.780882 kernel: VFS: Disk quotas dquot_6.6.0
Sep 8 23:48:46.780892 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 8 23:48:46.780971 kernel: pnp: PnP ACPI init
Sep 8 23:48:46.780981 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 8 23:48:46.780988 kernel: pnp: PnP ACPI: found 1 devices
Sep 8 23:48:46.780996 kernel: NET: Registered PF_INET protocol family
Sep 8 23:48:46.781003 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 8 23:48:46.781011 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 8 23:48:46.781019 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 8 23:48:46.781026 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 8 23:48:46.781035 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 8 23:48:46.781042 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 8 23:48:46.781050 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 8 23:48:46.781057 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 8 23:48:46.781064 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 8 23:48:46.781072 kernel: PCI: CLS 0 bytes, default 64
Sep 8 23:48:46.781079 kernel: kvm [1]: HYP mode not available
Sep 8 23:48:46.781087 kernel: Initialise system trusted keyrings
Sep 8 23:48:46.781101 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 8 23:48:46.781109 kernel: Key type asymmetric registered
Sep 8 23:48:46.781117 kernel: Asymmetric key parser 'x509' registered
Sep 8 23:48:46.781124 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 8 23:48:46.781132 kernel: io scheduler mq-deadline registered
Sep 8 23:48:46.781139 kernel: io scheduler kyber registered
Sep 8 23:48:46.781146 kernel: io scheduler bfq registered
Sep 8 23:48:46.781154 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 8 23:48:46.781161 kernel: ACPI: button: Power Button [PWRB]
Sep 8 23:48:46.781223 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 8 23:48:46.781234 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 8 23:48:46.781242 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 8 23:48:46.781249 kernel: thunder_xcv, ver 1.0
Sep 8 23:48:46.781257 kernel: thunder_bgx, ver 1.0
Sep 8 23:48:46.781264 kernel: nicpf, ver 1.0
Sep 8 23:48:46.781418 kernel: nicvf, ver 1.0
Sep 8 23:48:46.781507 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 8 23:48:46.781518 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-08T23:48:46 UTC (1757375326)
Sep 8 23:48:46.781530 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 8 23:48:46.781537 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 8 23:48:46.781544 kernel: watchdog: NMI not fully supported
Sep 8 23:48:46.781551 kernel: watchdog: Hard watchdog permanently disabled
Sep 8 23:48:46.781559 kernel: NET: Registered PF_INET6 protocol family
Sep 8 23:48:46.781566 kernel: Segment Routing with IPv6
Sep 8 23:48:46.781573 kernel: In-situ OAM (IOAM) with IPv6
Sep 8 23:48:46.781580 kernel: NET: Registered PF_PACKET protocol family
Sep 8 23:48:46.781588 kernel: Key type dns_resolver registered
Sep 8 23:48:46.781595 kernel: registered taskstats version 1
Sep 8 23:48:46.781604 kernel: Loading compiled-in X.509 certificates
Sep 8 23:48:46.781611 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: a394eaa34ffd7f1371a823c439a0662c32ae9397'
Sep 8 23:48:46.781618 kernel: Demotion targets for Node 0: null
Sep 8 23:48:46.781625 kernel: Key type .fscrypt registered
Sep 8 23:48:46.781632 kernel: Key type fscrypt-provisioning registered
Sep 8 23:48:46.781639 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 8 23:48:46.781647 kernel: ima: Allocated hash algorithm: sha1
Sep 8 23:48:46.781654 kernel: ima: No architecture policies found
Sep 8 23:48:46.781663 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 8 23:48:46.781670 kernel: clk: Disabling unused clocks
Sep 8 23:48:46.781677 kernel: PM: genpd: Disabling unused power domains
Sep 8 23:48:46.781684 kernel: Warning: unable to open an initial console.
Sep 8 23:48:46.781691 kernel: Freeing unused kernel memory: 38912K
Sep 8 23:48:46.781698 kernel: Run /init as init process
Sep 8 23:48:46.781706 kernel: with arguments:
Sep 8 23:48:46.781713 kernel: /init
Sep 8 23:48:46.781720 kernel: with environment:
Sep 8 23:48:46.781726 kernel: HOME=/
Sep 8 23:48:46.781735 kernel: TERM=linux
Sep 8 23:48:46.781743 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 8 23:48:46.781754 systemd[1]: Successfully made /usr/ read-only.
Sep 8 23:48:46.781762 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 8 23:48:46.781770 systemd[1]: Detected virtualization kvm.
Sep 8 23:48:46.781778 systemd[1]: Detected architecture arm64.
Sep 8 23:48:46.781786 systemd[1]: Running in initrd.
Sep 8 23:48:46.781796 systemd[1]: No hostname configured, using default hostname.
Sep 8 23:48:46.781803 systemd[1]: Hostname set to .
Sep 8 23:48:46.781812 systemd[1]: Initializing machine ID from VM UUID.
Sep 8 23:48:46.781820 systemd[1]: Queued start job for default target initrd.target.
Sep 8 23:48:46.781828 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 8 23:48:46.781836 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 8 23:48:46.781844 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 8 23:48:46.781852 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 8 23:48:46.781935 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 8 23:48:46.781945 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 8 23:48:46.781953 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 8 23:48:46.781960 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 8 23:48:46.781968 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 8 23:48:46.781975 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 8 23:48:46.781983 systemd[1]: Reached target paths.target - Path Units.
Sep 8 23:48:46.781992 systemd[1]: Reached target slices.target - Slice Units.
Sep 8 23:48:46.782000 systemd[1]: Reached target swap.target - Swaps.
Sep 8 23:48:46.782008 systemd[1]: Reached target timers.target - Timer Units.
Sep 8 23:48:46.782016 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 8 23:48:46.782025 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 8 23:48:46.782032 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 8 23:48:46.782040 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 8 23:48:46.782048 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 8 23:48:46.782057 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 8 23:48:46.782065 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 8 23:48:46.782073 systemd[1]: Reached target sockets.target - Socket Units.
Sep 8 23:48:46.782082 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 8 23:48:46.782089 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 8 23:48:46.782098 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 8 23:48:46.782106 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 8 23:48:46.782114 systemd[1]: Starting systemd-fsck-usr.service...
Sep 8 23:48:46.782134 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 8 23:48:46.782144 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 8 23:48:46.782153 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 8 23:48:46.782161 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 8 23:48:46.782169 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 8 23:48:46.782178 systemd[1]: Finished systemd-fsck-usr.service.
Sep 8 23:48:46.782210 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 8 23:48:46.782230 systemd-journald[244]: Collecting audit messages is disabled.
Sep 8 23:48:46.782238 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 8 23:48:46.782249 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 8 23:48:46.782259 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 8 23:48:46.782278 systemd-journald[244]: Journal started
Sep 8 23:48:46.768855 systemd-journald[244]: Runtime Journal (/run/log/journal/9c2233fb36ff40cda00c2a0acc4bd096) is 6M, max 48.5M, 42.4M free.
Sep 8 23:48:46.784753 systemd-modules-load[245]: Inserted module 'overlay'
Sep 8 23:48:46.785623 systemd-modules-load[245]: Inserted module 'br_netfilter'
Sep 8 23:48:46.789491 kernel: Bridge firewalling registered
Sep 8 23:48:46.805597 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 8 23:48:46.806636 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 8 23:48:46.810155 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 8 23:48:46.811517 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 8 23:48:46.815417 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 8 23:48:46.818791 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 8 23:48:46.822094 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 8 23:48:46.823412 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 8 23:48:46.824845 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 8 23:48:46.826926 systemd-tmpfiles[275]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 8 23:48:46.828092 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 8 23:48:46.838510 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 8 23:48:46.848757 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 8 23:48:46.885010 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=56d35272d6799b20efe64172ddb761aa9d752bf4ee92cd36e6693ce5e7a3b83d
Sep 8 23:48:46.885031 systemd-resolved[288]: Positive Trust Anchors:
Sep 8 23:48:46.885062 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 8 23:48:46.890130 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 8 23:48:46.891157 systemd-resolved[288]: Defaulting to hostname 'linux'.
Sep 8 23:48:46.894604 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 8 23:48:46.894604 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 8 23:48:46.925556 kernel: SCSI subsystem initialized Sep 8 23:48:46.930493 kernel: Loading iSCSI transport class v2.0-870. Sep 8 23:48:46.937488 kernel: iscsi: registered transport (tcp) Sep 8 23:48:46.950524 kernel: iscsi: registered transport (qla4xxx) Sep 8 23:48:46.950575 kernel: QLogic iSCSI HBA Driver Sep 8 23:48:46.967852 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 8 23:48:46.984527 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 8 23:48:46.986399 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 8 23:48:47.037067 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 8 23:48:47.039153 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 8 23:48:47.102505 kernel: raid6: neonx8 gen() 15635 MB/s Sep 8 23:48:47.119507 kernel: raid6: neonx4 gen() 15729 MB/s Sep 8 23:48:47.136496 kernel: raid6: neonx2 gen() 13186 MB/s Sep 8 23:48:47.153494 kernel: raid6: neonx1 gen() 10406 MB/s Sep 8 23:48:47.170482 kernel: raid6: int64x8 gen() 6889 MB/s Sep 8 23:48:47.187486 kernel: raid6: int64x4 gen() 7338 MB/s Sep 8 23:48:47.204493 kernel: raid6: int64x2 gen() 6095 MB/s Sep 8 23:48:47.221489 kernel: raid6: int64x1 gen() 5047 MB/s Sep 8 23:48:47.221523 kernel: raid6: using algorithm neonx4 gen() 15729 MB/s Sep 8 23:48:47.238488 kernel: raid6: .... 
xor() 12331 MB/s, rmw enabled Sep 8 23:48:47.238513 kernel: raid6: using neon recovery algorithm Sep 8 23:48:47.243748 kernel: xor: measuring software checksum speed Sep 8 23:48:47.243776 kernel: 8regs : 21031 MB/sec Sep 8 23:48:47.244853 kernel: 32regs : 21676 MB/sec Sep 8 23:48:47.244866 kernel: arm64_neon : 24870 MB/sec Sep 8 23:48:47.244875 kernel: xor: using function: arm64_neon (24870 MB/sec) Sep 8 23:48:47.297505 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 8 23:48:47.303549 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 8 23:48:47.306191 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 8 23:48:47.346712 systemd-udevd[498]: Using default interface naming scheme 'v255'. Sep 8 23:48:47.352053 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:48:47.353997 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 8 23:48:47.377016 dracut-pre-trigger[500]: rd.md=0: removing MD RAID activation Sep 8 23:48:47.403049 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 8 23:48:47.405396 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 8 23:48:47.464389 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:48:47.467391 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 8 23:48:47.526490 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Sep 8 23:48:47.528804 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 8 23:48:47.531055 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 8 23:48:47.531095 kernel: GPT:9289727 != 19775487 Sep 8 23:48:47.531105 kernel: GPT:Alternate GPT header not at the end of the disk. 
Sep 8 23:48:47.531114 kernel: GPT:9289727 != 19775487 Sep 8 23:48:47.531905 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 8 23:48:47.543450 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:48:47.544249 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 8 23:48:47.544514 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:48:47.548615 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:48:47.551685 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:48:47.576849 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 8 23:48:47.579498 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 8 23:48:47.581695 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:48:47.589418 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 8 23:48:47.601798 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 8 23:48:47.607703 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 8 23:48:47.608756 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 8 23:48:47.610737 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 8 23:48:47.613376 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:48:47.615384 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 8 23:48:47.618037 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 8 23:48:47.619794 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 8 23:48:47.639226 disk-uuid[597]: Primary Header is updated. 
Sep 8 23:48:47.639226 disk-uuid[597]: Secondary Entries is updated. Sep 8 23:48:47.639226 disk-uuid[597]: Secondary Header is updated. Sep 8 23:48:47.643000 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 8 23:48:47.647547 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:48:47.649490 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:48:48.650546 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:48:48.651072 disk-uuid[600]: The operation has completed successfully. Sep 8 23:48:48.673366 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 8 23:48:48.673491 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 8 23:48:48.703227 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 8 23:48:48.716352 sh[617]: Success Sep 8 23:48:48.728571 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 8 23:48:48.728641 kernel: device-mapper: uevent: version 1.0.3 Sep 8 23:48:48.729596 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 8 23:48:48.736503 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 8 23:48:48.759758 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 8 23:48:48.762605 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 8 23:48:48.776783 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 8 23:48:48.782519 kernel: BTRFS: device fsid b6aa4556-53d3-40d0-8c29-11204db15da4 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (629) Sep 8 23:48:48.784481 kernel: BTRFS info (device dm-0): first mount of filesystem b6aa4556-53d3-40d0-8c29-11204db15da4 Sep 8 23:48:48.784501 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 8 23:48:48.788872 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 8 23:48:48.788887 kernel: BTRFS info (device dm-0): enabling free space tree Sep 8 23:48:48.789837 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 8 23:48:48.791080 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 8 23:48:48.792605 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 8 23:48:48.793393 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 8 23:48:48.794923 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 8 23:48:48.818501 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (661) Sep 8 23:48:48.818546 kernel: BTRFS info (device vda6): first mount of filesystem 0ac87192-1b33-43df-818c-9161f04c3e9c Sep 8 23:48:48.819495 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 8 23:48:48.822061 kernel: BTRFS info (device vda6): turning on async discard Sep 8 23:48:48.822099 kernel: BTRFS info (device vda6): enabling free space tree Sep 8 23:48:48.827487 kernel: BTRFS info (device vda6): last unmount of filesystem 0ac87192-1b33-43df-818c-9161f04c3e9c Sep 8 23:48:48.828507 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 8 23:48:48.830593 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 8 23:48:48.892063 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 8 23:48:48.896023 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 8 23:48:48.932412 systemd-networkd[803]: lo: Link UP Sep 8 23:48:48.932424 systemd-networkd[803]: lo: Gained carrier Sep 8 23:48:48.933229 systemd-networkd[803]: Enumeration completed Sep 8 23:48:48.933354 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 8 23:48:48.933726 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:48:48.933730 systemd-networkd[803]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 8 23:48:48.934598 systemd-networkd[803]: eth0: Link UP Sep 8 23:48:48.934687 systemd-networkd[803]: eth0: Gained carrier Sep 8 23:48:48.934697 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:48:48.935180 systemd[1]: Reached target network.target - Network. 
Sep 8 23:48:48.953774 ignition[705]: Ignition 2.21.0 Sep 8 23:48:48.953792 ignition[705]: Stage: fetch-offline Sep 8 23:48:48.953827 ignition[705]: no configs at "/usr/lib/ignition/base.d" Sep 8 23:48:48.953835 ignition[705]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:48:48.956537 systemd-networkd[803]: eth0: DHCPv4 address 10.0.0.118/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 8 23:48:48.954181 ignition[705]: parsed url from cmdline: "" Sep 8 23:48:48.954185 ignition[705]: no config URL provided Sep 8 23:48:48.954191 ignition[705]: reading system config file "/usr/lib/ignition/user.ign" Sep 8 23:48:48.954202 ignition[705]: no config at "/usr/lib/ignition/user.ign" Sep 8 23:48:48.954232 ignition[705]: op(1): [started] loading QEMU firmware config module Sep 8 23:48:48.954237 ignition[705]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 8 23:48:48.959188 ignition[705]: op(1): [finished] loading QEMU firmware config module Sep 8 23:48:49.005220 ignition[705]: parsing config with SHA512: d02231f883ec32dd22bf03b841a1d23893259efa701f0610d0a133e42187c5bbe90c99afe94e360edba5fec3cfc1cc21714db47935170bbf8c764db460713a7b Sep 8 23:48:49.009776 unknown[705]: fetched base config from "system" Sep 8 23:48:49.009789 unknown[705]: fetched user config from "qemu" Sep 8 23:48:49.010297 ignition[705]: fetch-offline: fetch-offline passed Sep 8 23:48:49.012096 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 8 23:48:49.010378 ignition[705]: Ignition finished successfully Sep 8 23:48:49.013296 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 8 23:48:49.014192 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 8 23:48:49.045495 ignition[818]: Ignition 2.21.0 Sep 8 23:48:49.045510 ignition[818]: Stage: kargs Sep 8 23:48:49.045651 ignition[818]: no configs at "/usr/lib/ignition/base.d" Sep 8 23:48:49.045660 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:48:49.048104 ignition[818]: kargs: kargs passed Sep 8 23:48:49.048164 ignition[818]: Ignition finished successfully Sep 8 23:48:49.050322 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 8 23:48:49.052244 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 8 23:48:49.080895 ignition[825]: Ignition 2.21.0 Sep 8 23:48:49.080915 ignition[825]: Stage: disks Sep 8 23:48:49.081067 ignition[825]: no configs at "/usr/lib/ignition/base.d" Sep 8 23:48:49.081076 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:48:49.082872 ignition[825]: disks: disks passed Sep 8 23:48:49.082952 ignition[825]: Ignition finished successfully Sep 8 23:48:49.085674 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 8 23:48:49.086942 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 8 23:48:49.088358 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 8 23:48:49.090080 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 8 23:48:49.091813 systemd[1]: Reached target sysinit.target - System Initialization. Sep 8 23:48:49.093222 systemd[1]: Reached target basic.target - Basic System. Sep 8 23:48:49.095495 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 8 23:48:49.125891 systemd-fsck[835]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 8 23:48:49.130173 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 8 23:48:49.132200 systemd[1]: Mounting sysroot.mount - /sysroot... 
Sep 8 23:48:49.194494 kernel: EXT4-fs (vda9): mounted filesystem 12f0e8f7-98bc-449e-b11f-df07384be687 r/w with ordered data mode. Quota mode: none. Sep 8 23:48:49.195239 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 8 23:48:49.196516 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 8 23:48:49.198483 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 8 23:48:49.200060 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 8 23:48:49.201090 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 8 23:48:49.201128 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 8 23:48:49.201156 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 8 23:48:49.214065 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 8 23:48:49.216729 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 8 23:48:49.220560 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (843) Sep 8 23:48:49.220591 kernel: BTRFS info (device vda6): first mount of filesystem 0ac87192-1b33-43df-818c-9161f04c3e9c Sep 8 23:48:49.220601 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 8 23:48:49.223802 kernel: BTRFS info (device vda6): turning on async discard Sep 8 23:48:49.223849 kernel: BTRFS info (device vda6): enabling free space tree Sep 8 23:48:49.225426 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 8 23:48:49.252340 initrd-setup-root[867]: cut: /sysroot/etc/passwd: No such file or directory Sep 8 23:48:49.256434 initrd-setup-root[874]: cut: /sysroot/etc/group: No such file or directory Sep 8 23:48:49.260132 initrd-setup-root[881]: cut: /sysroot/etc/shadow: No such file or directory Sep 8 23:48:49.263736 initrd-setup-root[888]: cut: /sysroot/etc/gshadow: No such file or directory Sep 8 23:48:49.331188 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 8 23:48:49.333295 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 8 23:48:49.334924 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 8 23:48:49.362516 kernel: BTRFS info (device vda6): last unmount of filesystem 0ac87192-1b33-43df-818c-9161f04c3e9c Sep 8 23:48:49.372359 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 8 23:48:49.381519 ignition[957]: INFO : Ignition 2.21.0 Sep 8 23:48:49.383544 ignition[957]: INFO : Stage: mount Sep 8 23:48:49.383544 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 8 23:48:49.383544 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:48:49.383544 ignition[957]: INFO : mount: mount passed Sep 8 23:48:49.383544 ignition[957]: INFO : Ignition finished successfully Sep 8 23:48:49.385879 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 8 23:48:49.387912 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 8 23:48:49.781630 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 8 23:48:49.783172 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 8 23:48:49.817425 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (969) Sep 8 23:48:49.817487 kernel: BTRFS info (device vda6): first mount of filesystem 0ac87192-1b33-43df-818c-9161f04c3e9c Sep 8 23:48:49.817499 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 8 23:48:49.820529 kernel: BTRFS info (device vda6): turning on async discard Sep 8 23:48:49.820559 kernel: BTRFS info (device vda6): enabling free space tree Sep 8 23:48:49.821933 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 8 23:48:49.862218 ignition[986]: INFO : Ignition 2.21.0 Sep 8 23:48:49.864549 ignition[986]: INFO : Stage: files Sep 8 23:48:49.864549 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 8 23:48:49.864549 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:48:49.867045 ignition[986]: DEBUG : files: compiled without relabeling support, skipping Sep 8 23:48:49.867045 ignition[986]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 8 23:48:49.867045 ignition[986]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 8 23:48:49.870709 ignition[986]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 8 23:48:49.870709 ignition[986]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 8 23:48:49.870709 ignition[986]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 8 23:48:49.870709 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 8 23:48:49.870709 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Sep 8 23:48:49.868755 unknown[986]: wrote ssh authorized keys file for user: core Sep 8 23:48:49.924229 
ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 8 23:48:50.198815 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 8 23:48:50.198815 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 8 23:48:50.198815 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 8 23:48:50.228604 systemd-networkd[803]: eth0: Gained IPv6LL Sep 8 23:48:50.423327 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 8 23:48:50.842621 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 8 23:48:50.844312 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 8 23:48:50.846241 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 8 23:48:50.846241 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 8 23:48:50.846241 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 8 23:48:50.846241 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 8 23:48:50.846241 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 8 23:48:50.846241 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 8 
23:48:50.846241 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 8 23:48:50.861576 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 8 23:48:50.861576 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 8 23:48:50.861576 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 8 23:48:50.861576 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 8 23:48:50.861576 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 8 23:48:50.861576 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Sep 8 23:48:51.141796 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 8 23:48:51.724915 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 8 23:48:51.724915 ignition[986]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 8 23:48:51.728656 ignition[986]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 8 23:48:51.728656 ignition[986]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 8 
23:48:51.728656 ignition[986]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 8 23:48:51.728656 ignition[986]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 8 23:48:51.728656 ignition[986]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 8 23:48:51.728656 ignition[986]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 8 23:48:51.728656 ignition[986]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 8 23:48:51.728656 ignition[986]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 8 23:48:51.747239 ignition[986]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 8 23:48:51.750999 ignition[986]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 8 23:48:51.753774 ignition[986]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 8 23:48:51.753774 ignition[986]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 8 23:48:51.753774 ignition[986]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 8 23:48:51.753774 ignition[986]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 8 23:48:51.753774 ignition[986]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 8 23:48:51.753774 ignition[986]: INFO : files: files passed Sep 8 23:48:51.753774 ignition[986]: INFO : Ignition finished successfully Sep 8 23:48:51.754427 systemd[1]: Finished ignition-files.service - Ignition (files). 
Sep 8 23:48:51.757723 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 8 23:48:51.762121 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 8 23:48:51.776709 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 8 23:48:51.778536 initrd-setup-root-after-ignition[1015]: grep: /sysroot/oem/oem-release: No such file or directory Sep 8 23:48:51.778893 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 8 23:48:51.782527 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 8 23:48:51.784663 initrd-setup-root-after-ignition[1017]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 8 23:48:51.786021 initrd-setup-root-after-ignition[1021]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 8 23:48:51.787191 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 8 23:48:51.788664 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 8 23:48:51.791606 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 8 23:48:51.827888 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 8 23:48:51.828001 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 8 23:48:51.829979 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 8 23:48:51.833923 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 8 23:48:51.835135 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 8 23:48:51.836044 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 8 23:48:51.869537 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Sep 8 23:48:51.872100 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 8 23:48:51.900161 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 8 23:48:51.901430 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:48:51.903570 systemd[1]: Stopped target timers.target - Timer Units. Sep 8 23:48:51.905353 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 8 23:48:51.905522 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 8 23:48:51.908099 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 8 23:48:51.910124 systemd[1]: Stopped target basic.target - Basic System. Sep 8 23:48:51.911804 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 8 23:48:51.913636 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 8 23:48:51.915738 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 8 23:48:51.917678 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 8 23:48:51.919526 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 8 23:48:51.921385 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 8 23:48:51.923358 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 8 23:48:51.925354 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 8 23:48:51.927054 systemd[1]: Stopped target swap.target - Swaps. Sep 8 23:48:51.928520 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 8 23:48:51.928660 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 8 23:48:51.930819 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 8 23:48:51.932518 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Sep 8 23:48:51.934396 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 8 23:48:51.937533 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 8 23:48:51.938640 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 8 23:48:51.938769 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 8 23:48:51.941345 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 8 23:48:51.941489 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 8 23:48:51.943359 systemd[1]: Stopped target paths.target - Path Units. Sep 8 23:48:51.944812 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 8 23:48:51.945525 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 8 23:48:51.946608 systemd[1]: Stopped target slices.target - Slice Units. Sep 8 23:48:51.947986 systemd[1]: Stopped target sockets.target - Socket Units. Sep 8 23:48:51.949387 systemd[1]: iscsid.socket: Deactivated successfully. Sep 8 23:48:51.949487 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 8 23:48:51.951332 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 8 23:48:51.951410 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 8 23:48:51.952808 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 8 23:48:51.952930 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 8 23:48:51.954736 systemd[1]: ignition-files.service: Deactivated successfully. Sep 8 23:48:51.954839 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 8 23:48:51.957267 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 8 23:48:51.958763 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Sep 8 23:48:51.958874 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 8 23:48:51.961456 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 8 23:48:51.965163 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 8 23:48:51.965292 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:48:51.967321 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 8 23:48:51.967424 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 8 23:48:51.972384 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 8 23:48:51.974638 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 8 23:48:51.993131 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 8 23:48:51.997760 ignition[1041]: INFO : Ignition 2.21.0 Sep 8 23:48:51.997760 ignition[1041]: INFO : Stage: umount Sep 8 23:48:51.997760 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 8 23:48:51.997760 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:48:52.009839 ignition[1041]: INFO : umount: umount passed Sep 8 23:48:52.009839 ignition[1041]: INFO : Ignition finished successfully Sep 8 23:48:52.006693 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 8 23:48:52.006819 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 8 23:48:52.011169 systemd[1]: Stopped target network.target - Network. Sep 8 23:48:52.012129 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 8 23:48:52.012207 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 8 23:48:52.013811 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 8 23:48:52.013859 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 8 23:48:52.015393 systemd[1]: ignition-setup.service: Deactivated successfully. 
Sep 8 23:48:52.015439 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 8 23:48:52.017191 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 8 23:48:52.017227 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 8 23:48:52.018866 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 8 23:48:52.021582 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 8 23:48:52.032003 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 8 23:48:52.033561 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 8 23:48:52.041076 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 8 23:48:52.041284 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 8 23:48:52.042824 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 8 23:48:52.048730 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 8 23:48:52.049217 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 8 23:48:52.052984 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 8 23:48:52.053028 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 8 23:48:52.059545 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 8 23:48:52.060323 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 8 23:48:52.060399 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 8 23:48:52.062235 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 8 23:48:52.062332 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 8 23:48:52.064902 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 8 23:48:52.064947 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 8 23:48:52.067767 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 8 23:48:52.067818 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 8 23:48:52.070570 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 8 23:48:52.073960 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 8 23:48:52.074029 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 8 23:48:52.083867 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 8 23:48:52.083976 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 8 23:48:52.086245 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 8 23:48:52.086347 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 8 23:48:52.089749 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 8 23:48:52.090626 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 8 23:48:52.092035 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 8 23:48:52.092075 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 8 23:48:52.094022 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 8 23:48:52.094057 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 8 23:48:52.095996 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 8 23:48:52.096044 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 8 23:48:52.098847 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 8 23:48:52.098896 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 8 23:48:52.101345 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 8 23:48:52.101392 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 8 23:48:52.105093 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 8 23:48:52.106340 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 8 23:48:52.106405 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 8 23:48:52.109746 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 8 23:48:52.109789 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 8 23:48:52.112645 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 8 23:48:52.112693 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 8 23:48:52.117094 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 8 23:48:52.117147 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 8 23:48:52.117181 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 8 23:48:52.117478 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 8 23:48:52.120612 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 8 23:48:52.125922 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 8 23:48:52.126007 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 8 23:48:52.127384 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 8 23:48:52.129828 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 8 23:48:52.155027 systemd[1]: Switching root.
Sep 8 23:48:52.192742 systemd-journald[244]: Journal stopped
Sep 8 23:48:53.001752 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Sep 8 23:48:53.001815 kernel: SELinux: policy capability network_peer_controls=1
Sep 8 23:48:53.001839 kernel: SELinux: policy capability open_perms=1
Sep 8 23:48:53.001848 kernel: SELinux: policy capability extended_socket_class=1
Sep 8 23:48:53.001858 kernel: SELinux: policy capability always_check_network=0
Sep 8 23:48:53.001872 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 8 23:48:53.001883 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 8 23:48:53.001897 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 8 23:48:53.001910 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 8 23:48:53.001919 kernel: SELinux: policy capability userspace_initial_context=0
Sep 8 23:48:53.001933 kernel: audit: type=1403 audit(1757375332.401:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 8 23:48:53.001962 systemd[1]: Successfully loaded SELinux policy in 64.861ms.
Sep 8 23:48:53.001975 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.252ms.
Sep 8 23:48:53.001987 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 8 23:48:53.001999 systemd[1]: Detected virtualization kvm.
Sep 8 23:48:53.002010 systemd[1]: Detected architecture arm64.
Sep 8 23:48:53.002020 systemd[1]: Detected first boot.
Sep 8 23:48:53.002032 systemd[1]: Initializing machine ID from VM UUID.
Sep 8 23:48:53.002043 zram_generator::config[1089]: No configuration found.
Sep 8 23:48:53.002060 kernel: NET: Registered PF_VSOCK protocol family
Sep 8 23:48:53.002072 systemd[1]: Populated /etc with preset unit settings.
Sep 8 23:48:53.002084 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 8 23:48:53.002095 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 8 23:48:53.002105 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 8 23:48:53.002115 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 8 23:48:53.002127 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 8 23:48:53.002137 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 8 23:48:53.002147 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 8 23:48:53.002158 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 8 23:48:53.002168 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 8 23:48:53.002178 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 8 23:48:53.002190 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 8 23:48:53.002200 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 8 23:48:53.002210 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 8 23:48:53.002222 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 8 23:48:53.002232 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 8 23:48:53.002242 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 8 23:48:53.002254 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 8 23:48:53.002265 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 8 23:48:53.002275 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 8 23:48:53.002285 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 8 23:48:53.002422 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 8 23:48:53.002440 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 8 23:48:53.002452 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 8 23:48:53.002475 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 8 23:48:53.002487 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 8 23:48:53.002497 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 8 23:48:53.002507 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 8 23:48:53.002517 systemd[1]: Reached target slices.target - Slice Units.
Sep 8 23:48:53.002527 systemd[1]: Reached target swap.target - Swaps.
Sep 8 23:48:53.002537 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 8 23:48:53.002550 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 8 23:48:53.002560 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 8 23:48:53.002570 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 8 23:48:53.002580 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 8 23:48:53.002590 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 8 23:48:53.002600 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 8 23:48:53.002610 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 8 23:48:53.002620 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 8 23:48:53.002630 systemd[1]: Mounting media.mount - External Media Directory...
Sep 8 23:48:53.002641 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 8 23:48:53.002651 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 8 23:48:53.002661 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 8 23:48:53.002672 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 8 23:48:53.002683 systemd[1]: Reached target machines.target - Containers.
Sep 8 23:48:53.002694 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 8 23:48:53.002704 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 8 23:48:53.002715 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 8 23:48:53.002726 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 8 23:48:53.002738 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 8 23:48:53.002748 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 8 23:48:53.002758 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 8 23:48:53.002769 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 8 23:48:53.002779 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 8 23:48:53.002789 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 8 23:48:53.002799 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 8 23:48:53.002809 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 8 23:48:53.002820 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 8 23:48:53.002835 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 8 23:48:53.002846 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 8 23:48:53.002855 kernel: fuse: init (API version 7.41)
Sep 8 23:48:53.002865 kernel: loop: module loaded
Sep 8 23:48:53.002874 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 8 23:48:53.002883 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 8 23:48:53.002893 kernel: ACPI: bus type drm_connector registered
Sep 8 23:48:53.002904 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 8 23:48:53.002914 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 8 23:48:53.002924 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 8 23:48:53.002934 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 8 23:48:53.002945 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 8 23:48:53.002957 systemd[1]: Stopped verity-setup.service.
Sep 8 23:48:53.002967 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 8 23:48:53.002978 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 8 23:48:53.003039 systemd-journald[1158]: Collecting audit messages is disabled.
Sep 8 23:48:53.003066 systemd[1]: Mounted media.mount - External Media Directory.
Sep 8 23:48:53.003077 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 8 23:48:53.003092 systemd-journald[1158]: Journal started
Sep 8 23:48:53.003116 systemd-journald[1158]: Runtime Journal (/run/log/journal/9c2233fb36ff40cda00c2a0acc4bd096) is 6M, max 48.5M, 42.4M free.
Sep 8 23:48:52.775004 systemd[1]: Queued start job for default target multi-user.target.
Sep 8 23:48:52.799480 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 8 23:48:52.799889 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 8 23:48:53.005720 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 8 23:48:53.006390 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 8 23:48:53.007560 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 8 23:48:53.008720 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 8 23:48:53.011512 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 8 23:48:53.012766 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 8 23:48:53.012934 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 8 23:48:53.014322 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 8 23:48:53.014520 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 8 23:48:53.015985 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 8 23:48:53.016134 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 8 23:48:53.018788 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 8 23:48:53.018953 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 8 23:48:53.020520 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 8 23:48:53.020672 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 8 23:48:53.022022 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 8 23:48:53.022188 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 8 23:48:53.023537 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 8 23:48:53.025925 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 8 23:48:53.027590 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 8 23:48:53.029245 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 8 23:48:53.045940 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 8 23:48:53.049599 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 8 23:48:53.052053 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 8 23:48:53.053832 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 8 23:48:53.053895 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 8 23:48:53.056819 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 8 23:48:53.060415 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 8 23:48:53.061520 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 8 23:48:53.063013 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 8 23:48:53.065201 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 8 23:48:53.066364 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 8 23:48:53.067353 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 8 23:48:53.068687 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 8 23:48:53.073638 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 8 23:48:53.075441 systemd-journald[1158]: Time spent on flushing to /var/log/journal/9c2233fb36ff40cda00c2a0acc4bd096 is 38.130ms for 891 entries.
Sep 8 23:48:53.075441 systemd-journald[1158]: System Journal (/var/log/journal/9c2233fb36ff40cda00c2a0acc4bd096) is 8M, max 195.6M, 187.6M free.
Sep 8 23:48:53.127571 systemd-journald[1158]: Received client request to flush runtime journal.
Sep 8 23:48:53.127672 kernel: loop0: detected capacity change from 0 to 100608
Sep 8 23:48:53.076059 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 8 23:48:53.078487 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 8 23:48:53.132637 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 8 23:48:53.082073 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 8 23:48:53.083834 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 8 23:48:53.085333 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 8 23:48:53.101135 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 8 23:48:53.102942 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 8 23:48:53.105113 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 8 23:48:53.110704 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 8 23:48:53.122036 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 8 23:48:53.128737 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 8 23:48:53.134497 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 8 23:48:53.142900 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 8 23:48:53.156497 kernel: loop1: detected capacity change from 0 to 119320
Sep 8 23:48:53.155949 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Sep 8 23:48:53.155967 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Sep 8 23:48:53.162511 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 8 23:48:53.198063 kernel: loop2: detected capacity change from 0 to 211168
Sep 8 23:48:53.221512 kernel: loop3: detected capacity change from 0 to 100608
Sep 8 23:48:53.226543 kernel: loop4: detected capacity change from 0 to 119320
Sep 8 23:48:53.232643 kernel: loop5: detected capacity change from 0 to 211168
Sep 8 23:48:53.237800 (sd-merge)[1229]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 8 23:48:53.238586 (sd-merge)[1229]: Merged extensions into '/usr'.
Sep 8 23:48:53.242031 systemd[1]: Reload requested from client PID 1206 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 8 23:48:53.242058 systemd[1]: Reloading...
Sep 8 23:48:53.294510 zram_generator::config[1263]: No configuration found.
Sep 8 23:48:53.362563 ldconfig[1201]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 8 23:48:53.439753 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 8 23:48:53.440097 systemd[1]: Reloading finished in 197 ms.
Sep 8 23:48:53.470269 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 8 23:48:53.473646 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 8 23:48:53.494815 systemd[1]: Starting ensure-sysext.service...
Sep 8 23:48:53.496553 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 8 23:48:53.506137 systemd[1]: Reload requested from client PID 1290 ('systemctl') (unit ensure-sysext.service)...
Sep 8 23:48:53.506153 systemd[1]: Reloading...
Sep 8 23:48:53.515136 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 8 23:48:53.515168 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 8 23:48:53.515409 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 8 23:48:53.515632 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 8 23:48:53.516261 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 8 23:48:53.516571 systemd-tmpfiles[1291]: ACLs are not supported, ignoring.
Sep 8 23:48:53.516633 systemd-tmpfiles[1291]: ACLs are not supported, ignoring.
Sep 8 23:48:53.518849 systemd-tmpfiles[1291]: Detected autofs mount point /boot during canonicalization of boot.
Sep 8 23:48:53.518863 systemd-tmpfiles[1291]: Skipping /boot
Sep 8 23:48:53.524714 systemd-tmpfiles[1291]: Detected autofs mount point /boot during canonicalization of boot.
Sep 8 23:48:53.524729 systemd-tmpfiles[1291]: Skipping /boot
Sep 8 23:48:53.561488 zram_generator::config[1318]: No configuration found.
Sep 8 23:48:53.693870 systemd[1]: Reloading finished in 187 ms.
Sep 8 23:48:53.712365 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 8 23:48:53.717844 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 8 23:48:53.726483 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 8 23:48:53.728612 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 8 23:48:53.736799 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 8 23:48:53.742639 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 8 23:48:53.745095 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 8 23:48:53.748672 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 8 23:48:53.754606 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 8 23:48:53.764689 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 8 23:48:53.769729 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 8 23:48:53.772033 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 8 23:48:53.773247 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 8 23:48:53.773370 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 8 23:48:53.775545 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 8 23:48:53.777093 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 8 23:48:53.785685 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 8 23:48:53.789517 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 8 23:48:53.791421 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 8 23:48:53.791687 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 8 23:48:53.792802 systemd-udevd[1363]: Using default interface naming scheme 'v255'.
Sep 8 23:48:53.793080 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 8 23:48:53.793157 augenrules[1383]: No rules
Sep 8 23:48:53.793226 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 8 23:48:53.794775 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 8 23:48:53.794948 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 8 23:48:53.803804 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 8 23:48:53.808480 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 8 23:48:53.809579 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 8 23:48:53.811262 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 8 23:48:53.826727 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 8 23:48:53.828762 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 8 23:48:53.834890 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 8 23:48:53.835877 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 8 23:48:53.835994 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 8 23:48:53.837475 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 8 23:48:53.840989 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 8 23:48:53.842640 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 8 23:48:53.843825 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 8 23:48:53.847442 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 8 23:48:53.847617 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 8 23:48:53.851968 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 8 23:48:53.852329 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 8 23:48:53.856570 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 8 23:48:53.856743 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 8 23:48:53.858195 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 8 23:48:53.858346 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 8 23:48:53.863023 systemd[1]: Finished ensure-sysext.service.
Sep 8 23:48:53.873980 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 8 23:48:53.884065 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 8 23:48:53.884238 augenrules[1394]: /sbin/augenrules: No change
Sep 8 23:48:53.890836 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 8 23:48:53.892609 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 8 23:48:53.892675 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 8 23:48:53.894920 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 8 23:48:53.899780 augenrules[1457]: No rules
Sep 8 23:48:53.901888 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 8 23:48:53.902556 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 8 23:48:53.921787 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 8 23:48:53.924824 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 8 23:48:53.951621 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 8 23:48:53.960785 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 8 23:48:54.016246 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 8 23:48:54.074523 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 8 23:48:54.092624 systemd-networkd[1452]: lo: Link UP
Sep 8 23:48:54.092943 systemd-networkd[1452]: lo: Gained carrier
Sep 8 23:48:54.093872 systemd-networkd[1452]: Enumeration completed
Sep 8 23:48:54.094071 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 8 23:48:54.094498 systemd-networkd[1452]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 8 23:48:54.094575 systemd-networkd[1452]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 8 23:48:54.095137 systemd-networkd[1452]: eth0: Link UP
Sep 8 23:48:54.095343 systemd-networkd[1452]: eth0: Gained carrier
Sep 8 23:48:54.095411 systemd-networkd[1452]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 8 23:48:54.096276 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 8 23:48:54.098662 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 8 23:48:54.104306 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 8 23:48:54.105585 systemd[1]: Reached target time-set.target - System Time Set.
Sep 8 23:48:54.114532 systemd-networkd[1452]: eth0: DHCPv4 address 10.0.0.118/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 8 23:48:54.117564 systemd-timesyncd[1456]: Network configuration changed, trying to establish connection.
Sep 8 23:48:54.118573 systemd-timesyncd[1456]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 8 23:48:54.118620 systemd-timesyncd[1456]: Initial clock synchronization to Mon 2025-09-08 23:48:54.098542 UTC.
Sep 8 23:48:54.120726 systemd-resolved[1357]: Positive Trust Anchors:
Sep 8 23:48:54.120745 systemd-resolved[1357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 8 23:48:54.120777 systemd-resolved[1357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 8 23:48:54.121148 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 8 23:48:54.126967 systemd-resolved[1357]: Defaulting to hostname 'linux'.
Sep 8 23:48:54.128316 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 8 23:48:54.129369 systemd[1]: Reached target network.target - Network.
Sep 8 23:48:54.130155 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 8 23:48:54.131279 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 8 23:48:54.132235 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 8 23:48:54.133284 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 8 23:48:54.134513 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 8 23:48:54.135499 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 8 23:48:54.136436 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 8 23:48:54.137348 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 8 23:48:54.137383 systemd[1]: Reached target paths.target - Path Units.
Sep 8 23:48:54.138150 systemd[1]: Reached target timers.target - Timer Units.
Sep 8 23:48:54.139709 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 8 23:48:54.142004 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 8 23:48:54.144896 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 8 23:48:54.146089 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 8 23:48:54.147162 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 8 23:48:54.153362 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 8 23:48:54.154610 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 8 23:48:54.156116 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 8 23:48:54.157127 systemd[1]: Reached target sockets.target - Socket Units.
Sep 8 23:48:54.157943 systemd[1]: Reached target basic.target - Basic System.
Sep 8 23:48:54.158713 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 8 23:48:54.158743 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 8 23:48:54.159714 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 8 23:48:54.161555 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 8 23:48:54.163281 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 8 23:48:54.165574 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 8 23:48:54.167551 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 8 23:48:54.168353 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 8 23:48:54.169531 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 8 23:48:54.172178 jq[1505]: false
Sep 8 23:48:54.172655 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 8 23:48:54.175648 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 8 23:48:54.178641 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 8 23:48:54.180168 extend-filesystems[1506]: Found /dev/vda6
Sep 8 23:48:54.184304 extend-filesystems[1506]: Found /dev/vda9
Sep 8 23:48:54.185590 extend-filesystems[1506]: Checking size of /dev/vda9
Sep 8 23:48:54.186794 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 8 23:48:54.188886 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 8 23:48:54.189383 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 8 23:48:54.190885 systemd[1]: Starting update-engine.service - Update Engine...
Sep 8 23:48:54.192554 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 8 23:48:54.196548 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 8 23:48:54.198877 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 8 23:48:54.199065 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 8 23:48:54.199323 systemd[1]: motdgen.service: Deactivated successfully.
Sep 8 23:48:54.199506 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 8 23:48:54.201188 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 8 23:48:54.201363 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 8 23:48:54.220561 update_engine[1524]: I20250908 23:48:54.220278 1524 main.cc:92] Flatcar Update Engine starting
Sep 8 23:48:54.221028 extend-filesystems[1506]: Resized partition /dev/vda9
Sep 8 23:48:54.225445 extend-filesystems[1542]: resize2fs 1.47.2 (1-Jan-2025)
Sep 8 23:48:54.226436 (ntainerd)[1537]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 8 23:48:54.231084 jq[1526]: true
Sep 8 23:48:54.235507 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 8 23:48:54.236198 tar[1531]: linux-arm64/LICENSE
Sep 8 23:48:54.236485 tar[1531]: linux-arm64/helm
Sep 8 23:48:54.262044 jq[1545]: true
Sep 8 23:48:54.268201 dbus-daemon[1503]: [system] SELinux support is enabled
Sep 8 23:48:54.269055 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 8 23:48:54.269588 systemd-logind[1517]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 8 23:48:54.270344 systemd-logind[1517]: New seat seat0.
Sep 8 23:48:54.272735 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 8 23:48:54.273971 update_engine[1524]: I20250908 23:48:54.273917 1524 update_check_scheduler.cc:74] Next update check in 2m12s
Sep 8 23:48:54.274939 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 8 23:48:54.274968 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 8 23:48:54.275710 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 8 23:48:54.276646 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 8 23:48:54.276672 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 8 23:48:54.277425 dbus-daemon[1503]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 8 23:48:54.277710 systemd[1]: Started update-engine.service - Update Engine.
Sep 8 23:48:54.283673 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 8 23:48:54.291230 extend-filesystems[1542]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 8 23:48:54.291230 extend-filesystems[1542]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 8 23:48:54.291230 extend-filesystems[1542]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 8 23:48:54.299131 extend-filesystems[1506]: Resized filesystem in /dev/vda9
Sep 8 23:48:54.292505 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 8 23:48:54.294840 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 8 23:48:54.324879 bash[1565]: Updated "/home/core/.ssh/authorized_keys"
Sep 8 23:48:54.326099 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 8 23:48:54.330179 locksmithd[1549]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 8 23:48:54.331123 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 8 23:48:54.416812 containerd[1537]: time="2025-09-08T23:48:54Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 8 23:48:54.417778 containerd[1537]: time="2025-09-08T23:48:54.417748520Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 8 23:48:54.429979 containerd[1537]: time="2025-09-08T23:48:54.429925520Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.8µs"
Sep 8 23:48:54.429979 containerd[1537]: time="2025-09-08T23:48:54.429963280Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 8 23:48:54.429979 containerd[1537]: time="2025-09-08T23:48:54.429986200Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 8 23:48:54.430154 containerd[1537]: time="2025-09-08T23:48:54.430135000Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 8 23:48:54.430178 containerd[1537]: time="2025-09-08T23:48:54.430160040Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 8 23:48:54.430196 containerd[1537]: time="2025-09-08T23:48:54.430183520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 8 23:48:54.430261 containerd[1537]: time="2025-09-08T23:48:54.430244640Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 8 23:48:54.430281 containerd[1537]: time="2025-09-08T23:48:54.430262840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 8 23:48:54.430674 containerd[1537]: time="2025-09-08T23:48:54.430640360Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 8 23:48:54.430746 containerd[1537]: time="2025-09-08T23:48:54.430664920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 8 23:48:54.430746 containerd[1537]: time="2025-09-08T23:48:54.430728120Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 8 23:48:54.430746 containerd[1537]: time="2025-09-08T23:48:54.430738840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 8 23:48:54.430910 containerd[1537]: time="2025-09-08T23:48:54.430888200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 8 23:48:54.431328 containerd[1537]: time="2025-09-08T23:48:54.431285440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 8 23:48:54.431402 containerd[1537]: time="2025-09-08T23:48:54.431338720Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 8 23:48:54.431423 containerd[1537]: time="2025-09-08T23:48:54.431402920Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 8 23:48:54.431550 containerd[1537]: time="2025-09-08T23:48:54.431529880Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 8 23:48:54.432037 containerd[1537]: time="2025-09-08T23:48:54.432010480Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 8 23:48:54.432197 containerd[1537]: time="2025-09-08T23:48:54.432179480Z" level=info msg="metadata content store policy set" policy=shared
Sep 8 23:48:54.435536 containerd[1537]: time="2025-09-08T23:48:54.435497960Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 8 23:48:54.435572 containerd[1537]: time="2025-09-08T23:48:54.435553800Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 8 23:48:54.435572 containerd[1537]: time="2025-09-08T23:48:54.435568080Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 8 23:48:54.435621 containerd[1537]: time="2025-09-08T23:48:54.435580840Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 8 23:48:54.435621 containerd[1537]: time="2025-09-08T23:48:54.435592800Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 8 23:48:54.435621 containerd[1537]: time="2025-09-08T23:48:54.435617600Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 8 23:48:54.435672 containerd[1537]: time="2025-09-08T23:48:54.435629280Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 8 23:48:54.435672 containerd[1537]: time="2025-09-08T23:48:54.435643120Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 8 23:48:54.435672 containerd[1537]: time="2025-09-08T23:48:54.435654320Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 8 23:48:54.435672 containerd[1537]: time="2025-09-08T23:48:54.435667000Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 8 23:48:54.435738 containerd[1537]: time="2025-09-08T23:48:54.435676560Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 8 23:48:54.435738 containerd[1537]: time="2025-09-08T23:48:54.435689040Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 8 23:48:54.435976 containerd[1537]: time="2025-09-08T23:48:54.435939960Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 8 23:48:54.436029 containerd[1537]: time="2025-09-08T23:48:54.436012120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 8 23:48:54.436058 containerd[1537]: time="2025-09-08T23:48:54.436036240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 8 23:48:54.436058 containerd[1537]: time="2025-09-08T23:48:54.436047960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 8 23:48:54.436095 containerd[1537]: time="2025-09-08T23:48:54.436057320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 8 23:48:54.436095 containerd[1537]: time="2025-09-08T23:48:54.436067960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 8 23:48:54.436300 containerd[1537]: time="2025-09-08T23:48:54.436271920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 8 23:48:54.436325 containerd[1537]: time="2025-09-08T23:48:54.436303960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 8 23:48:54.436325 containerd[1537]: time="2025-09-08T23:48:54.436317360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 8 23:48:54.436361 containerd[1537]: time="2025-09-08T23:48:54.436327640Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 8 23:48:54.436361 containerd[1537]: time="2025-09-08T23:48:54.436341200Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 8 23:48:54.436552 containerd[1537]: time="2025-09-08T23:48:54.436537960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 8 23:48:54.436573 containerd[1537]: time="2025-09-08T23:48:54.436558440Z" level=info msg="Start snapshots syncer"
Sep 8 23:48:54.436703 containerd[1537]: time="2025-09-08T23:48:54.436685880Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 8 23:48:54.437295 containerd[1537]: time="2025-09-08T23:48:54.437238920Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 8 23:48:54.437380 containerd[1537]: time="2025-09-08T23:48:54.437354920Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 8 23:48:54.437594 containerd[1537]: time="2025-09-08T23:48:54.437573960Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 8 23:48:54.437912 containerd[1537]: time="2025-09-08T23:48:54.437887840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 8 23:48:54.437980 containerd[1537]: time="2025-09-08T23:48:54.437921840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 8 23:48:54.438007 containerd[1537]: time="2025-09-08T23:48:54.437989240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 8 23:48:54.438026 containerd[1537]: time="2025-09-08T23:48:54.438007080Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 8 23:48:54.438074 containerd[1537]: time="2025-09-08T23:48:54.438057840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 8 23:48:54.438094 containerd[1537]: time="2025-09-08T23:48:54.438077840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 8 23:48:54.438094 containerd[1537]: time="2025-09-08T23:48:54.438090640Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 8 23:48:54.438127 containerd[1537]: time="2025-09-08T23:48:54.438114080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 8 23:48:54.438176 containerd[1537]: time="2025-09-08T23:48:54.438161560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 8 23:48:54.438198 containerd[1537]: time="2025-09-08T23:48:54.438181440Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 8 23:48:54.438371 containerd[1537]: time="2025-09-08T23:48:54.438348440Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 8 23:48:54.438467 containerd[1537]: time="2025-09-08T23:48:54.438452240Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 8 23:48:54.438561 containerd[1537]: time="2025-09-08T23:48:54.438543280Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 8 23:48:54.438580 containerd[1537]: time="2025-09-08T23:48:54.438566200Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 8 23:48:54.438580 containerd[1537]: time="2025-09-08T23:48:54.438576120Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 8 23:48:54.438619 containerd[1537]: time="2025-09-08T23:48:54.438586880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 8 23:48:54.438619 containerd[1537]: time="2025-09-08T23:48:54.438597800Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 8 23:48:54.438728 containerd[1537]: time="2025-09-08T23:48:54.438713720Z" level=info msg="runtime interface created"
Sep 8 23:48:54.438747 containerd[1537]: time="2025-09-08T23:48:54.438726680Z" level=info msg="created NRI interface"
Sep 8 23:48:54.438747 containerd[1537]: time="2025-09-08T23:48:54.438738080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 8 23:48:54.438804 containerd[1537]: time="2025-09-08T23:48:54.438750000Z" level=info msg="Connect containerd service"
Sep 8 23:48:54.438842 containerd[1537]: time="2025-09-08T23:48:54.438831480Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 8 23:48:54.440303 containerd[1537]: time="2025-09-08T23:48:54.440210040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 8 23:48:54.509596 containerd[1537]: time="2025-09-08T23:48:54.509187280Z" level=info msg="Start subscribing containerd event"
Sep 8 23:48:54.509596 containerd[1537]: time="2025-09-08T23:48:54.509277360Z" level=info msg="Start recovering state"
Sep 8 23:48:54.509596 containerd[1537]: time="2025-09-08T23:48:54.509372120Z" level=info msg="Start event monitor"
Sep 8 23:48:54.509596 containerd[1537]: time="2025-09-08T23:48:54.509387400Z" level=info msg="Start cni network conf syncer for default"
Sep 8 23:48:54.509596 containerd[1537]: time="2025-09-08T23:48:54.509397320Z" level=info msg="Start streaming server"
Sep 8 23:48:54.509596 containerd[1537]: time="2025-09-08T23:48:54.509406480Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 8 23:48:54.509596 containerd[1537]: time="2025-09-08T23:48:54.509413640Z" level=info msg="runtime interface starting up..."
Sep 8 23:48:54.509596 containerd[1537]: time="2025-09-08T23:48:54.509419640Z" level=info msg="starting plugins..."
Sep 8 23:48:54.509596 containerd[1537]: time="2025-09-08T23:48:54.509434080Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 8 23:48:54.510696 containerd[1537]: time="2025-09-08T23:48:54.510661360Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 8 23:48:54.510816 containerd[1537]: time="2025-09-08T23:48:54.510801480Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 8 23:48:54.510941 containerd[1537]: time="2025-09-08T23:48:54.510927760Z" level=info msg="containerd successfully booted in 0.094475s"
Sep 8 23:48:54.511041 systemd[1]: Started containerd.service - containerd container runtime.
Sep 8 23:48:54.557166 tar[1531]: linux-arm64/README.md
Sep 8 23:48:54.578510 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 8 23:48:54.808276 sshd_keygen[1532]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 8 23:48:54.827552 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 8 23:48:54.829981 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 8 23:48:54.848516 systemd[1]: issuegen.service: Deactivated successfully.
Sep 8 23:48:54.849565 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 8 23:48:54.851940 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 8 23:48:54.880846 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 8 23:48:54.883087 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 8 23:48:54.884929 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 8 23:48:54.886134 systemd[1]: Reached target getty.target - Login Prompts.
Sep 8 23:48:56.052601 systemd-networkd[1452]: eth0: Gained IPv6LL
Sep 8 23:48:56.056508 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 8 23:48:56.057875 systemd[1]: Reached target network-online.target - Network is Online.
Sep 8 23:48:56.060032 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 8 23:48:56.062331 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 8 23:48:56.078995 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 8 23:48:56.093128 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 8 23:48:56.093322 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 8 23:48:56.095081 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 8 23:48:56.097880 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 8 23:48:56.643199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 8 23:48:56.644735 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 8 23:48:56.646238 systemd[1]: Startup finished in 2.002s (kernel) + 5.769s (initrd) + 4.310s (userspace) = 12.081s.
Sep 8 23:48:56.647142 (kubelet)[1634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 8 23:48:57.021925 kubelet[1634]: E0908 23:48:57.021801 1634 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 8 23:48:57.024492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 8 23:48:57.024623 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 8 23:48:57.025015 systemd[1]: kubelet.service: Consumed 760ms CPU time, 255.9M memory peak.
Sep 8 23:48:59.009921 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 8 23:48:59.011758 systemd[1]: Started sshd@0-10.0.0.118:22-10.0.0.1:36472.service - OpenSSH per-connection server daemon (10.0.0.1:36472).
Sep 8 23:48:59.098083 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 36472 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic
Sep 8 23:48:59.099955 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:48:59.111604 systemd-logind[1517]: New session 1 of user core.
Sep 8 23:48:59.113457 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 8 23:48:59.114643 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 8 23:48:59.140056 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 8 23:48:59.146855 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 8 23:48:59.162553 (systemd)[1652]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 8 23:48:59.164672 systemd-logind[1517]: New session c1 of user core.
Sep 8 23:48:59.282658 systemd[1652]: Queued start job for default target default.target.
Sep 8 23:48:59.295346 systemd[1652]: Created slice app.slice - User Application Slice.
Sep 8 23:48:59.295376 systemd[1652]: Reached target paths.target - Paths.
Sep 8 23:48:59.295412 systemd[1652]: Reached target timers.target - Timers.
Sep 8 23:48:59.296560 systemd[1652]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 8 23:48:59.309020 systemd[1652]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 8 23:48:59.309121 systemd[1652]: Reached target sockets.target - Sockets.
Sep 8 23:48:59.309162 systemd[1652]: Reached target basic.target - Basic System.
Sep 8 23:48:59.309188 systemd[1652]: Reached target default.target - Main User Target.
Sep 8 23:48:59.309212 systemd[1652]: Startup finished in 139ms.
Sep 8 23:48:59.309406 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 8 23:48:59.310762 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 8 23:48:59.374210 systemd[1]: Started sshd@1-10.0.0.118:22-10.0.0.1:36478.service - OpenSSH per-connection server daemon (10.0.0.1:36478).
Sep 8 23:48:59.443186 sshd[1663]: Accepted publickey for core from 10.0.0.1 port 36478 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic
Sep 8 23:48:59.445354 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:48:59.455592 systemd-logind[1517]: New session 2 of user core.
Sep 8 23:48:59.465677 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 8 23:48:59.520882 sshd[1666]: Connection closed by 10.0.0.1 port 36478
Sep 8 23:48:59.521412 sshd-session[1663]: pam_unix(sshd:session): session closed for user core
Sep 8 23:48:59.536440 systemd[1]: sshd@1-10.0.0.118:22-10.0.0.1:36478.service: Deactivated successfully.
Sep 8 23:48:59.538203 systemd[1]: session-2.scope: Deactivated successfully.
Sep 8 23:48:59.539186 systemd-logind[1517]: Session 2 logged out. Waiting for processes to exit.
Sep 8 23:48:59.540980 systemd[1]: Started sshd@2-10.0.0.118:22-10.0.0.1:36480.service - OpenSSH per-connection server daemon (10.0.0.1:36480).
Sep 8 23:48:59.542118 systemd-logind[1517]: Removed session 2.
Sep 8 23:48:59.606144 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 36480 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic
Sep 8 23:48:59.607401 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:48:59.611537 systemd-logind[1517]: New session 3 of user core.
Sep 8 23:48:59.624637 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 8 23:48:59.672188 sshd[1675]: Connection closed by 10.0.0.1 port 36480
Sep 8 23:48:59.672685 sshd-session[1672]: pam_unix(sshd:session): session closed for user core
Sep 8 23:48:59.686421 systemd[1]: sshd@2-10.0.0.118:22-10.0.0.1:36480.service: Deactivated successfully.
Sep 8 23:48:59.688985 systemd[1]: session-3.scope: Deactivated successfully.
Sep 8 23:48:59.691283 systemd-logind[1517]: Session 3 logged out. Waiting for processes to exit.
Sep 8 23:48:59.694260 systemd[1]: Started sshd@3-10.0.0.118:22-10.0.0.1:36488.service - OpenSSH per-connection server daemon (10.0.0.1:36488).
Sep 8 23:48:59.694727 systemd-logind[1517]: Removed session 3.
Sep 8 23:48:59.751177 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 36488 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic
Sep 8 23:48:59.752439 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:48:59.756274 systemd-logind[1517]: New session 4 of user core.
Sep 8 23:48:59.763650 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 8 23:48:59.816226 sshd[1684]: Connection closed by 10.0.0.1 port 36488
Sep 8 23:48:59.817279 sshd-session[1681]: pam_unix(sshd:session): session closed for user core
Sep 8 23:48:59.829294 systemd[1]: sshd@3-10.0.0.118:22-10.0.0.1:36488.service: Deactivated successfully.
Sep 8 23:48:59.831667 systemd[1]: session-4.scope: Deactivated successfully.
Sep 8 23:48:59.832255 systemd-logind[1517]: Session 4 logged out. Waiting for processes to exit.
Sep 8 23:48:59.835722 systemd[1]: Started sshd@4-10.0.0.118:22-10.0.0.1:36494.service - OpenSSH per-connection server daemon (10.0.0.1:36494).
Sep 8 23:48:59.836818 systemd-logind[1517]: Removed session 4.
Sep 8 23:48:59.891742 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 36494 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic
Sep 8 23:48:59.892978 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:48:59.897528 systemd-logind[1517]: New session 5 of user core.
Sep 8 23:48:59.903632 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 8 23:48:59.962449 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 8 23:48:59.962731 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 8 23:48:59.974268 sudo[1694]: pam_unix(sudo:session): session closed for user root
Sep 8 23:48:59.975605 sshd[1693]: Connection closed by 10.0.0.1 port 36494
Sep 8 23:48:59.975952 sshd-session[1690]: pam_unix(sshd:session): session closed for user core
Sep 8 23:48:59.990439 systemd[1]: sshd@4-10.0.0.118:22-10.0.0.1:36494.service: Deactivated successfully.
Sep 8 23:48:59.992696 systemd[1]: session-5.scope: Deactivated successfully.
Sep 8 23:48:59.993347 systemd-logind[1517]: Session 5 logged out. Waiting for processes to exit.
Sep 8 23:48:59.995366 systemd[1]: Started sshd@5-10.0.0.118:22-10.0.0.1:37270.service - OpenSSH per-connection server daemon (10.0.0.1:37270).
Sep 8 23:48:59.995808 systemd-logind[1517]: Removed session 5.
Sep 8 23:49:00.054360 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 37270 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic
Sep 8 23:49:00.055706 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:49:00.060315 systemd-logind[1517]: New session 6 of user core.
Sep 8 23:49:00.066636 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 8 23:49:00.116987 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 8 23:49:00.117518 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 8 23:49:00.121959 sudo[1705]: pam_unix(sudo:session): session closed for user root
Sep 8 23:49:00.126529 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 8 23:49:00.126799 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 8 23:49:00.134610 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 8 23:49:00.174915 augenrules[1727]: No rules
Sep 8 23:49:00.175695 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 8 23:49:00.176028 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 8 23:49:00.176933 sudo[1704]: pam_unix(sudo:session): session closed for user root
Sep 8 23:49:00.178241 sshd[1703]: Connection closed by 10.0.0.1 port 37270
Sep 8 23:49:00.178617 sshd-session[1700]: pam_unix(sshd:session): session closed for user core
Sep 8 23:49:00.187263 systemd[1]: sshd@5-10.0.0.118:22-10.0.0.1:37270.service: Deactivated successfully.
Sep 8 23:49:00.188654 systemd[1]: session-6.scope: Deactivated successfully.
Sep 8 23:49:00.189584 systemd-logind[1517]: Session 6 logged out. Waiting for processes to exit.
Sep 8 23:49:00.193195 systemd[1]: Started sshd@6-10.0.0.118:22-10.0.0.1:37272.service - OpenSSH per-connection server daemon (10.0.0.1:37272).
Sep 8 23:49:00.194217 systemd-logind[1517]: Removed session 6.
Sep 8 23:49:00.245922 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 37272 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic
Sep 8 23:49:00.247176 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:49:00.251546 systemd-logind[1517]: New session 7 of user core.
Sep 8 23:49:00.260631 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 8 23:49:00.312043 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 8 23:49:00.312624 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 8 23:49:00.574174 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 8 23:49:00.586785 (dockerd)[1760]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 8 23:49:00.783705 dockerd[1760]: time="2025-09-08T23:49:00.783640325Z" level=info msg="Starting up"
Sep 8 23:49:00.784442 dockerd[1760]: time="2025-09-08T23:49:00.784422739Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 8 23:49:00.796020 dockerd[1760]: time="2025-09-08T23:49:00.795970519Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 8 23:49:00.936070 dockerd[1760]: time="2025-09-08T23:49:00.935949053Z" level=info msg="Loading containers: start."
Sep 8 23:49:00.946501 kernel: Initializing XFRM netlink socket
Sep 8 23:49:01.146716 systemd-networkd[1452]: docker0: Link UP
Sep 8 23:49:01.152528 dockerd[1760]: time="2025-09-08T23:49:01.152162503Z" level=info msg="Loading containers: done."
Sep 8 23:49:01.168312 dockerd[1760]: time="2025-09-08T23:49:01.167966295Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 8 23:49:01.168312 dockerd[1760]: time="2025-09-08T23:49:01.168060176Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 8 23:49:01.168312 dockerd[1760]: time="2025-09-08T23:49:01.168157014Z" level=info msg="Initializing buildkit"
Sep 8 23:49:01.191242 dockerd[1760]: time="2025-09-08T23:49:01.191144170Z" level=info msg="Completed buildkit initialization"
Sep 8 23:49:01.197858 dockerd[1760]: time="2025-09-08T23:49:01.197806174Z" level=info msg="Daemon has completed initialization"
Sep 8 23:49:01.197998 dockerd[1760]: time="2025-09-08T23:49:01.197884668Z" level=info msg="API listen on /run/docker.sock"
Sep 8 23:49:01.198072 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 8 23:49:01.798180 containerd[1537]: time="2025-09-08T23:49:01.798087408Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\""
Sep 8 23:49:02.482352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3131217661.mount: Deactivated successfully.
Sep 8 23:49:03.327354 containerd[1537]: time="2025-09-08T23:49:03.327303525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:49:03.328595 containerd[1537]: time="2025-09-08T23:49:03.328566186Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352615"
Sep 8 23:49:03.329496 containerd[1537]: time="2025-09-08T23:49:03.329460921Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:49:03.334411 containerd[1537]: time="2025-09-08T23:49:03.334372829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:49:03.335645 containerd[1537]: time="2025-09-08T23:49:03.335608871Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 1.537478219s"
Sep 8 23:49:03.335690 containerd[1537]: time="2025-09-08T23:49:03.335646203Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\""
Sep 8 23:49:03.336725 containerd[1537]: time="2025-09-08T23:49:03.336703217Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\""
Sep 8 23:49:04.386788 containerd[1537]: time="2025-09-08T23:49:04.386734342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:49:04.388122 containerd[1537]: time="2025-09-08T23:49:04.388088958Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536979"
Sep 8 23:49:04.389039 containerd[1537]: time="2025-09-08T23:49:04.389014433Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:49:04.391869 containerd[1537]: time="2025-09-08T23:49:04.391831789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:49:04.392679 containerd[1537]: time="2025-09-08T23:49:04.392646901Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.055914545s"
Sep 8 23:49:04.392717 containerd[1537]: time="2025-09-08T23:49:04.392681597Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\""
Sep 8 23:49:04.393103 containerd[1537]: time="2025-09-08T23:49:04.393066969Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\""
Sep 8 23:49:05.782391 containerd[1537]: time="2025-09-08T23:49:05.782341683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:49:05.782927 containerd[1537]: time="2025-09-08T23:49:05.782858345Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292016"
Sep 8 23:49:05.784243 containerd[1537]: time="2025-09-08T23:49:05.783919652Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:49:05.786504 containerd[1537]: time="2025-09-08T23:49:05.786475023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:49:05.787766 containerd[1537]: time="2025-09-08T23:49:05.787739197Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.394642928s"
Sep 8 23:49:05.787808 containerd[1537]: time="2025-09-08T23:49:05.787775533Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\""
Sep 8 23:49:05.788193 containerd[1537]: time="2025-09-08T23:49:05.788173953Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\""
Sep 8 23:49:06.806827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount236697655.mount: Deactivated successfully.
Sep 8 23:49:07.070733 containerd[1537]: time="2025-09-08T23:49:07.070621846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:49:07.071406 containerd[1537]: time="2025-09-08T23:49:07.071371176Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199961"
Sep 8 23:49:07.072219 containerd[1537]: time="2025-09-08T23:49:07.072156365Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:49:07.073883 containerd[1537]: time="2025-09-08T23:49:07.073834682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:49:07.074409 containerd[1537]: time="2025-09-08T23:49:07.074377090Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 1.286174716s"
Sep 8 23:49:07.074450 containerd[1537]: time="2025-09-08T23:49:07.074413829Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\""
Sep 8 23:49:07.075059 containerd[1537]: time="2025-09-08T23:49:07.074857614Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 8 23:49:07.275059 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 8 23:49:07.276510 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 8 23:49:07.413977 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 8 23:49:07.417612 (kubelet)[2056]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 8 23:49:07.469806 kubelet[2056]: E0908 23:49:07.469761 2056 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 8 23:49:07.473677 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 8 23:49:07.473797 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 8 23:49:07.474254 systemd[1]: kubelet.service: Consumed 143ms CPU time, 107.9M memory peak.
Sep 8 23:49:07.655073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1680537132.mount: Deactivated successfully.
Sep 8 23:49:08.527411 containerd[1537]: time="2025-09-08T23:49:08.527361209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:49:08.528616 containerd[1537]: time="2025-09-08T23:49:08.528582191Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Sep 8 23:49:08.529549 containerd[1537]: time="2025-09-08T23:49:08.529521286Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:49:08.532553 containerd[1537]: time="2025-09-08T23:49:08.532522071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:49:08.533985 containerd[1537]: time="2025-09-08T23:49:08.533945785Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.459046314s"
Sep 8 23:49:08.534033 containerd[1537]: time="2025-09-08T23:49:08.533987362Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Sep 8 23:49:08.534780 containerd[1537]: time="2025-09-08T23:49:08.534739118Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 8 23:49:08.972098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3868981432.mount: Deactivated successfully.
Sep 8 23:49:08.978820 containerd[1537]: time="2025-09-08T23:49:08.978760528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 8 23:49:08.979270 containerd[1537]: time="2025-09-08T23:49:08.979222040Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 8 23:49:08.981308 containerd[1537]: time="2025-09-08T23:49:08.981266419Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 8 23:49:08.985420 containerd[1537]: time="2025-09-08T23:49:08.985354699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 8 23:49:08.986545 containerd[1537]: time="2025-09-08T23:49:08.986155468Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 451.369696ms"
Sep 8 23:49:08.986545 containerd[1537]: time="2025-09-08T23:49:08.986184852Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 8 23:49:08.986687 containerd[1537]: time="2025-09-08T23:49:08.986660756Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 8 23:49:09.444830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4220538910.mount: Deactivated successfully.
Sep 8 23:49:10.927156 containerd[1537]: time="2025-09-08T23:49:10.927092499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:49:10.927708 containerd[1537]: time="2025-09-08T23:49:10.927676502Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465297"
Sep 8 23:49:10.928687 containerd[1537]: time="2025-09-08T23:49:10.928633570Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:49:10.931217 containerd[1537]: time="2025-09-08T23:49:10.931176927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:49:10.932324 containerd[1537]: time="2025-09-08T23:49:10.932301235Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 1.945614572s"
Sep 8 23:49:10.932373 containerd[1537]: time="2025-09-08T23:49:10.932330421Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Sep 8 23:49:17.132129 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 8 23:49:17.132403 systemd[1]: kubelet.service: Consumed 143ms CPU time, 107.9M memory peak.
Sep 8 23:49:17.134527 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 8 23:49:17.156714 systemd[1]: Reload requested from client PID 2206 ('systemctl') (unit session-7.scope)...
Sep 8 23:49:17.156736 systemd[1]: Reloading...
Sep 8 23:49:17.233605 zram_generator::config[2249]: No configuration found.
Sep 8 23:49:17.430109 systemd[1]: Reloading finished in 273 ms.
Sep 8 23:49:17.492019 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 8 23:49:17.492103 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 8 23:49:17.492352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 8 23:49:17.492412 systemd[1]: kubelet.service: Consumed 92ms CPU time, 95M memory peak.
Sep 8 23:49:17.493980 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 8 23:49:17.647012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 8 23:49:17.651736 (kubelet)[2293]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 8 23:49:17.688516 kubelet[2293]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 8 23:49:17.688516 kubelet[2293]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 8 23:49:17.688516 kubelet[2293]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 8 23:49:17.688516 kubelet[2293]: I0908 23:49:17.688450 2293 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 8 23:49:18.437248 kubelet[2293]: I0908 23:49:18.437174 2293 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 8 23:49:18.437248 kubelet[2293]: I0908 23:49:18.437211 2293 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 8 23:49:18.437450 kubelet[2293]: I0908 23:49:18.437423 2293 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 8 23:49:18.462762 kubelet[2293]: E0908 23:49:18.460915 2293 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 8 23:49:18.468186 kubelet[2293]: I0908 23:49:18.468129 2293 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 8 23:49:18.476191 kubelet[2293]: I0908 23:49:18.476146 2293 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 8 23:49:18.479851 kubelet[2293]: I0908 23:49:18.479830 2293 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 8 23:49:18.480945 kubelet[2293]: I0908 23:49:18.480901 2293 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 8 23:49:18.481109 kubelet[2293]: I0908 23:49:18.480945 2293 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 8 23:49:18.481196 kubelet[2293]: I0908 23:49:18.481171 2293 topology_manager.go:138] "Creating topology manager with none policy"
Sep 8 23:49:18.481196 kubelet[2293]: I0908 23:49:18.481179 2293 container_manager_linux.go:303] "Creating device plugin manager"
Sep 8 23:49:18.481383 kubelet[2293]: I0908 23:49:18.481368 2293 state_mem.go:36] "Initialized new in-memory state store"
Sep 8 23:49:18.494441 kubelet[2293]: I0908 23:49:18.494389 2293 kubelet.go:480] "Attempting to sync node with API server"
Sep 8 23:49:18.494441 kubelet[2293]: I0908 23:49:18.494428 2293 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 8 23:49:18.494581 kubelet[2293]: I0908 23:49:18.494565 2293 kubelet.go:386] "Adding apiserver pod source"
Sep 8 23:49:18.497749 kubelet[2293]: I0908 23:49:18.497538 2293 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 8 23:49:18.499798 kubelet[2293]: I0908 23:49:18.499766 2293 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 8 23:49:18.500158 kubelet[2293]: E0908 23:49:18.500104 2293 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 8 23:49:18.501127 kubelet[2293]: I0908 23:49:18.501094 2293 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 8 23:49:18.501409 kubelet[2293]: W0908 23:49:18.501380 2293 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 8 23:49:18.503218 kubelet[2293]: E0908 23:49:18.503158 2293 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 8 23:49:18.504294 kubelet[2293]: I0908 23:49:18.504262 2293 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 8 23:49:18.504364 kubelet[2293]: I0908 23:49:18.504301 2293 server.go:1289] "Started kubelet"
Sep 8 23:49:18.504423 kubelet[2293]: I0908 23:49:18.504386 2293 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 8 23:49:18.508564 kubelet[2293]: I0908 23:49:18.508488 2293 server.go:317] "Adding debug handlers to kubelet server"
Sep 8 23:49:18.510622 kubelet[2293]: I0908 23:49:18.508917 2293 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 8 23:49:18.510622 kubelet[2293]: I0908 23:49:18.509298 2293 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 8 23:49:18.512182 kubelet[2293]: I0908 23:49:18.512147 2293 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 8 23:49:18.513542 kubelet[2293]: I0908 23:49:18.513508 2293 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 8 23:49:18.514037 kubelet[2293]: E0908 23:49:18.512479 2293 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.118:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.118:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18637393dcac1efe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-08 23:49:18.504279806 +0000 UTC m=+0.848261270,LastTimestamp:2025-09-08 23:49:18.504279806 +0000 UTC m=+0.848261270,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 8 23:49:18.515333 kubelet[2293]: I0908 23:49:18.514152 2293 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 8 23:49:18.515333 kubelet[2293]: I0908 23:49:18.514243 2293 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 8 23:49:18.515333 kubelet[2293]: I0908 23:49:18.514283 2293 reconciler.go:26] "Reconciler: start to sync state"
Sep 8 23:49:18.515333 kubelet[2293]: E0908 23:49:18.514625 2293 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 8 23:49:18.515333 kubelet[2293]: E0908 23:49:18.514708 2293 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 8 23:49:18.515333 kubelet[2293]: E0908 23:49:18.515152 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="200ms"
Sep 8 23:49:18.515732 kubelet[2293]: I0908 23:49:18.515713 2293 factory.go:223] Registration of the systemd container factory successfully
Sep 8 23:49:18.515818 kubelet[2293]: I0908 23:49:18.515797 2293 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 8 23:49:18.516372 kubelet[2293]: E0908 23:49:18.516347 2293 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 8 23:49:18.516971 kubelet[2293]: I0908 23:49:18.516947 2293 factory.go:223] Registration of the containerd container factory successfully
Sep 8 23:49:18.531540 kubelet[2293]: I0908 23:49:18.531203 2293 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 8 23:49:18.531540 kubelet[2293]: I0908 23:49:18.531225 2293 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 8 23:49:18.531540 kubelet[2293]: I0908 23:49:18.531245 2293 state_mem.go:36] "Initialized new in-memory state store"
Sep 8 23:49:18.532757 kubelet[2293]: I0908 23:49:18.532708 2293 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 8 23:49:18.533926 kubelet[2293]: I0908 23:49:18.533869 2293 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 8 23:49:18.533926 kubelet[2293]: I0908 23:49:18.533894 2293 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 8 23:49:18.533926 kubelet[2293]: I0908 23:49:18.533911 2293 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 8 23:49:18.533926 kubelet[2293]: I0908 23:49:18.533920 2293 kubelet.go:2436] "Starting kubelet main sync loop" Sep 8 23:49:18.534055 kubelet[2293]: E0908 23:49:18.533959 2293 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 8 23:49:18.602535 kubelet[2293]: E0908 23:49:18.602452 2293 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 8 23:49:18.602535 kubelet[2293]: I0908 23:49:18.602507 2293 policy_none.go:49] "None policy: Start" Sep 8 23:49:18.602535 kubelet[2293]: I0908 23:49:18.602537 2293 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 8 23:49:18.602535 kubelet[2293]: I0908 23:49:18.602551 2293 state_mem.go:35] "Initializing new in-memory state store" Sep 8 23:49:18.614659 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 8 23:49:18.615296 kubelet[2293]: E0908 23:49:18.615001 2293 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:49:18.628013 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 8 23:49:18.634055 kubelet[2293]: E0908 23:49:18.634019 2293 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 8 23:49:18.649611 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 8 23:49:18.651836 kubelet[2293]: E0908 23:49:18.651801 2293 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 8 23:49:18.652089 kubelet[2293]: I0908 23:49:18.652022 2293 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 8 23:49:18.652089 kubelet[2293]: I0908 23:49:18.652036 2293 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 8 23:49:18.652377 kubelet[2293]: I0908 23:49:18.652237 2293 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 8 23:49:18.653302 kubelet[2293]: E0908 23:49:18.653239 2293 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 8 23:49:18.653302 kubelet[2293]: E0908 23:49:18.653283 2293 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 8 23:49:18.716698 kubelet[2293]: E0908 23:49:18.716582 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="400ms" Sep 8 23:49:18.754913 kubelet[2293]: I0908 23:49:18.754871 2293 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:49:18.755340 kubelet[2293]: E0908 23:49:18.755308 2293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Sep 8 23:49:18.844896 systemd[1]: Created slice kubepods-burstable-pod5fe1e319900de41be00fc999086591a8.slice - libcontainer container kubepods-burstable-pod5fe1e319900de41be00fc999086591a8.slice. 
Sep 8 23:49:18.869543 kubelet[2293]: E0908 23:49:18.869459 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:49:18.873675 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 8 23:49:18.875693 kubelet[2293]: E0908 23:49:18.875664 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:49:18.878014 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. Sep 8 23:49:18.879754 kubelet[2293]: E0908 23:49:18.879729 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:49:18.915966 kubelet[2293]: I0908 23:49:18.915924 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:49:18.915966 kubelet[2293]: I0908 23:49:18.915964 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:49:18.916171 kubelet[2293]: I0908 23:49:18.915987 2293 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5fe1e319900de41be00fc999086591a8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5fe1e319900de41be00fc999086591a8\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:49:18.916171 kubelet[2293]: I0908 23:49:18.916028 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5fe1e319900de41be00fc999086591a8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5fe1e319900de41be00fc999086591a8\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:49:18.916171 kubelet[2293]: I0908 23:49:18.916061 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:49:18.916171 kubelet[2293]: I0908 23:49:18.916080 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:49:18.916171 kubelet[2293]: I0908 23:49:18.916102 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:49:18.916295 kubelet[2293]: I0908 23:49:18.916118 2293 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 8 23:49:18.916295 kubelet[2293]: I0908 23:49:18.916133 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5fe1e319900de41be00fc999086591a8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5fe1e319900de41be00fc999086591a8\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:49:18.957248 kubelet[2293]: I0908 23:49:18.957217 2293 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:49:18.957578 kubelet[2293]: E0908 23:49:18.957547 2293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Sep 8 23:49:19.117852 kubelet[2293]: E0908 23:49:19.117725 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="800ms" Sep 8 23:49:19.169968 kubelet[2293]: E0908 23:49:19.169925 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:19.170604 containerd[1537]: time="2025-09-08T23:49:19.170571933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5fe1e319900de41be00fc999086591a8,Namespace:kube-system,Attempt:0,}" Sep 8 23:49:19.176501 kubelet[2293]: E0908 23:49:19.176192 2293 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:19.176688 containerd[1537]: time="2025-09-08T23:49:19.176653484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 8 23:49:19.181168 kubelet[2293]: E0908 23:49:19.180945 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:19.181432 containerd[1537]: time="2025-09-08T23:49:19.181404547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 8 23:49:19.204169 containerd[1537]: time="2025-09-08T23:49:19.204131374Z" level=info msg="connecting to shim f8157057476cf8a7bcec3018a2732a71987dd2638d8e52bd6ab8cba7674d04cb" address="unix:///run/containerd/s/77f5a43b7c0f8ac430f7a4321aaa2386589dc73acdf5051950dd98f5aae4f0ae" namespace=k8s.io protocol=ttrpc version=3 Sep 8 23:49:19.217266 containerd[1537]: time="2025-09-08T23:49:19.217081708Z" level=info msg="connecting to shim 7d420f01abdc7b6bbddfc24c6569d99a1d7144aa37c863c1b602f48c0d33ef4e" address="unix:///run/containerd/s/5aad69f4c494d23288f9af9ca3c15bde7e22b16b3ac3ac4742a4a5537acc2746" namespace=k8s.io protocol=ttrpc version=3 Sep 8 23:49:19.224678 containerd[1537]: time="2025-09-08T23:49:19.224636949Z" level=info msg="connecting to shim e3eae6d83ee239d4e233cd15b0542cd877e648b9d56de3491718fffcec3e3923" address="unix:///run/containerd/s/179af1cdd5165621b1dad84e0dc530957dbb6d7bf4312d9ea14c9496dcd363ba" namespace=k8s.io protocol=ttrpc version=3 Sep 8 23:49:19.240625 systemd[1]: Started cri-containerd-f8157057476cf8a7bcec3018a2732a71987dd2638d8e52bd6ab8cba7674d04cb.scope - libcontainer container 
f8157057476cf8a7bcec3018a2732a71987dd2638d8e52bd6ab8cba7674d04cb. Sep 8 23:49:19.246036 systemd[1]: Started cri-containerd-7d420f01abdc7b6bbddfc24c6569d99a1d7144aa37c863c1b602f48c0d33ef4e.scope - libcontainer container 7d420f01abdc7b6bbddfc24c6569d99a1d7144aa37c863c1b602f48c0d33ef4e. Sep 8 23:49:19.247016 systemd[1]: Started cri-containerd-e3eae6d83ee239d4e233cd15b0542cd877e648b9d56de3491718fffcec3e3923.scope - libcontainer container e3eae6d83ee239d4e233cd15b0542cd877e648b9d56de3491718fffcec3e3923. Sep 8 23:49:19.301085 containerd[1537]: time="2025-09-08T23:49:19.301047573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d420f01abdc7b6bbddfc24c6569d99a1d7144aa37c863c1b602f48c0d33ef4e\"" Sep 8 23:49:19.301656 containerd[1537]: time="2025-09-08T23:49:19.301363049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3eae6d83ee239d4e233cd15b0542cd877e648b9d56de3491718fffcec3e3923\"" Sep 8 23:49:19.302922 kubelet[2293]: E0908 23:49:19.302891 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:19.303776 kubelet[2293]: E0908 23:49:19.303719 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:19.306535 containerd[1537]: time="2025-09-08T23:49:19.306385201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5fe1e319900de41be00fc999086591a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8157057476cf8a7bcec3018a2732a71987dd2638d8e52bd6ab8cba7674d04cb\"" Sep 8 23:49:19.307206 kubelet[2293]: E0908 
23:49:19.307185 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:19.308491 containerd[1537]: time="2025-09-08T23:49:19.308299734Z" level=info msg="CreateContainer within sandbox \"e3eae6d83ee239d4e233cd15b0542cd877e648b9d56de3491718fffcec3e3923\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 8 23:49:19.310538 containerd[1537]: time="2025-09-08T23:49:19.310181876Z" level=info msg="CreateContainer within sandbox \"7d420f01abdc7b6bbddfc24c6569d99a1d7144aa37c863c1b602f48c0d33ef4e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 8 23:49:19.311863 containerd[1537]: time="2025-09-08T23:49:19.311832559Z" level=info msg="CreateContainer within sandbox \"f8157057476cf8a7bcec3018a2732a71987dd2638d8e52bd6ab8cba7674d04cb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 8 23:49:19.317449 containerd[1537]: time="2025-09-08T23:49:19.317359257Z" level=info msg="Container a690df11d8f34eea1a7c6d4df46d9e8c3e1c3ef4cd0a1c2eb7cfdc458a222c43: CDI devices from CRI Config.CDIDevices: []" Sep 8 23:49:19.320639 containerd[1537]: time="2025-09-08T23:49:19.319911142Z" level=info msg="Container ee86aacb5b296f54356321d040f3740787b8027bbdde0195e8a43b24e998e300: CDI devices from CRI Config.CDIDevices: []" Sep 8 23:49:19.321560 containerd[1537]: time="2025-09-08T23:49:19.321498282Z" level=info msg="Container 548eaded88f9a0298cf100f0e2e4b9177418a188f630cf3bc2726a575554ba6c: CDI devices from CRI Config.CDIDevices: []" Sep 8 23:49:19.328800 containerd[1537]: time="2025-09-08T23:49:19.328739886Z" level=info msg="CreateContainer within sandbox \"e3eae6d83ee239d4e233cd15b0542cd877e648b9d56de3491718fffcec3e3923\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a690df11d8f34eea1a7c6d4df46d9e8c3e1c3ef4cd0a1c2eb7cfdc458a222c43\"" Sep 8 23:49:19.329847 
containerd[1537]: time="2025-09-08T23:49:19.329811723Z" level=info msg="StartContainer for \"a690df11d8f34eea1a7c6d4df46d9e8c3e1c3ef4cd0a1c2eb7cfdc458a222c43\"" Sep 8 23:49:19.331751 containerd[1537]: time="2025-09-08T23:49:19.331713260Z" level=info msg="connecting to shim a690df11d8f34eea1a7c6d4df46d9e8c3e1c3ef4cd0a1c2eb7cfdc458a222c43" address="unix:///run/containerd/s/179af1cdd5165621b1dad84e0dc530957dbb6d7bf4312d9ea14c9496dcd363ba" protocol=ttrpc version=3 Sep 8 23:49:19.332145 containerd[1537]: time="2025-09-08T23:49:19.332111194Z" level=info msg="CreateContainer within sandbox \"7d420f01abdc7b6bbddfc24c6569d99a1d7144aa37c863c1b602f48c0d33ef4e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"548eaded88f9a0298cf100f0e2e4b9177418a188f630cf3bc2726a575554ba6c\"" Sep 8 23:49:19.333152 containerd[1537]: time="2025-09-08T23:49:19.333121167Z" level=info msg="StartContainer for \"548eaded88f9a0298cf100f0e2e4b9177418a188f630cf3bc2726a575554ba6c\"" Sep 8 23:49:19.333819 containerd[1537]: time="2025-09-08T23:49:19.333769795Z" level=info msg="CreateContainer within sandbox \"f8157057476cf8a7bcec3018a2732a71987dd2638d8e52bd6ab8cba7674d04cb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ee86aacb5b296f54356321d040f3740787b8027bbdde0195e8a43b24e998e300\"" Sep 8 23:49:19.334235 containerd[1537]: time="2025-09-08T23:49:19.334206920Z" level=info msg="connecting to shim 548eaded88f9a0298cf100f0e2e4b9177418a188f630cf3bc2726a575554ba6c" address="unix:///run/containerd/s/5aad69f4c494d23288f9af9ca3c15bde7e22b16b3ac3ac4742a4a5537acc2746" protocol=ttrpc version=3 Sep 8 23:49:19.334750 containerd[1537]: time="2025-09-08T23:49:19.334704028Z" level=info msg="StartContainer for \"ee86aacb5b296f54356321d040f3740787b8027bbdde0195e8a43b24e998e300\"" Sep 8 23:49:19.336484 containerd[1537]: time="2025-09-08T23:49:19.336413856Z" level=info msg="connecting to shim 
ee86aacb5b296f54356321d040f3740787b8027bbdde0195e8a43b24e998e300" address="unix:///run/containerd/s/77f5a43b7c0f8ac430f7a4321aaa2386589dc73acdf5051950dd98f5aae4f0ae" protocol=ttrpc version=3 Sep 8 23:49:19.351679 systemd[1]: Started cri-containerd-a690df11d8f34eea1a7c6d4df46d9e8c3e1c3ef4cd0a1c2eb7cfdc458a222c43.scope - libcontainer container a690df11d8f34eea1a7c6d4df46d9e8c3e1c3ef4cd0a1c2eb7cfdc458a222c43. Sep 8 23:49:19.359675 systemd[1]: Started cri-containerd-548eaded88f9a0298cf100f0e2e4b9177418a188f630cf3bc2726a575554ba6c.scope - libcontainer container 548eaded88f9a0298cf100f0e2e4b9177418a188f630cf3bc2726a575554ba6c. Sep 8 23:49:19.360410 kubelet[2293]: I0908 23:49:19.360383 2293 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:49:19.360751 kubelet[2293]: E0908 23:49:19.360726 2293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Sep 8 23:49:19.361382 systemd[1]: Started cri-containerd-ee86aacb5b296f54356321d040f3740787b8027bbdde0195e8a43b24e998e300.scope - libcontainer container ee86aacb5b296f54356321d040f3740787b8027bbdde0195e8a43b24e998e300. 
Sep 8 23:49:19.405227 kubelet[2293]: E0908 23:49:19.405109 2293 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 8 23:49:19.408715 containerd[1537]: time="2025-09-08T23:49:19.408626631Z" level=info msg="StartContainer for \"a690df11d8f34eea1a7c6d4df46d9e8c3e1c3ef4cd0a1c2eb7cfdc458a222c43\" returns successfully" Sep 8 23:49:19.411812 containerd[1537]: time="2025-09-08T23:49:19.411764240Z" level=info msg="StartContainer for \"ee86aacb5b296f54356321d040f3740787b8027bbdde0195e8a43b24e998e300\" returns successfully" Sep 8 23:49:19.420800 containerd[1537]: time="2025-09-08T23:49:19.420689039Z" level=info msg="StartContainer for \"548eaded88f9a0298cf100f0e2e4b9177418a188f630cf3bc2726a575554ba6c\" returns successfully" Sep 8 23:49:19.433031 kubelet[2293]: E0908 23:49:19.432983 2293 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 8 23:49:19.543291 kubelet[2293]: E0908 23:49:19.543255 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:49:19.543441 kubelet[2293]: E0908 23:49:19.543419 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:19.545826 kubelet[2293]: E0908 23:49:19.545606 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:49:19.545826 kubelet[2293]: E0908 23:49:19.545764 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:19.547969 kubelet[2293]: E0908 23:49:19.547949 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:49:19.548100 kubelet[2293]: E0908 23:49:19.548082 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:20.162090 kubelet[2293]: I0908 23:49:20.162031 2293 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:49:20.499045 kubelet[2293]: I0908 23:49:20.498715 2293 apiserver.go:52] "Watching apiserver" Sep 8 23:49:20.502477 kubelet[2293]: E0908 23:49:20.502437 2293 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 8 23:49:20.514647 kubelet[2293]: I0908 23:49:20.514609 2293 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 8 23:49:20.549716 kubelet[2293]: E0908 23:49:20.549627 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:49:20.549890 kubelet[2293]: E0908 23:49:20.549761 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:49:20.549890 kubelet[2293]: E0908 23:49:20.549784 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Sep 8 23:49:20.549946 kubelet[2293]: E0908 23:49:20.549900 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:20.571288 kubelet[2293]: I0908 23:49:20.570752 2293 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 8 23:49:20.571288 kubelet[2293]: E0908 23:49:20.571084 2293 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 8 23:49:20.615808 kubelet[2293]: I0908 23:49:20.615770 2293 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 8 23:49:20.621519 kubelet[2293]: E0908 23:49:20.620955 2293 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18637393dcac1efe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-08 23:49:18.504279806 +0000 UTC m=+0.848261270,LastTimestamp:2025-09-08 23:49:18.504279806 +0000 UTC m=+0.848261270,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 8 23:49:20.673694 kubelet[2293]: E0908 23:49:20.673649 2293 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 8 23:49:20.673694 kubelet[2293]: I0908 23:49:20.673691 2293 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 8 23:49:20.675782 
kubelet[2293]: E0908 23:49:20.675732 2293 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 8 23:49:20.675782 kubelet[2293]: I0908 23:49:20.675770 2293 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:49:20.679703 kubelet[2293]: E0908 23:49:20.679665 2293 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:49:21.640302 kubelet[2293]: I0908 23:49:21.640243 2293 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:49:21.646192 kubelet[2293]: E0908 23:49:21.646085 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:22.551110 kubelet[2293]: E0908 23:49:22.551077 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:23.061984 systemd[1]: Reload requested from client PID 2583 ('systemctl') (unit session-7.scope)... Sep 8 23:49:23.062304 systemd[1]: Reloading... Sep 8 23:49:23.130499 zram_generator::config[2626]: No configuration found. Sep 8 23:49:23.384234 systemd[1]: Reloading finished in 321 ms. Sep 8 23:49:23.406101 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:49:23.417874 systemd[1]: kubelet.service: Deactivated successfully. Sep 8 23:49:23.418250 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 8 23:49:23.418397 systemd[1]: kubelet.service: Consumed 1.248s CPU time, 128.3M memory peak. Sep 8 23:49:23.420951 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:49:23.580294 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:49:23.597989 (kubelet)[2668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 8 23:49:23.642715 kubelet[2668]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 8 23:49:23.642715 kubelet[2668]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 8 23:49:23.642715 kubelet[2668]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 8 23:49:23.642715 kubelet[2668]: I0908 23:49:23.642688 2668 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 8 23:49:23.648481 kubelet[2668]: I0908 23:49:23.648427 2668 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 8 23:49:23.648481 kubelet[2668]: I0908 23:49:23.648477 2668 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 8 23:49:23.648739 kubelet[2668]: I0908 23:49:23.648709 2668 server.go:956] "Client rotation is on, will bootstrap in background" Sep 8 23:49:23.650089 kubelet[2668]: I0908 23:49:23.650064 2668 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 8 23:49:23.654043 kubelet[2668]: I0908 23:49:23.653932 2668 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 8 23:49:23.659866 kubelet[2668]: I0908 23:49:23.659827 2668 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 8 23:49:23.664962 kubelet[2668]: I0908 23:49:23.664925 2668 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 8 23:49:23.665197 kubelet[2668]: I0908 23:49:23.665165 2668 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 8 23:49:23.665349 kubelet[2668]: I0908 23:49:23.665195 2668 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 8 23:49:23.665440 kubelet[2668]: I0908 23:49:23.665360 2668 topology_manager.go:138] "Creating topology manager with none policy" Sep 8 23:49:23.665440 
kubelet[2668]: I0908 23:49:23.665368 2668 container_manager_linux.go:303] "Creating device plugin manager" Sep 8 23:49:23.665440 kubelet[2668]: I0908 23:49:23.665424 2668 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:49:23.665637 kubelet[2668]: I0908 23:49:23.665621 2668 kubelet.go:480] "Attempting to sync node with API server" Sep 8 23:49:23.665662 kubelet[2668]: I0908 23:49:23.665637 2668 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 8 23:49:23.665662 kubelet[2668]: I0908 23:49:23.665659 2668 kubelet.go:386] "Adding apiserver pod source" Sep 8 23:49:23.665711 kubelet[2668]: I0908 23:49:23.665674 2668 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 8 23:49:23.668544 kubelet[2668]: I0908 23:49:23.668502 2668 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 8 23:49:23.669131 kubelet[2668]: I0908 23:49:23.669110 2668 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 8 23:49:23.675936 kubelet[2668]: I0908 23:49:23.675902 2668 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 8 23:49:23.676053 kubelet[2668]: I0908 23:49:23.675962 2668 server.go:1289] "Started kubelet" Sep 8 23:49:23.677129 kubelet[2668]: I0908 23:49:23.677104 2668 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 8 23:49:23.677797 kubelet[2668]: I0908 23:49:23.677753 2668 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 8 23:49:23.678792 kubelet[2668]: E0908 23:49:23.678769 2668 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:49:23.678926 kubelet[2668]: I0908 23:49:23.678914 2668 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 8 23:49:23.679162 kubelet[2668]: I0908 23:49:23.679141 2668 
desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 8 23:49:23.679340 kubelet[2668]: I0908 23:49:23.679327 2668 reconciler.go:26] "Reconciler: start to sync state" Sep 8 23:49:23.680087 kubelet[2668]: I0908 23:49:23.680046 2668 server.go:317] "Adding debug handlers to kubelet server" Sep 8 23:49:23.683875 kubelet[2668]: I0908 23:49:23.683811 2668 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 8 23:49:23.683957 kubelet[2668]: I0908 23:49:23.683900 2668 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 8 23:49:23.684224 kubelet[2668]: I0908 23:49:23.684201 2668 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 8 23:49:23.691348 kubelet[2668]: I0908 23:49:23.691317 2668 factory.go:223] Registration of the systemd container factory successfully Sep 8 23:49:23.691637 kubelet[2668]: I0908 23:49:23.691611 2668 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 8 23:49:23.692793 kubelet[2668]: I0908 23:49:23.692756 2668 factory.go:223] Registration of the containerd container factory successfully Sep 8 23:49:23.693066 kubelet[2668]: E0908 23:49:23.693041 2668 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 8 23:49:23.697734 kubelet[2668]: I0908 23:49:23.697684 2668 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 8 23:49:23.708352 kubelet[2668]: I0908 23:49:23.708311 2668 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Sep 8 23:49:23.708352 kubelet[2668]: I0908 23:49:23.708350 2668 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 8 23:49:23.708522 kubelet[2668]: I0908 23:49:23.708370 2668 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 8 23:49:23.708522 kubelet[2668]: I0908 23:49:23.708377 2668 kubelet.go:2436] "Starting kubelet main sync loop" Sep 8 23:49:23.709011 kubelet[2668]: E0908 23:49:23.708971 2668 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 8 23:49:23.730737 kubelet[2668]: I0908 23:49:23.730708 2668 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 8 23:49:23.730737 kubelet[2668]: I0908 23:49:23.730728 2668 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 8 23:49:23.730882 kubelet[2668]: I0908 23:49:23.730766 2668 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:49:23.730934 kubelet[2668]: I0908 23:49:23.730912 2668 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 8 23:49:23.730963 kubelet[2668]: I0908 23:49:23.730930 2668 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 8 23:49:23.730963 kubelet[2668]: I0908 23:49:23.730951 2668 policy_none.go:49] "None policy: Start" Sep 8 23:49:23.730963 kubelet[2668]: I0908 23:49:23.730960 2668 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 8 23:49:23.731020 kubelet[2668]: I0908 23:49:23.730969 2668 state_mem.go:35] "Initializing new in-memory state store" Sep 8 23:49:23.731073 kubelet[2668]: I0908 23:49:23.731059 2668 state_mem.go:75] "Updated machine memory state" Sep 8 23:49:23.735219 kubelet[2668]: E0908 23:49:23.734749 2668 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 8 23:49:23.735219 kubelet[2668]: I0908 23:49:23.734930 
2668 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 8 23:49:23.735219 kubelet[2668]: I0908 23:49:23.734943 2668 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 8 23:49:23.735219 kubelet[2668]: I0908 23:49:23.735148 2668 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 8 23:49:23.737047 kubelet[2668]: E0908 23:49:23.737025 2668 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 8 23:49:23.810193 kubelet[2668]: I0908 23:49:23.810154 2668 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 8 23:49:23.810193 kubelet[2668]: I0908 23:49:23.810200 2668 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 8 23:49:23.810483 kubelet[2668]: I0908 23:49:23.810454 2668 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:49:23.816307 kubelet[2668]: E0908 23:49:23.816242 2668 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:49:23.840642 kubelet[2668]: I0908 23:49:23.840534 2668 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:49:23.848316 kubelet[2668]: I0908 23:49:23.848284 2668 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 8 23:49:23.848455 kubelet[2668]: I0908 23:49:23.848386 2668 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 8 23:49:23.881411 kubelet[2668]: I0908 23:49:23.881265 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5fe1e319900de41be00fc999086591a8-ca-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"5fe1e319900de41be00fc999086591a8\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:49:23.881411 kubelet[2668]: I0908 23:49:23.881305 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5fe1e319900de41be00fc999086591a8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5fe1e319900de41be00fc999086591a8\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:49:23.881411 kubelet[2668]: I0908 23:49:23.881329 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5fe1e319900de41be00fc999086591a8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5fe1e319900de41be00fc999086591a8\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:49:23.881411 kubelet[2668]: I0908 23:49:23.881346 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:49:23.881411 kubelet[2668]: I0908 23:49:23.881370 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:49:23.881650 kubelet[2668]: I0908 23:49:23.881386 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod 
\"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:49:23.881762 kubelet[2668]: I0908 23:49:23.881699 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 8 23:49:23.881762 kubelet[2668]: I0908 23:49:23.881726 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:49:23.881762 kubelet[2668]: I0908 23:49:23.881744 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:49:24.062093 sudo[2709]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 8 23:49:24.062393 sudo[2709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 8 23:49:24.115848 kubelet[2668]: E0908 23:49:24.115731 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:24.115848 kubelet[2668]: E0908 23:49:24.115782 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:24.118158 kubelet[2668]: E0908 23:49:24.118129 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:24.392177 sudo[2709]: pam_unix(sudo:session): session closed for user root Sep 8 23:49:24.666482 kubelet[2668]: I0908 23:49:24.666312 2668 apiserver.go:52] "Watching apiserver" Sep 8 23:49:24.679435 kubelet[2668]: I0908 23:49:24.679397 2668 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 8 23:49:24.722216 kubelet[2668]: I0908 23:49:24.722155 2668 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 8 23:49:24.723203 kubelet[2668]: I0908 23:49:24.723166 2668 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:49:24.724320 kubelet[2668]: E0908 23:49:24.723604 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:24.729936 kubelet[2668]: E0908 23:49:24.729851 2668 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 8 23:49:24.730805 kubelet[2668]: E0908 23:49:24.730605 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:24.730805 kubelet[2668]: E0908 23:49:24.730385 2668 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:49:24.730805 kubelet[2668]: E0908 23:49:24.730760 2668 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:24.755617 kubelet[2668]: I0908 23:49:24.755541 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.755456817 podStartE2EDuration="1.755456817s" podCreationTimestamp="2025-09-08 23:49:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:49:24.74734969 +0000 UTC m=+1.144185186" watchObservedRunningTime="2025-09-08 23:49:24.755456817 +0000 UTC m=+1.152292313" Sep 8 23:49:24.765082 kubelet[2668]: I0908 23:49:24.765028 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.765011946 podStartE2EDuration="1.765011946s" podCreationTimestamp="2025-09-08 23:49:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:49:24.755884815 +0000 UTC m=+1.152720311" watchObservedRunningTime="2025-09-08 23:49:24.765011946 +0000 UTC m=+1.161847442" Sep 8 23:49:24.774576 kubelet[2668]: I0908 23:49:24.774452 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.774432661 podStartE2EDuration="3.774432661s" podCreationTimestamp="2025-09-08 23:49:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:49:24.765208868 +0000 UTC m=+1.162044364" watchObservedRunningTime="2025-09-08 23:49:24.774432661 +0000 UTC m=+1.171268157" Sep 8 23:49:25.724762 kubelet[2668]: E0908 23:49:25.723707 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:25.728840 kubelet[2668]: E0908 23:49:25.728805 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:25.730616 kubelet[2668]: E0908 23:49:25.728825 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:26.028331 sudo[1740]: pam_unix(sudo:session): session closed for user root Sep 8 23:49:26.030414 sshd[1739]: Connection closed by 10.0.0.1 port 37272 Sep 8 23:49:26.031192 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Sep 8 23:49:26.036599 systemd[1]: sshd@6-10.0.0.118:22-10.0.0.1:37272.service: Deactivated successfully. Sep 8 23:49:26.039382 systemd[1]: session-7.scope: Deactivated successfully. Sep 8 23:49:26.039882 systemd[1]: session-7.scope: Consumed 8.203s CPU time, 256.8M memory peak. Sep 8 23:49:26.043592 systemd-logind[1517]: Session 7 logged out. Waiting for processes to exit. Sep 8 23:49:26.044869 systemd-logind[1517]: Removed session 7. Sep 8 23:49:27.450986 kubelet[2668]: I0908 23:49:27.450948 2668 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 8 23:49:27.451605 containerd[1537]: time="2025-09-08T23:49:27.451233939Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 8 23:49:27.454160 kubelet[2668]: I0908 23:49:27.452569 2668 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 8 23:49:27.996849 systemd[1]: Created slice kubepods-besteffort-podc3e0649e_4e45_4a01_a367_b17b42f6ab16.slice - libcontainer container kubepods-besteffort-podc3e0649e_4e45_4a01_a367_b17b42f6ab16.slice. 
Sep 8 23:49:28.007813 kubelet[2668]: I0908 23:49:28.007738 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-bpf-maps\") pod \"cilium-fvdsv\" (UID: \"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " pod="kube-system/cilium-fvdsv" Sep 8 23:49:28.008009 kubelet[2668]: I0908 23:49:28.007992 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/12a4c1c2-203e-4570-9f2f-7b50858e1461-clustermesh-secrets\") pod \"cilium-fvdsv\" (UID: \"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " pod="kube-system/cilium-fvdsv" Sep 8 23:49:28.008132 kubelet[2668]: I0908 23:49:28.008109 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-host-proc-sys-kernel\") pod \"cilium-fvdsv\" (UID: \"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " pod="kube-system/cilium-fvdsv" Sep 8 23:49:28.008176 kubelet[2668]: I0908 23:49:28.008157 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6p7k\" (UniqueName: \"kubernetes.io/projected/12a4c1c2-203e-4570-9f2f-7b50858e1461-kube-api-access-m6p7k\") pod \"cilium-fvdsv\" (UID: \"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " pod="kube-system/cilium-fvdsv" Sep 8 23:49:28.008211 kubelet[2668]: I0908 23:49:28.008185 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c3e0649e-4e45-4a01-a367-b17b42f6ab16-kube-proxy\") pod \"kube-proxy-45hrg\" (UID: \"c3e0649e-4e45-4a01-a367-b17b42f6ab16\") " pod="kube-system/kube-proxy-45hrg" Sep 8 23:49:28.008231 kubelet[2668]: I0908 23:49:28.008208 2668 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-cni-path\") pod \"cilium-fvdsv\" (UID: \"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " pod="kube-system/cilium-fvdsv" Sep 8 23:49:28.008252 kubelet[2668]: I0908 23:49:28.008230 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-host-proc-sys-net\") pod \"cilium-fvdsv\" (UID: \"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " pod="kube-system/cilium-fvdsv" Sep 8 23:49:28.008271 kubelet[2668]: I0908 23:49:28.008258 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c3e0649e-4e45-4a01-a367-b17b42f6ab16-xtables-lock\") pod \"kube-proxy-45hrg\" (UID: \"c3e0649e-4e45-4a01-a367-b17b42f6ab16\") " pod="kube-system/kube-proxy-45hrg" Sep 8 23:49:28.008292 kubelet[2668]: I0908 23:49:28.008273 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-hostproc\") pod \"cilium-fvdsv\" (UID: \"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " pod="kube-system/cilium-fvdsv" Sep 8 23:49:28.008292 kubelet[2668]: I0908 23:49:28.008288 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-etc-cni-netd\") pod \"cilium-fvdsv\" (UID: \"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " pod="kube-system/cilium-fvdsv" Sep 8 23:49:28.008580 kubelet[2668]: I0908 23:49:28.008305 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/12a4c1c2-203e-4570-9f2f-7b50858e1461-cilium-config-path\") pod \"cilium-fvdsv\" (UID: \"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " pod="kube-system/cilium-fvdsv" Sep 8 23:49:28.008580 kubelet[2668]: I0908 23:49:28.008320 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3e0649e-4e45-4a01-a367-b17b42f6ab16-lib-modules\") pod \"kube-proxy-45hrg\" (UID: \"c3e0649e-4e45-4a01-a367-b17b42f6ab16\") " pod="kube-system/kube-proxy-45hrg" Sep 8 23:49:28.008580 kubelet[2668]: I0908 23:49:28.008335 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrdgh\" (UniqueName: \"kubernetes.io/projected/c3e0649e-4e45-4a01-a367-b17b42f6ab16-kube-api-access-lrdgh\") pod \"kube-proxy-45hrg\" (UID: \"c3e0649e-4e45-4a01-a367-b17b42f6ab16\") " pod="kube-system/kube-proxy-45hrg" Sep 8 23:49:28.008580 kubelet[2668]: I0908 23:49:28.008350 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-cilium-run\") pod \"cilium-fvdsv\" (UID: \"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " pod="kube-system/cilium-fvdsv" Sep 8 23:49:28.008580 kubelet[2668]: I0908 23:49:28.008366 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-cilium-cgroup\") pod \"cilium-fvdsv\" (UID: \"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " pod="kube-system/cilium-fvdsv" Sep 8 23:49:28.009128 kubelet[2668]: I0908 23:49:28.008380 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-lib-modules\") pod \"cilium-fvdsv\" (UID: 
\"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " pod="kube-system/cilium-fvdsv" Sep 8 23:49:28.009128 kubelet[2668]: I0908 23:49:28.008424 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-xtables-lock\") pod \"cilium-fvdsv\" (UID: \"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " pod="kube-system/cilium-fvdsv" Sep 8 23:49:28.009128 kubelet[2668]: I0908 23:49:28.008454 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/12a4c1c2-203e-4570-9f2f-7b50858e1461-hubble-tls\") pod \"cilium-fvdsv\" (UID: \"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " pod="kube-system/cilium-fvdsv" Sep 8 23:49:28.013128 systemd[1]: Created slice kubepods-burstable-pod12a4c1c2_203e_4570_9f2f_7b50858e1461.slice - libcontainer container kubepods-burstable-pod12a4c1c2_203e_4570_9f2f_7b50858e1461.slice. Sep 8 23:49:28.124929 kubelet[2668]: E0908 23:49:28.124799 2668 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 8 23:49:28.124929 kubelet[2668]: E0908 23:49:28.124871 2668 projected.go:194] Error preparing data for projected volume kube-api-access-m6p7k for pod kube-system/cilium-fvdsv: configmap "kube-root-ca.crt" not found Sep 8 23:49:28.125306 kubelet[2668]: E0908 23:49:28.125190 2668 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12a4c1c2-203e-4570-9f2f-7b50858e1461-kube-api-access-m6p7k podName:12a4c1c2-203e-4570-9f2f-7b50858e1461 nodeName:}" failed. No retries permitted until 2025-09-08 23:49:28.625160145 +0000 UTC m=+5.021995641 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-m6p7k" (UniqueName: "kubernetes.io/projected/12a4c1c2-203e-4570-9f2f-7b50858e1461-kube-api-access-m6p7k") pod "cilium-fvdsv" (UID: "12a4c1c2-203e-4570-9f2f-7b50858e1461") : configmap "kube-root-ca.crt" not found Sep 8 23:49:28.133159 kubelet[2668]: E0908 23:49:28.133050 2668 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 8 23:49:28.133159 kubelet[2668]: E0908 23:49:28.133093 2668 projected.go:194] Error preparing data for projected volume kube-api-access-lrdgh for pod kube-system/kube-proxy-45hrg: configmap "kube-root-ca.crt" not found Sep 8 23:49:28.133430 kubelet[2668]: E0908 23:49:28.133348 2668 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c3e0649e-4e45-4a01-a367-b17b42f6ab16-kube-api-access-lrdgh podName:c3e0649e-4e45-4a01-a367-b17b42f6ab16 nodeName:}" failed. No retries permitted until 2025-09-08 23:49:28.633324337 +0000 UTC m=+5.030159793 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lrdgh" (UniqueName: "kubernetes.io/projected/c3e0649e-4e45-4a01-a367-b17b42f6ab16-kube-api-access-lrdgh") pod "kube-proxy-45hrg" (UID: "c3e0649e-4e45-4a01-a367-b17b42f6ab16") : configmap "kube-root-ca.crt" not found Sep 8 23:49:28.172700 kubelet[2668]: E0908 23:49:28.172377 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:28.689128 systemd[1]: Created slice kubepods-besteffort-pod1cf12ac0_ea01_4425_9bd0_6900d49ccaf0.slice - libcontainer container kubepods-besteffort-pod1cf12ac0_ea01_4425_9bd0_6900d49ccaf0.slice. 
Sep 8 23:49:28.714260 kubelet[2668]: I0908 23:49:28.714202 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1cf12ac0-ea01-4425-9bd0-6900d49ccaf0-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-ljph8\" (UID: \"1cf12ac0-ea01-4425-9bd0-6900d49ccaf0\") " pod="kube-system/cilium-operator-6c4d7847fc-ljph8" Sep 8 23:49:28.714260 kubelet[2668]: I0908 23:49:28.714266 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7chzg\" (UniqueName: \"kubernetes.io/projected/1cf12ac0-ea01-4425-9bd0-6900d49ccaf0-kube-api-access-7chzg\") pod \"cilium-operator-6c4d7847fc-ljph8\" (UID: \"1cf12ac0-ea01-4425-9bd0-6900d49ccaf0\") " pod="kube-system/cilium-operator-6c4d7847fc-ljph8" Sep 8 23:49:28.912207 kubelet[2668]: E0908 23:49:28.912156 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:28.913365 containerd[1537]: time="2025-09-08T23:49:28.913295349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-45hrg,Uid:c3e0649e-4e45-4a01-a367-b17b42f6ab16,Namespace:kube-system,Attempt:0,}" Sep 8 23:49:28.916571 kubelet[2668]: E0908 23:49:28.916442 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:28.917385 containerd[1537]: time="2025-09-08T23:49:28.917185294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fvdsv,Uid:12a4c1c2-203e-4570-9f2f-7b50858e1461,Namespace:kube-system,Attempt:0,}" Sep 8 23:49:28.939956 containerd[1537]: time="2025-09-08T23:49:28.939838861Z" level=info msg="connecting to shim bc58f0ab92ce14638652305486744b1b7a05e1eac0a4ee835d3c6e4eabefb125" 
address="unix:///run/containerd/s/5e9423592178061137618fede46392f2f69e62e2f1180d6ac632199963a30663" namespace=k8s.io protocol=ttrpc version=3 Sep 8 23:49:28.940984 containerd[1537]: time="2025-09-08T23:49:28.940816236Z" level=info msg="connecting to shim bcdf45393e5aca441c135e87d1027a84a80b1aad2893dea201d1455a449edf74" address="unix:///run/containerd/s/827bde8934f2ef3a35f19f8b5ecfec10724178787ddc290025642a63e724af34" namespace=k8s.io protocol=ttrpc version=3 Sep 8 23:49:28.965653 systemd[1]: Started cri-containerd-bc58f0ab92ce14638652305486744b1b7a05e1eac0a4ee835d3c6e4eabefb125.scope - libcontainer container bc58f0ab92ce14638652305486744b1b7a05e1eac0a4ee835d3c6e4eabefb125. Sep 8 23:49:28.967938 systemd[1]: Started cri-containerd-bcdf45393e5aca441c135e87d1027a84a80b1aad2893dea201d1455a449edf74.scope - libcontainer container bcdf45393e5aca441c135e87d1027a84a80b1aad2893dea201d1455a449edf74. Sep 8 23:49:28.991488 kubelet[2668]: E0908 23:49:28.991436 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:28.993221 containerd[1537]: time="2025-09-08T23:49:28.992009860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-ljph8,Uid:1cf12ac0-ea01-4425-9bd0-6900d49ccaf0,Namespace:kube-system,Attempt:0,}" Sep 8 23:49:29.000312 containerd[1537]: time="2025-09-08T23:49:29.000271158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fvdsv,Uid:12a4c1c2-203e-4570-9f2f-7b50858e1461,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc58f0ab92ce14638652305486744b1b7a05e1eac0a4ee835d3c6e4eabefb125\"" Sep 8 23:49:29.002623 containerd[1537]: time="2025-09-08T23:49:29.002583995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-45hrg,Uid:c3e0649e-4e45-4a01-a367-b17b42f6ab16,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"bcdf45393e5aca441c135e87d1027a84a80b1aad2893dea201d1455a449edf74\"" Sep 8 23:49:29.004355 kubelet[2668]: E0908 23:49:29.004331 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:29.006060 kubelet[2668]: E0908 23:49:29.005614 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:29.011352 containerd[1537]: time="2025-09-08T23:49:29.010212777Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 8 23:49:29.015036 containerd[1537]: time="2025-09-08T23:49:29.014998833Z" level=info msg="CreateContainer within sandbox \"bcdf45393e5aca441c135e87d1027a84a80b1aad2893dea201d1455a449edf74\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 8 23:49:29.020069 containerd[1537]: time="2025-09-08T23:49:29.020029335Z" level=info msg="connecting to shim 95532255d93dcd2532e7c9b6ebae787e7e705ae6fd18451d6869a54104aebd3f" address="unix:///run/containerd/s/5145821cf8354d3a61cf8cc2a3caa9e7fe7954c1b230eb219a3efc231c42f11e" namespace=k8s.io protocol=ttrpc version=3 Sep 8 23:49:29.025232 containerd[1537]: time="2025-09-08T23:49:29.025186859Z" level=info msg="Container a87d1a0efb5b3bcd026ba706746ee64cb1e6ea24a2468f440d3fda93084d207d: CDI devices from CRI Config.CDIDevices: []" Sep 8 23:49:29.039867 containerd[1537]: time="2025-09-08T23:49:29.039824188Z" level=info msg="CreateContainer within sandbox \"bcdf45393e5aca441c135e87d1027a84a80b1aad2893dea201d1455a449edf74\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a87d1a0efb5b3bcd026ba706746ee64cb1e6ea24a2468f440d3fda93084d207d\"" Sep 8 23:49:29.040365 containerd[1537]: time="2025-09-08T23:49:29.040306122Z" level=info msg="StartContainer for 
\"a87d1a0efb5b3bcd026ba706746ee64cb1e6ea24a2468f440d3fda93084d207d\"" Sep 8 23:49:29.041934 containerd[1537]: time="2025-09-08T23:49:29.041835669Z" level=info msg="connecting to shim a87d1a0efb5b3bcd026ba706746ee64cb1e6ea24a2468f440d3fda93084d207d" address="unix:///run/containerd/s/827bde8934f2ef3a35f19f8b5ecfec10724178787ddc290025642a63e724af34" protocol=ttrpc version=3 Sep 8 23:49:29.047679 systemd[1]: Started cri-containerd-95532255d93dcd2532e7c9b6ebae787e7e705ae6fd18451d6869a54104aebd3f.scope - libcontainer container 95532255d93dcd2532e7c9b6ebae787e7e705ae6fd18451d6869a54104aebd3f. Sep 8 23:49:29.068668 systemd[1]: Started cri-containerd-a87d1a0efb5b3bcd026ba706746ee64cb1e6ea24a2468f440d3fda93084d207d.scope - libcontainer container a87d1a0efb5b3bcd026ba706746ee64cb1e6ea24a2468f440d3fda93084d207d. Sep 8 23:49:29.100644 containerd[1537]: time="2025-09-08T23:49:29.100600116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-ljph8,Uid:1cf12ac0-ea01-4425-9bd0-6900d49ccaf0,Namespace:kube-system,Attempt:0,} returns sandbox id \"95532255d93dcd2532e7c9b6ebae787e7e705ae6fd18451d6869a54104aebd3f\"" Sep 8 23:49:29.101276 kubelet[2668]: E0908 23:49:29.101252 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:29.114130 containerd[1537]: time="2025-09-08T23:49:29.114090205Z" level=info msg="StartContainer for \"a87d1a0efb5b3bcd026ba706746ee64cb1e6ea24a2468f440d3fda93084d207d\" returns successfully" Sep 8 23:49:29.735026 kubelet[2668]: E0908 23:49:29.734983 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:30.137540 kubelet[2668]: E0908 23:49:30.136454 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:30.157084 kubelet[2668]: I0908 23:49:30.157021 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-45hrg" podStartSLOduration=3.157005749 podStartE2EDuration="3.157005749s" podCreationTimestamp="2025-09-08 23:49:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:49:29.804423109 +0000 UTC m=+6.201258605" watchObservedRunningTime="2025-09-08 23:49:30.157005749 +0000 UTC m=+6.553841245" Sep 8 23:49:30.736220 kubelet[2668]: E0908 23:49:30.736149 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:31.738937 kubelet[2668]: E0908 23:49:31.738879 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:32.395226 kubelet[2668]: E0908 23:49:32.395191 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:32.740728 kubelet[2668]: E0908 23:49:32.740545 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:37.572720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2278871268.mount: Deactivated successfully. 
Sep 8 23:49:38.186398 kubelet[2668]: E0908 23:49:38.186366 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:39.308957 containerd[1537]: time="2025-09-08T23:49:39.308860168Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 8 23:49:39.311490 containerd[1537]: time="2025-09-08T23:49:39.310894220Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:49:39.312494 containerd[1537]: time="2025-09-08T23:49:39.311876748Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:49:39.313612 containerd[1537]: time="2025-09-08T23:49:39.313575265Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.302270759s" Sep 8 23:49:39.313735 containerd[1537]: time="2025-09-08T23:49:39.313719654Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 8 23:49:39.321687 containerd[1537]: time="2025-09-08T23:49:39.321645038Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 8 23:49:39.342962 containerd[1537]: time="2025-09-08T23:49:39.342918370Z" level=info msg="CreateContainer within sandbox \"bc58f0ab92ce14638652305486744b1b7a05e1eac0a4ee835d3c6e4eabefb125\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 8 23:49:39.391020 containerd[1537]: time="2025-09-08T23:49:39.390971594Z" level=info msg="Container f9bdfd316904f0484565a9ba188c3715e7319c86d7ce19493ea1f0e8a0c1b0c0: CDI devices from CRI Config.CDIDevices: []" Sep 8 23:49:39.404775 containerd[1537]: time="2025-09-08T23:49:39.404721713Z" level=info msg="CreateContainer within sandbox \"bc58f0ab92ce14638652305486744b1b7a05e1eac0a4ee835d3c6e4eabefb125\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f9bdfd316904f0484565a9ba188c3715e7319c86d7ce19493ea1f0e8a0c1b0c0\"" Sep 8 23:49:39.405484 containerd[1537]: time="2025-09-08T23:49:39.405445700Z" level=info msg="StartContainer for \"f9bdfd316904f0484565a9ba188c3715e7319c86d7ce19493ea1f0e8a0c1b0c0\"" Sep 8 23:49:39.406751 containerd[1537]: time="2025-09-08T23:49:39.406707049Z" level=info msg="connecting to shim f9bdfd316904f0484565a9ba188c3715e7319c86d7ce19493ea1f0e8a0c1b0c0" address="unix:///run/containerd/s/5e9423592178061137618fede46392f2f69e62e2f1180d6ac632199963a30663" protocol=ttrpc version=3 Sep 8 23:49:39.464701 systemd[1]: Started cri-containerd-f9bdfd316904f0484565a9ba188c3715e7319c86d7ce19493ea1f0e8a0c1b0c0.scope - libcontainer container f9bdfd316904f0484565a9ba188c3715e7319c86d7ce19493ea1f0e8a0c1b0c0. 
Sep 8 23:49:39.509754 kernel: hrtimer: interrupt took 13123366 ns Sep 8 23:49:39.523677 containerd[1537]: time="2025-09-08T23:49:39.523627502Z" level=info msg="StartContainer for \"f9bdfd316904f0484565a9ba188c3715e7319c86d7ce19493ea1f0e8a0c1b0c0\" returns successfully" Sep 8 23:49:39.538505 systemd[1]: cri-containerd-f9bdfd316904f0484565a9ba188c3715e7319c86d7ce19493ea1f0e8a0c1b0c0.scope: Deactivated successfully. Sep 8 23:49:39.558569 containerd[1537]: time="2025-09-08T23:49:39.558385493Z" level=info msg="received exit event container_id:\"f9bdfd316904f0484565a9ba188c3715e7319c86d7ce19493ea1f0e8a0c1b0c0\" id:\"f9bdfd316904f0484565a9ba188c3715e7319c86d7ce19493ea1f0e8a0c1b0c0\" pid:3097 exited_at:{seconds:1757375379 nanos:550366396}" Sep 8 23:49:39.559496 containerd[1537]: time="2025-09-08T23:49:39.559376301Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9bdfd316904f0484565a9ba188c3715e7319c86d7ce19493ea1f0e8a0c1b0c0\" id:\"f9bdfd316904f0484565a9ba188c3715e7319c86d7ce19493ea1f0e8a0c1b0c0\" pid:3097 exited_at:{seconds:1757375379 nanos:550366396}" Sep 8 23:49:39.564570 update_engine[1524]: I20250908 23:49:39.564502 1524 update_attempter.cc:509] Updating boot flags... 
Sep 8 23:49:39.762637 kubelet[2668]: E0908 23:49:39.761850 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:39.766114 containerd[1537]: time="2025-09-08T23:49:39.765607376Z" level=info msg="CreateContainer within sandbox \"bc58f0ab92ce14638652305486744b1b7a05e1eac0a4ee835d3c6e4eabefb125\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 8 23:49:39.779361 containerd[1537]: time="2025-09-08T23:49:39.779310339Z" level=info msg="Container 635252e274728859729076a7b7071c3a0d2c71e1a83ad90b6423bfa64c96b378: CDI devices from CRI Config.CDIDevices: []" Sep 8 23:49:39.785483 containerd[1537]: time="2025-09-08T23:49:39.785433933Z" level=info msg="CreateContainer within sandbox \"bc58f0ab92ce14638652305486744b1b7a05e1eac0a4ee835d3c6e4eabefb125\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"635252e274728859729076a7b7071c3a0d2c71e1a83ad90b6423bfa64c96b378\"" Sep 8 23:49:39.786167 containerd[1537]: time="2025-09-08T23:49:39.786140242Z" level=info msg="StartContainer for \"635252e274728859729076a7b7071c3a0d2c71e1a83ad90b6423bfa64c96b378\"" Sep 8 23:49:39.790236 containerd[1537]: time="2025-09-08T23:49:39.789878730Z" level=info msg="connecting to shim 635252e274728859729076a7b7071c3a0d2c71e1a83ad90b6423bfa64c96b378" address="unix:///run/containerd/s/5e9423592178061137618fede46392f2f69e62e2f1180d6ac632199963a30663" protocol=ttrpc version=3 Sep 8 23:49:39.819684 systemd[1]: Started cri-containerd-635252e274728859729076a7b7071c3a0d2c71e1a83ad90b6423bfa64c96b378.scope - libcontainer container 635252e274728859729076a7b7071c3a0d2c71e1a83ad90b6423bfa64c96b378. 
Sep 8 23:49:39.861759 containerd[1537]: time="2025-09-08T23:49:39.861703744Z" level=info msg="StartContainer for \"635252e274728859729076a7b7071c3a0d2c71e1a83ad90b6423bfa64c96b378\" returns successfully" Sep 8 23:49:39.874799 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 8 23:49:39.875014 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:49:39.875535 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 8 23:49:39.877364 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 8 23:49:39.879533 systemd[1]: cri-containerd-635252e274728859729076a7b7071c3a0d2c71e1a83ad90b6423bfa64c96b378.scope: Deactivated successfully. Sep 8 23:49:39.882265 containerd[1537]: time="2025-09-08T23:49:39.881270440Z" level=info msg="TaskExit event in podsandbox handler container_id:\"635252e274728859729076a7b7071c3a0d2c71e1a83ad90b6423bfa64c96b378\" id:\"635252e274728859729076a7b7071c3a0d2c71e1a83ad90b6423bfa64c96b378\" pid:3159 exited_at:{seconds:1757375379 nanos:880929105}" Sep 8 23:49:39.882265 containerd[1537]: time="2025-09-08T23:49:39.881271080Z" level=info msg="received exit event container_id:\"635252e274728859729076a7b7071c3a0d2c71e1a83ad90b6423bfa64c96b378\" id:\"635252e274728859729076a7b7071c3a0d2c71e1a83ad90b6423bfa64c96b378\" pid:3159 exited_at:{seconds:1757375379 nanos:880929105}" Sep 8 23:49:39.901005 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:49:40.390110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9bdfd316904f0484565a9ba188c3715e7319c86d7ce19493ea1f0e8a0c1b0c0-rootfs.mount: Deactivated successfully. Sep 8 23:49:40.585617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3988654239.mount: Deactivated successfully. 
Sep 8 23:49:40.765360 kubelet[2668]: E0908 23:49:40.765167 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:40.775668 containerd[1537]: time="2025-09-08T23:49:40.775617934Z" level=info msg="CreateContainer within sandbox \"bc58f0ab92ce14638652305486744b1b7a05e1eac0a4ee835d3c6e4eabefb125\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 8 23:49:40.791489 containerd[1537]: time="2025-09-08T23:49:40.791280706Z" level=info msg="Container 5ccdfaadba605d29e1ce8ad6847041d122e471e2a83514ef498ad2eb9f97c30a: CDI devices from CRI Config.CDIDevices: []" Sep 8 23:49:40.798232 containerd[1537]: time="2025-09-08T23:49:40.798184795Z" level=info msg="CreateContainer within sandbox \"bc58f0ab92ce14638652305486744b1b7a05e1eac0a4ee835d3c6e4eabefb125\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5ccdfaadba605d29e1ce8ad6847041d122e471e2a83514ef498ad2eb9f97c30a\"" Sep 8 23:49:40.798748 containerd[1537]: time="2025-09-08T23:49:40.798707399Z" level=info msg="StartContainer for \"5ccdfaadba605d29e1ce8ad6847041d122e471e2a83514ef498ad2eb9f97c30a\"" Sep 8 23:49:40.800139 containerd[1537]: time="2025-09-08T23:49:40.800113463Z" level=info msg="connecting to shim 5ccdfaadba605d29e1ce8ad6847041d122e471e2a83514ef498ad2eb9f97c30a" address="unix:///run/containerd/s/5e9423592178061137618fede46392f2f69e62e2f1180d6ac632199963a30663" protocol=ttrpc version=3 Sep 8 23:49:40.825671 systemd[1]: Started cri-containerd-5ccdfaadba605d29e1ce8ad6847041d122e471e2a83514ef498ad2eb9f97c30a.scope - libcontainer container 5ccdfaadba605d29e1ce8ad6847041d122e471e2a83514ef498ad2eb9f97c30a. 
Sep 8 23:49:40.860208 containerd[1537]: time="2025-09-08T23:49:40.860163327Z" level=info msg="StartContainer for \"5ccdfaadba605d29e1ce8ad6847041d122e471e2a83514ef498ad2eb9f97c30a\" returns successfully" Sep 8 23:49:40.863955 systemd[1]: cri-containerd-5ccdfaadba605d29e1ce8ad6847041d122e471e2a83514ef498ad2eb9f97c30a.scope: Deactivated successfully. Sep 8 23:49:40.875552 containerd[1537]: time="2025-09-08T23:49:40.875507881Z" level=info msg="received exit event container_id:\"5ccdfaadba605d29e1ce8ad6847041d122e471e2a83514ef498ad2eb9f97c30a\" id:\"5ccdfaadba605d29e1ce8ad6847041d122e471e2a83514ef498ad2eb9f97c30a\" pid:3210 exited_at:{seconds:1757375380 nanos:875254858}" Sep 8 23:49:40.875837 containerd[1537]: time="2025-09-08T23:49:40.875637352Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ccdfaadba605d29e1ce8ad6847041d122e471e2a83514ef498ad2eb9f97c30a\" id:\"5ccdfaadba605d29e1ce8ad6847041d122e471e2a83514ef498ad2eb9f97c30a\" pid:3210 exited_at:{seconds:1757375380 nanos:875254858}" Sep 8 23:49:41.769757 kubelet[2668]: E0908 23:49:41.769723 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:41.774999 containerd[1537]: time="2025-09-08T23:49:41.774961510Z" level=info msg="CreateContainer within sandbox \"bc58f0ab92ce14638652305486744b1b7a05e1eac0a4ee835d3c6e4eabefb125\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 8 23:49:41.783665 containerd[1537]: time="2025-09-08T23:49:41.783623436Z" level=info msg="Container fd6e9d551248f32f1ce3a421c27cb3c0ba8189f87ec7d98867846881c989ded5: CDI devices from CRI Config.CDIDevices: []" Sep 8 23:49:41.791069 containerd[1537]: time="2025-09-08T23:49:41.791030363Z" level=info msg="CreateContainer within sandbox \"bc58f0ab92ce14638652305486744b1b7a05e1eac0a4ee835d3c6e4eabefb125\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns 
container id \"fd6e9d551248f32f1ce3a421c27cb3c0ba8189f87ec7d98867846881c989ded5\"" Sep 8 23:49:41.791503 containerd[1537]: time="2025-09-08T23:49:41.791479134Z" level=info msg="StartContainer for \"fd6e9d551248f32f1ce3a421c27cb3c0ba8189f87ec7d98867846881c989ded5\"" Sep 8 23:49:41.792319 containerd[1537]: time="2025-09-08T23:49:41.792296762Z" level=info msg="connecting to shim fd6e9d551248f32f1ce3a421c27cb3c0ba8189f87ec7d98867846881c989ded5" address="unix:///run/containerd/s/5e9423592178061137618fede46392f2f69e62e2f1180d6ac632199963a30663" protocol=ttrpc version=3 Sep 8 23:49:41.822688 systemd[1]: Started cri-containerd-fd6e9d551248f32f1ce3a421c27cb3c0ba8189f87ec7d98867846881c989ded5.scope - libcontainer container fd6e9d551248f32f1ce3a421c27cb3c0ba8189f87ec7d98867846881c989ded5. Sep 8 23:49:41.850081 systemd[1]: cri-containerd-fd6e9d551248f32f1ce3a421c27cb3c0ba8189f87ec7d98867846881c989ded5.scope: Deactivated successfully. Sep 8 23:49:41.851073 containerd[1537]: time="2025-09-08T23:49:41.851034886Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd6e9d551248f32f1ce3a421c27cb3c0ba8189f87ec7d98867846881c989ded5\" id:\"fd6e9d551248f32f1ce3a421c27cb3c0ba8189f87ec7d98867846881c989ded5\" pid:3252 exited_at:{seconds:1757375381 nanos:850795621}" Sep 8 23:49:41.869284 containerd[1537]: time="2025-09-08T23:49:41.869244601Z" level=info msg="received exit event container_id:\"fd6e9d551248f32f1ce3a421c27cb3c0ba8189f87ec7d98867846881c989ded5\" id:\"fd6e9d551248f32f1ce3a421c27cb3c0ba8189f87ec7d98867846881c989ded5\" pid:3252 exited_at:{seconds:1757375381 nanos:850795621}" Sep 8 23:49:41.875817 containerd[1537]: time="2025-09-08T23:49:41.875781863Z" level=info msg="StartContainer for \"fd6e9d551248f32f1ce3a421c27cb3c0ba8189f87ec7d98867846881c989ded5\" returns successfully" Sep 8 23:49:41.886973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd6e9d551248f32f1ce3a421c27cb3c0ba8189f87ec7d98867846881c989ded5-rootfs.mount: Deactivated successfully. 
Sep 8 23:49:42.776026 kubelet[2668]: E0908 23:49:42.775976 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:42.785852 containerd[1537]: time="2025-09-08T23:49:42.782956020Z" level=info msg="CreateContainer within sandbox \"bc58f0ab92ce14638652305486744b1b7a05e1eac0a4ee835d3c6e4eabefb125\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 8 23:49:42.800136 containerd[1537]: time="2025-09-08T23:49:42.800092552Z" level=info msg="Container da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775: CDI devices from CRI Config.CDIDevices: []" Sep 8 23:49:42.806890 containerd[1537]: time="2025-09-08T23:49:42.806854307Z" level=info msg="CreateContainer within sandbox \"bc58f0ab92ce14638652305486744b1b7a05e1eac0a4ee835d3c6e4eabefb125\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775\"" Sep 8 23:49:42.807312 containerd[1537]: time="2025-09-08T23:49:42.807271762Z" level=info msg="StartContainer for \"da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775\"" Sep 8 23:49:42.808234 containerd[1537]: time="2025-09-08T23:49:42.808210466Z" level=info msg="connecting to shim da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775" address="unix:///run/containerd/s/5e9423592178061137618fede46392f2f69e62e2f1180d6ac632199963a30663" protocol=ttrpc version=3 Sep 8 23:49:42.837668 systemd[1]: Started cri-containerd-da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775.scope - libcontainer container da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775. 
Sep 8 23:49:42.867753 containerd[1537]: time="2025-09-08T23:49:42.867503031Z" level=info msg="StartContainer for \"da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775\" returns successfully" Sep 8 23:49:42.947962 containerd[1537]: time="2025-09-08T23:49:42.947921810Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775\" id:\"75f0b0f66fbe0a0190b4b06ae72947362a1699b41877ba2a572118ab8c024df0\" pid:3321 exited_at:{seconds:1757375382 nanos:947624148}" Sep 8 23:49:43.003582 kubelet[2668]: I0908 23:49:43.003551 2668 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 8 23:49:43.044793 systemd[1]: Created slice kubepods-burstable-podc7838be9_c635_4484_bf04_64efda014302.slice - libcontainer container kubepods-burstable-podc7838be9_c635_4484_bf04_64efda014302.slice. Sep 8 23:49:43.052270 systemd[1]: Created slice kubepods-burstable-pod6a8fa9aa_b4fa_4db0_954b_418f21066686.slice - libcontainer container kubepods-burstable-pod6a8fa9aa_b4fa_4db0_954b_418f21066686.slice. 
Sep 8 23:49:43.119663 kubelet[2668]: I0908 23:49:43.119624 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a8fa9aa-b4fa-4db0-954b-418f21066686-config-volume\") pod \"coredns-674b8bbfcf-tlxbz\" (UID: \"6a8fa9aa-b4fa-4db0-954b-418f21066686\") " pod="kube-system/coredns-674b8bbfcf-tlxbz" Sep 8 23:49:43.119663 kubelet[2668]: I0908 23:49:43.119667 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8zsz\" (UniqueName: \"kubernetes.io/projected/6a8fa9aa-b4fa-4db0-954b-418f21066686-kube-api-access-q8zsz\") pod \"coredns-674b8bbfcf-tlxbz\" (UID: \"6a8fa9aa-b4fa-4db0-954b-418f21066686\") " pod="kube-system/coredns-674b8bbfcf-tlxbz" Sep 8 23:49:43.119855 kubelet[2668]: I0908 23:49:43.119697 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2dg2\" (UniqueName: \"kubernetes.io/projected/c7838be9-c635-4484-bf04-64efda014302-kube-api-access-c2dg2\") pod \"coredns-674b8bbfcf-ptxzp\" (UID: \"c7838be9-c635-4484-bf04-64efda014302\") " pod="kube-system/coredns-674b8bbfcf-ptxzp" Sep 8 23:49:43.119855 kubelet[2668]: I0908 23:49:43.119716 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7838be9-c635-4484-bf04-64efda014302-config-volume\") pod \"coredns-674b8bbfcf-ptxzp\" (UID: \"c7838be9-c635-4484-bf04-64efda014302\") " pod="kube-system/coredns-674b8bbfcf-ptxzp" Sep 8 23:49:43.349509 kubelet[2668]: E0908 23:49:43.349220 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:43.350382 containerd[1537]: time="2025-09-08T23:49:43.350351391Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-ptxzp,Uid:c7838be9-c635-4484-bf04-64efda014302,Namespace:kube-system,Attempt:0,}" Sep 8 23:49:43.356659 kubelet[2668]: E0908 23:49:43.356634 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:43.357347 containerd[1537]: time="2025-09-08T23:49:43.357225205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tlxbz,Uid:6a8fa9aa-b4fa-4db0-954b-418f21066686,Namespace:kube-system,Attempt:0,}" Sep 8 23:49:43.784681 kubelet[2668]: E0908 23:49:43.783771 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:44.785008 kubelet[2668]: E0908 23:49:44.784843 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:45.090721 containerd[1537]: time="2025-09-08T23:49:45.090007450Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:49:45.090721 containerd[1537]: time="2025-09-08T23:49:45.090590221Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 8 23:49:45.091497 containerd[1537]: time="2025-09-08T23:49:45.091425340Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:49:45.092684 containerd[1537]: time="2025-09-08T23:49:45.092649359Z" level=info msg="Pulled image 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 5.770964044s" Sep 8 23:49:45.092684 containerd[1537]: time="2025-09-08T23:49:45.092681997Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 8 23:49:45.097677 containerd[1537]: time="2025-09-08T23:49:45.097647232Z" level=info msg="CreateContainer within sandbox \"95532255d93dcd2532e7c9b6ebae787e7e705ae6fd18451d6869a54104aebd3f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 8 23:49:45.104504 containerd[1537]: time="2025-09-08T23:49:45.103973360Z" level=info msg="Container b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311: CDI devices from CRI Config.CDIDevices: []" Sep 8 23:49:45.109319 containerd[1537]: time="2025-09-08T23:49:45.109282217Z" level=info msg="CreateContainer within sandbox \"95532255d93dcd2532e7c9b6ebae787e7e705ae6fd18451d6869a54104aebd3f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311\"" Sep 8 23:49:45.110561 containerd[1537]: time="2025-09-08T23:49:45.110528876Z" level=info msg="StartContainer for \"b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311\"" Sep 8 23:49:45.111497 containerd[1537]: time="2025-09-08T23:49:45.111453110Z" level=info msg="connecting to shim b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311" address="unix:///run/containerd/s/5145821cf8354d3a61cf8cc2a3caa9e7fe7954c1b230eb219a3efc231c42f11e" protocol=ttrpc 
version=3 Sep 8 23:49:45.134636 systemd[1]: Started cri-containerd-b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311.scope - libcontainer container b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311. Sep 8 23:49:45.196122 containerd[1537]: time="2025-09-08T23:49:45.196086450Z" level=info msg="StartContainer for \"b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311\" returns successfully" Sep 8 23:49:45.788420 kubelet[2668]: E0908 23:49:45.788378 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:45.788802 kubelet[2668]: E0908 23:49:45.788672 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:45.806414 kubelet[2668]: I0908 23:49:45.806358 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fvdsv" podStartSLOduration=8.49318918 podStartE2EDuration="18.806337305s" podCreationTimestamp="2025-09-08 23:49:27 +0000 UTC" firstStartedPulling="2025-09-08 23:49:29.008223773 +0000 UTC m=+5.405059269" lastFinishedPulling="2025-09-08 23:49:39.321371938 +0000 UTC m=+15.718207394" observedRunningTime="2025-09-08 23:49:43.817191993 +0000 UTC m=+20.214027489" watchObservedRunningTime="2025-09-08 23:49:45.806337305 +0000 UTC m=+22.203172801" Sep 8 23:49:45.806586 kubelet[2668]: I0908 23:49:45.806532 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-ljph8" podStartSLOduration=1.816226007 podStartE2EDuration="17.806527536s" podCreationTimestamp="2025-09-08 23:49:28 +0000 UTC" firstStartedPulling="2025-09-08 23:49:29.103287664 +0000 UTC m=+5.500123160" lastFinishedPulling="2025-09-08 23:49:45.093589193 +0000 UTC m=+21.490424689" observedRunningTime="2025-09-08 
23:49:45.806193432 +0000 UTC m=+22.203028928" watchObservedRunningTime="2025-09-08 23:49:45.806527536 +0000 UTC m=+22.203362992" Sep 8 23:49:46.792312 kubelet[2668]: E0908 23:49:46.791758 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:48.886103 systemd-networkd[1452]: cilium_host: Link UP Sep 8 23:49:48.886229 systemd-networkd[1452]: cilium_net: Link UP Sep 8 23:49:48.886346 systemd-networkd[1452]: cilium_net: Gained carrier Sep 8 23:49:48.886451 systemd-networkd[1452]: cilium_host: Gained carrier Sep 8 23:49:48.965504 systemd-networkd[1452]: cilium_vxlan: Link UP Sep 8 23:49:48.965509 systemd-networkd[1452]: cilium_vxlan: Gained carrier Sep 8 23:49:49.085638 systemd-networkd[1452]: cilium_host: Gained IPv6LL Sep 8 23:49:49.222808 kernel: NET: Registered PF_ALG protocol family Sep 8 23:49:49.748675 systemd-networkd[1452]: cilium_net: Gained IPv6LL Sep 8 23:49:49.798165 systemd-networkd[1452]: lxc_health: Link UP Sep 8 23:49:49.801038 systemd-networkd[1452]: lxc_health: Gained carrier Sep 8 23:49:49.915502 kernel: eth0: renamed from tmp50fdb Sep 8 23:49:49.927906 kernel: eth0: renamed from tmpa6d4c Sep 8 23:49:49.927410 systemd-networkd[1452]: lxc760ee96d1752: Link UP Sep 8 23:49:49.928369 systemd-networkd[1452]: lxcb04d7f10b8ce: Link UP Sep 8 23:49:49.928642 systemd-networkd[1452]: lxc760ee96d1752: Gained carrier Sep 8 23:49:49.929622 systemd-networkd[1452]: lxcb04d7f10b8ce: Gained carrier Sep 8 23:49:50.453635 systemd-networkd[1452]: cilium_vxlan: Gained IPv6LL Sep 8 23:49:50.922885 kubelet[2668]: E0908 23:49:50.922628 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:50.964650 systemd-networkd[1452]: lxc760ee96d1752: Gained IPv6LL Sep 8 23:49:51.668648 systemd-networkd[1452]: 
lxcb04d7f10b8ce: Gained IPv6LL Sep 8 23:49:51.796590 systemd-networkd[1452]: lxc_health: Gained IPv6LL Sep 8 23:49:51.801551 kubelet[2668]: E0908 23:49:51.801506 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:51.868297 systemd[1]: Started sshd@7-10.0.0.118:22-10.0.0.1:60706.service - OpenSSH per-connection server daemon (10.0.0.1:60706). Sep 8 23:49:51.925486 sshd[3851]: Accepted publickey for core from 10.0.0.1 port 60706 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:49:51.928208 sshd-session[3851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:49:51.932511 systemd-logind[1517]: New session 8 of user core. Sep 8 23:49:51.942225 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 8 23:49:52.075254 sshd[3854]: Connection closed by 10.0.0.1 port 60706 Sep 8 23:49:52.075577 sshd-session[3851]: pam_unix(sshd:session): session closed for user core Sep 8 23:49:52.079698 systemd[1]: sshd@7-10.0.0.118:22-10.0.0.1:60706.service: Deactivated successfully. Sep 8 23:49:52.081567 systemd[1]: session-8.scope: Deactivated successfully. Sep 8 23:49:52.082354 systemd-logind[1517]: Session 8 logged out. Waiting for processes to exit. Sep 8 23:49:52.083700 systemd-logind[1517]: Removed session 8. 
Sep 8 23:49:52.801939 kubelet[2668]: E0908 23:49:52.801893 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:53.589575 containerd[1537]: time="2025-09-08T23:49:53.587265508Z" level=info msg="connecting to shim 50fdb3edae868b89c5cc043acc4ee3deb93dd2e98ea4fafe5ada418a0691a29c" address="unix:///run/containerd/s/a7c8e181f6dafb2e06d35b834dbe4d9fedc14c8b662907c6e4f309428f3cd2e6" namespace=k8s.io protocol=ttrpc version=3 Sep 8 23:49:53.589575 containerd[1537]: time="2025-09-08T23:49:53.588848902Z" level=info msg="connecting to shim a6d4c0e77846cf3d0508711f651c2039d3cfc809232e99de206690adc37e26df" address="unix:///run/containerd/s/ff6c165b24e356c08f9a4a7effc200b212c9ea43f702d5004199379336896e89" namespace=k8s.io protocol=ttrpc version=3 Sep 8 23:49:53.616669 systemd[1]: Started cri-containerd-a6d4c0e77846cf3d0508711f651c2039d3cfc809232e99de206690adc37e26df.scope - libcontainer container a6d4c0e77846cf3d0508711f651c2039d3cfc809232e99de206690adc37e26df. Sep 8 23:49:53.619606 systemd[1]: Started cri-containerd-50fdb3edae868b89c5cc043acc4ee3deb93dd2e98ea4fafe5ada418a0691a29c.scope - libcontainer container 50fdb3edae868b89c5cc043acc4ee3deb93dd2e98ea4fafe5ada418a0691a29c. 
Sep 8 23:49:53.632854 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 8 23:49:53.648606 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 8 23:49:53.666377 containerd[1537]: time="2025-09-08T23:49:53.666275779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ptxzp,Uid:c7838be9-c635-4484-bf04-64efda014302,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6d4c0e77846cf3d0508711f651c2039d3cfc809232e99de206690adc37e26df\"" Sep 8 23:49:53.667190 kubelet[2668]: E0908 23:49:53.667158 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:53.675019 containerd[1537]: time="2025-09-08T23:49:53.674984803Z" level=info msg="CreateContainer within sandbox \"a6d4c0e77846cf3d0508711f651c2039d3cfc809232e99de206690adc37e26df\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 8 23:49:53.683103 containerd[1537]: time="2025-09-08T23:49:53.682771013Z" level=info msg="Container 691c7a4723bb9eab59db566cab063e15692c18fb437f30a267a0755345ec459a: CDI devices from CRI Config.CDIDevices: []" Sep 8 23:49:53.691165 containerd[1537]: time="2025-09-08T23:49:53.691114847Z" level=info msg="CreateContainer within sandbox \"a6d4c0e77846cf3d0508711f651c2039d3cfc809232e99de206690adc37e26df\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"691c7a4723bb9eab59db566cab063e15692c18fb437f30a267a0755345ec459a\"" Sep 8 23:49:53.692497 containerd[1537]: time="2025-09-08T23:49:53.691649072Z" level=info msg="StartContainer for \"691c7a4723bb9eab59db566cab063e15692c18fb437f30a267a0755345ec459a\"" Sep 8 23:49:53.692497 containerd[1537]: time="2025-09-08T23:49:53.692441528Z" level=info msg="connecting to shim 
691c7a4723bb9eab59db566cab063e15692c18fb437f30a267a0755345ec459a" address="unix:///run/containerd/s/ff6c165b24e356c08f9a4a7effc200b212c9ea43f702d5004199379336896e89" protocol=ttrpc version=3 Sep 8 23:49:53.703101 containerd[1537]: time="2025-09-08T23:49:53.703063335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tlxbz,Uid:6a8fa9aa-b4fa-4db0-954b-418f21066686,Namespace:kube-system,Attempt:0,} returns sandbox id \"50fdb3edae868b89c5cc043acc4ee3deb93dd2e98ea4fafe5ada418a0691a29c\"" Sep 8 23:49:53.704030 kubelet[2668]: E0908 23:49:53.704006 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:53.711701 containerd[1537]: time="2025-09-08T23:49:53.710535595Z" level=info msg="CreateContainer within sandbox \"50fdb3edae868b89c5cc043acc4ee3deb93dd2e98ea4fafe5ada418a0691a29c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 8 23:49:53.722300 containerd[1537]: time="2025-09-08T23:49:53.722251489Z" level=info msg="Container f31fd7f5ffccd53cf33ce3ace6056a25500b3e8e2dd13d1df2b2a1ae1928f64b: CDI devices from CRI Config.CDIDevices: []" Sep 8 23:49:53.722702 systemd[1]: Started cri-containerd-691c7a4723bb9eab59db566cab063e15692c18fb437f30a267a0755345ec459a.scope - libcontainer container 691c7a4723bb9eab59db566cab063e15692c18fb437f30a267a0755345ec459a. 
Sep 8 23:49:53.727678 containerd[1537]: time="2025-09-08T23:49:53.727629011Z" level=info msg="CreateContainer within sandbox \"50fdb3edae868b89c5cc043acc4ee3deb93dd2e98ea4fafe5ada418a0691a29c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f31fd7f5ffccd53cf33ce3ace6056a25500b3e8e2dd13d1df2b2a1ae1928f64b\"" Sep 8 23:49:53.729140 containerd[1537]: time="2025-09-08T23:49:53.729115767Z" level=info msg="StartContainer for \"f31fd7f5ffccd53cf33ce3ace6056a25500b3e8e2dd13d1df2b2a1ae1928f64b\"" Sep 8 23:49:53.730189 containerd[1537]: time="2025-09-08T23:49:53.730144937Z" level=info msg="connecting to shim f31fd7f5ffccd53cf33ce3ace6056a25500b3e8e2dd13d1df2b2a1ae1928f64b" address="unix:///run/containerd/s/a7c8e181f6dafb2e06d35b834dbe4d9fedc14c8b662907c6e4f309428f3cd2e6" protocol=ttrpc version=3 Sep 8 23:49:53.756697 systemd[1]: Started cri-containerd-f31fd7f5ffccd53cf33ce3ace6056a25500b3e8e2dd13d1df2b2a1ae1928f64b.scope - libcontainer container f31fd7f5ffccd53cf33ce3ace6056a25500b3e8e2dd13d1df2b2a1ae1928f64b. 
Sep 8 23:49:53.762571 containerd[1537]: time="2025-09-08T23:49:53.762534382Z" level=info msg="StartContainer for \"691c7a4723bb9eab59db566cab063e15692c18fb437f30a267a0755345ec459a\" returns successfully" Sep 8 23:49:53.794163 containerd[1537]: time="2025-09-08T23:49:53.794120571Z" level=info msg="StartContainer for \"f31fd7f5ffccd53cf33ce3ace6056a25500b3e8e2dd13d1df2b2a1ae1928f64b\" returns successfully" Sep 8 23:49:53.819273 kubelet[2668]: E0908 23:49:53.819166 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:53.824502 kubelet[2668]: E0908 23:49:53.824454 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:53.852204 kubelet[2668]: I0908 23:49:53.851638 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-tlxbz" podStartSLOduration=25.851608837 podStartE2EDuration="25.851608837s" podCreationTimestamp="2025-09-08 23:49:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:49:53.83453554 +0000 UTC m=+30.231371036" watchObservedRunningTime="2025-09-08 23:49:53.851608837 +0000 UTC m=+30.248444293" Sep 8 23:49:53.852204 kubelet[2668]: I0908 23:49:53.851895 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ptxzp" podStartSLOduration=25.851891028 podStartE2EDuration="25.851891028s" podCreationTimestamp="2025-09-08 23:49:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:49:53.850270636 +0000 UTC m=+30.247106132" watchObservedRunningTime="2025-09-08 23:49:53.851891028 +0000 UTC 
m=+30.248726524" Sep 8 23:49:54.827132 kubelet[2668]: E0908 23:49:54.827048 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:54.828801 kubelet[2668]: E0908 23:49:54.827269 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:55.828486 kubelet[2668]: E0908 23:49:55.828417 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:55.829546 kubelet[2668]: E0908 23:49:55.828930 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:49:57.090991 systemd[1]: Started sshd@8-10.0.0.118:22-10.0.0.1:60714.service - OpenSSH per-connection server daemon (10.0.0.1:60714). Sep 8 23:49:57.141428 sshd[4049]: Accepted publickey for core from 10.0.0.1 port 60714 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:49:57.142790 sshd-session[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:49:57.146535 systemd-logind[1517]: New session 9 of user core. Sep 8 23:49:57.157647 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 8 23:49:57.283751 sshd[4052]: Connection closed by 10.0.0.1 port 60714 Sep 8 23:49:57.284136 sshd-session[4049]: pam_unix(sshd:session): session closed for user core Sep 8 23:49:57.288350 systemd[1]: sshd@8-10.0.0.118:22-10.0.0.1:60714.service: Deactivated successfully. Sep 8 23:49:57.291095 systemd[1]: session-9.scope: Deactivated successfully. Sep 8 23:49:57.293525 systemd-logind[1517]: Session 9 logged out. Waiting for processes to exit. 
Sep 8 23:49:57.295021 systemd-logind[1517]: Removed session 9. Sep 8 23:50:02.305042 systemd[1]: Started sshd@9-10.0.0.118:22-10.0.0.1:60330.service - OpenSSH per-connection server daemon (10.0.0.1:60330). Sep 8 23:50:02.365599 sshd[4070]: Accepted publickey for core from 10.0.0.1 port 60330 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:50:02.367146 sshd-session[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:02.371429 systemd-logind[1517]: New session 10 of user core. Sep 8 23:50:02.378735 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 8 23:50:02.508241 sshd[4073]: Connection closed by 10.0.0.1 port 60330 Sep 8 23:50:02.508626 sshd-session[4070]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:02.512208 systemd[1]: sshd@9-10.0.0.118:22-10.0.0.1:60330.service: Deactivated successfully. Sep 8 23:50:02.515250 systemd[1]: session-10.scope: Deactivated successfully. Sep 8 23:50:02.519429 systemd-logind[1517]: Session 10 logged out. Waiting for processes to exit. Sep 8 23:50:02.521486 systemd-logind[1517]: Removed session 10. Sep 8 23:50:07.529675 systemd[1]: Started sshd@10-10.0.0.118:22-10.0.0.1:60342.service - OpenSSH per-connection server daemon (10.0.0.1:60342). Sep 8 23:50:07.595283 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 60342 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:50:07.597846 sshd-session[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:07.604630 systemd-logind[1517]: New session 11 of user core. Sep 8 23:50:07.614656 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 8 23:50:07.755762 sshd[4090]: Connection closed by 10.0.0.1 port 60342 Sep 8 23:50:07.757791 sshd-session[4087]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:07.765257 systemd[1]: sshd@10-10.0.0.118:22-10.0.0.1:60342.service: Deactivated successfully. 
Sep 8 23:50:07.767144 systemd[1]: session-11.scope: Deactivated successfully. Sep 8 23:50:07.768135 systemd-logind[1517]: Session 11 logged out. Waiting for processes to exit. Sep 8 23:50:07.770967 systemd[1]: Started sshd@11-10.0.0.118:22-10.0.0.1:60346.service - OpenSSH per-connection server daemon (10.0.0.1:60346). Sep 8 23:50:07.772744 systemd-logind[1517]: Removed session 11. Sep 8 23:50:07.842015 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 60346 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:50:07.843435 sshd-session[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:07.848819 systemd-logind[1517]: New session 12 of user core. Sep 8 23:50:07.859732 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 8 23:50:08.021399 sshd[4107]: Connection closed by 10.0.0.1 port 60346 Sep 8 23:50:08.021969 sshd-session[4104]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:08.035747 systemd[1]: sshd@11-10.0.0.118:22-10.0.0.1:60346.service: Deactivated successfully. Sep 8 23:50:08.037393 systemd[1]: session-12.scope: Deactivated successfully. Sep 8 23:50:08.038909 systemd-logind[1517]: Session 12 logged out. Waiting for processes to exit. Sep 8 23:50:08.041808 systemd[1]: Started sshd@12-10.0.0.118:22-10.0.0.1:60356.service - OpenSSH per-connection server daemon (10.0.0.1:60356). Sep 8 23:50:08.044898 systemd-logind[1517]: Removed session 12. Sep 8 23:50:08.110102 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 60356 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:50:08.111426 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:08.116234 systemd-logind[1517]: New session 13 of user core. Sep 8 23:50:08.126698 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 8 23:50:08.251631 sshd[4122]: Connection closed by 10.0.0.1 port 60356 Sep 8 23:50:08.251977 sshd-session[4119]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:08.255661 systemd-logind[1517]: Session 13 logged out. Waiting for processes to exit. Sep 8 23:50:08.255759 systemd[1]: sshd@12-10.0.0.118:22-10.0.0.1:60356.service: Deactivated successfully. Sep 8 23:50:08.257688 systemd[1]: session-13.scope: Deactivated successfully. Sep 8 23:50:08.260759 systemd-logind[1517]: Removed session 13. Sep 8 23:50:13.272559 systemd[1]: Started sshd@13-10.0.0.118:22-10.0.0.1:53816.service - OpenSSH per-connection server daemon (10.0.0.1:53816). Sep 8 23:50:13.331206 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 53816 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:50:13.332707 sshd-session[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:13.336744 systemd-logind[1517]: New session 14 of user core. Sep 8 23:50:13.344665 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 8 23:50:13.456522 sshd[4138]: Connection closed by 10.0.0.1 port 53816 Sep 8 23:50:13.456730 sshd-session[4135]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:13.460183 systemd[1]: sshd@13-10.0.0.118:22-10.0.0.1:53816.service: Deactivated successfully. Sep 8 23:50:13.462973 systemd[1]: session-14.scope: Deactivated successfully. Sep 8 23:50:13.463824 systemd-logind[1517]: Session 14 logged out. Waiting for processes to exit. Sep 8 23:50:13.465612 systemd-logind[1517]: Removed session 14. Sep 8 23:50:18.479856 systemd[1]: Started sshd@14-10.0.0.118:22-10.0.0.1:53820.service - OpenSSH per-connection server daemon (10.0.0.1:53820). 
Sep 8 23:50:18.555719 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 53820 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:50:18.557615 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:18.561405 systemd-logind[1517]: New session 15 of user core. Sep 8 23:50:18.571646 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 8 23:50:18.683519 sshd[4154]: Connection closed by 10.0.0.1 port 53820 Sep 8 23:50:18.683342 sshd-session[4151]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:18.699418 systemd[1]: sshd@14-10.0.0.118:22-10.0.0.1:53820.service: Deactivated successfully. Sep 8 23:50:18.701169 systemd[1]: session-15.scope: Deactivated successfully. Sep 8 23:50:18.702975 systemd-logind[1517]: Session 15 logged out. Waiting for processes to exit. Sep 8 23:50:18.703893 systemd[1]: Started sshd@15-10.0.0.118:22-10.0.0.1:53822.service - OpenSSH per-connection server daemon (10.0.0.1:53822). Sep 8 23:50:18.705008 systemd-logind[1517]: Removed session 15. Sep 8 23:50:18.761811 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 53822 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:50:18.763081 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:18.767143 systemd-logind[1517]: New session 16 of user core. Sep 8 23:50:18.776621 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 8 23:50:19.023736 sshd[4171]: Connection closed by 10.0.0.1 port 53822 Sep 8 23:50:19.023970 sshd-session[4168]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:19.036269 systemd[1]: sshd@15-10.0.0.118:22-10.0.0.1:53822.service: Deactivated successfully. Sep 8 23:50:19.038055 systemd[1]: session-16.scope: Deactivated successfully. Sep 8 23:50:19.038953 systemd-logind[1517]: Session 16 logged out. Waiting for processes to exit. 
Sep 8 23:50:19.043189 systemd[1]: Started sshd@16-10.0.0.118:22-10.0.0.1:53836.service - OpenSSH per-connection server daemon (10.0.0.1:53836). Sep 8 23:50:19.044398 systemd-logind[1517]: Removed session 16. Sep 8 23:50:19.106054 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 53836 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:50:19.107333 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:19.111384 systemd-logind[1517]: New session 17 of user core. Sep 8 23:50:19.118628 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 8 23:50:19.748635 sshd[4185]: Connection closed by 10.0.0.1 port 53836 Sep 8 23:50:19.749280 sshd-session[4182]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:19.757925 systemd[1]: sshd@16-10.0.0.118:22-10.0.0.1:53836.service: Deactivated successfully. Sep 8 23:50:19.761258 systemd[1]: session-17.scope: Deactivated successfully. Sep 8 23:50:19.763265 systemd-logind[1517]: Session 17 logged out. Waiting for processes to exit. Sep 8 23:50:19.766522 systemd[1]: Started sshd@17-10.0.0.118:22-10.0.0.1:53840.service - OpenSSH per-connection server daemon (10.0.0.1:53840). Sep 8 23:50:19.769098 systemd-logind[1517]: Removed session 17. Sep 8 23:50:19.824669 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 53840 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:50:19.825937 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:19.829851 systemd-logind[1517]: New session 18 of user core. Sep 8 23:50:19.838622 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 8 23:50:20.062020 sshd[4209]: Connection closed by 10.0.0.1 port 53840 Sep 8 23:50:20.063766 sshd-session[4205]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:20.074332 systemd[1]: sshd@17-10.0.0.118:22-10.0.0.1:53840.service: Deactivated successfully. 
Sep 8 23:50:20.076850 systemd[1]: session-18.scope: Deactivated successfully. Sep 8 23:50:20.079332 systemd-logind[1517]: Session 18 logged out. Waiting for processes to exit. Sep 8 23:50:20.081634 systemd[1]: Started sshd@18-10.0.0.118:22-10.0.0.1:56202.service - OpenSSH per-connection server daemon (10.0.0.1:56202). Sep 8 23:50:20.082926 systemd-logind[1517]: Removed session 18. Sep 8 23:50:20.141495 sshd[4220]: Accepted publickey for core from 10.0.0.1 port 56202 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:50:20.142872 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:20.147519 systemd-logind[1517]: New session 19 of user core. Sep 8 23:50:20.158639 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 8 23:50:20.270882 sshd[4223]: Connection closed by 10.0.0.1 port 56202 Sep 8 23:50:20.271503 sshd-session[4220]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:20.274791 systemd[1]: sshd@18-10.0.0.118:22-10.0.0.1:56202.service: Deactivated successfully. Sep 8 23:50:20.278056 systemd[1]: session-19.scope: Deactivated successfully. Sep 8 23:50:20.279536 systemd-logind[1517]: Session 19 logged out. Waiting for processes to exit. Sep 8 23:50:20.281095 systemd-logind[1517]: Removed session 19. Sep 8 23:50:25.286869 systemd[1]: Started sshd@19-10.0.0.118:22-10.0.0.1:56218.service - OpenSSH per-connection server daemon (10.0.0.1:56218). Sep 8 23:50:25.346563 sshd[4242]: Accepted publickey for core from 10.0.0.1 port 56218 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:50:25.347682 sshd-session[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:25.352251 systemd-logind[1517]: New session 20 of user core. Sep 8 23:50:25.366618 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 8 23:50:25.479482 sshd[4245]: Connection closed by 10.0.0.1 port 56218 Sep 8 23:50:25.480751 sshd-session[4242]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:25.484290 systemd[1]: sshd@19-10.0.0.118:22-10.0.0.1:56218.service: Deactivated successfully. Sep 8 23:50:25.485986 systemd[1]: session-20.scope: Deactivated successfully. Sep 8 23:50:25.487447 systemd-logind[1517]: Session 20 logged out. Waiting for processes to exit. Sep 8 23:50:25.488308 systemd-logind[1517]: Removed session 20. Sep 8 23:50:30.494992 systemd[1]: Started sshd@20-10.0.0.118:22-10.0.0.1:36610.service - OpenSSH per-connection server daemon (10.0.0.1:36610). Sep 8 23:50:30.562723 sshd[4260]: Accepted publickey for core from 10.0.0.1 port 36610 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:50:30.564512 sshd-session[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:30.569079 systemd-logind[1517]: New session 21 of user core. Sep 8 23:50:30.585614 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 8 23:50:30.709578 sshd[4263]: Connection closed by 10.0.0.1 port 36610 Sep 8 23:50:30.709913 sshd-session[4260]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:30.722308 systemd[1]: sshd@20-10.0.0.118:22-10.0.0.1:36610.service: Deactivated successfully. Sep 8 23:50:30.726012 systemd[1]: session-21.scope: Deactivated successfully. Sep 8 23:50:30.728001 systemd-logind[1517]: Session 21 logged out. Waiting for processes to exit. Sep 8 23:50:30.732858 systemd[1]: Started sshd@21-10.0.0.118:22-10.0.0.1:36624.service - OpenSSH per-connection server daemon (10.0.0.1:36624). Sep 8 23:50:30.736508 systemd-logind[1517]: Removed session 21. 
Sep 8 23:50:30.791602 sshd[4278]: Accepted publickey for core from 10.0.0.1 port 36624 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:50:30.792906 sshd-session[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:30.796952 systemd-logind[1517]: New session 22 of user core. Sep 8 23:50:30.808612 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 8 23:50:32.498625 containerd[1537]: time="2025-09-08T23:50:32.497706643Z" level=info msg="StopContainer for \"b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311\" with timeout 30 (s)" Sep 8 23:50:32.499963 containerd[1537]: time="2025-09-08T23:50:32.499849358Z" level=info msg="Stop container \"b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311\" with signal terminated" Sep 8 23:50:32.514859 systemd[1]: cri-containerd-b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311.scope: Deactivated successfully. Sep 8 23:50:32.516507 containerd[1537]: time="2025-09-08T23:50:32.516247481Z" level=info msg="received exit event container_id:\"b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311\" id:\"b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311\" pid:3446 exited_at:{seconds:1757375432 nanos:516029481}" Sep 8 23:50:32.516507 containerd[1537]: time="2025-09-08T23:50:32.516351760Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311\" id:\"b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311\" pid:3446 exited_at:{seconds:1757375432 nanos:516029481}" Sep 8 23:50:32.529498 containerd[1537]: time="2025-09-08T23:50:32.529022971Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775\" id:\"8f8a87ec05e31364720ad69e44d236966809a8a152195a7303f5000bdfdf327d\" pid:4309 exited_at:{seconds:1757375432 nanos:528755652}" Sep 8 
23:50:32.531183 containerd[1537]: time="2025-09-08T23:50:32.531144366Z" level=info msg="StopContainer for \"da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775\" with timeout 2 (s)" Sep 8 23:50:32.531505 containerd[1537]: time="2025-09-08T23:50:32.531475005Z" level=info msg="Stop container \"da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775\" with signal terminated" Sep 8 23:50:32.534799 containerd[1537]: time="2025-09-08T23:50:32.534757918Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 8 23:50:32.538597 systemd-networkd[1452]: lxc_health: Link DOWN Sep 8 23:50:32.538603 systemd-networkd[1452]: lxc_health: Lost carrier Sep 8 23:50:32.540245 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311-rootfs.mount: Deactivated successfully. Sep 8 23:50:32.551771 systemd[1]: cri-containerd-da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775.scope: Deactivated successfully. Sep 8 23:50:32.552050 systemd[1]: cri-containerd-da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775.scope: Consumed 6.271s CPU time, 122.8M memory peak, 128K read from disk, 12.9M written to disk. 
Sep 8 23:50:32.553622 containerd[1537]: time="2025-09-08T23:50:32.553494315Z" level=info msg="StopContainer for \"b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311\" returns successfully" Sep 8 23:50:32.554110 containerd[1537]: time="2025-09-08T23:50:32.554010433Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775\" id:\"da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775\" pid:3291 exited_at:{seconds:1757375432 nanos:553724754}" Sep 8 23:50:32.554340 containerd[1537]: time="2025-09-08T23:50:32.554212713Z" level=info msg="received exit event container_id:\"da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775\" id:\"da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775\" pid:3291 exited_at:{seconds:1757375432 nanos:553724754}" Sep 8 23:50:32.559015 containerd[1537]: time="2025-09-08T23:50:32.558974182Z" level=info msg="StopPodSandbox for \"95532255d93dcd2532e7c9b6ebae787e7e705ae6fd18451d6869a54104aebd3f\"" Sep 8 23:50:32.574483 containerd[1537]: time="2025-09-08T23:50:32.574427906Z" level=info msg="Container to stop \"b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:50:32.579551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775-rootfs.mount: Deactivated successfully. Sep 8 23:50:32.582187 systemd[1]: cri-containerd-95532255d93dcd2532e7c9b6ebae787e7e705ae6fd18451d6869a54104aebd3f.scope: Deactivated successfully. 
Sep 8 23:50:32.583693 containerd[1537]: time="2025-09-08T23:50:32.583625445Z" level=info msg="TaskExit event in podsandbox handler container_id:\"95532255d93dcd2532e7c9b6ebae787e7e705ae6fd18451d6869a54104aebd3f\" id:\"95532255d93dcd2532e7c9b6ebae787e7e705ae6fd18451d6869a54104aebd3f\" pid:2888 exit_status:137 exited_at:{seconds:1757375432 nanos:583264726}" Sep 8 23:50:32.588150 containerd[1537]: time="2025-09-08T23:50:32.587816515Z" level=info msg="StopContainer for \"da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775\" returns successfully" Sep 8 23:50:32.588376 containerd[1537]: time="2025-09-08T23:50:32.588261714Z" level=info msg="StopPodSandbox for \"bc58f0ab92ce14638652305486744b1b7a05e1eac0a4ee835d3c6e4eabefb125\"" Sep 8 23:50:32.588376 containerd[1537]: time="2025-09-08T23:50:32.588319154Z" level=info msg="Container to stop \"fd6e9d551248f32f1ce3a421c27cb3c0ba8189f87ec7d98867846881c989ded5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:50:32.588376 containerd[1537]: time="2025-09-08T23:50:32.588331434Z" level=info msg="Container to stop \"f9bdfd316904f0484565a9ba188c3715e7319c86d7ce19493ea1f0e8a0c1b0c0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:50:32.588376 containerd[1537]: time="2025-09-08T23:50:32.588340514Z" level=info msg="Container to stop \"635252e274728859729076a7b7071c3a0d2c71e1a83ad90b6423bfa64c96b378\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:50:32.588376 containerd[1537]: time="2025-09-08T23:50:32.588349234Z" level=info msg="Container to stop \"da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:50:32.588376 containerd[1537]: time="2025-09-08T23:50:32.588358354Z" level=info msg="Container to stop \"5ccdfaadba605d29e1ce8ad6847041d122e471e2a83514ef498ad2eb9f97c30a\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Sep 8 23:50:32.594342 systemd[1]: cri-containerd-bc58f0ab92ce14638652305486744b1b7a05e1eac0a4ee835d3c6e4eabefb125.scope: Deactivated successfully. Sep 8 23:50:32.611191 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95532255d93dcd2532e7c9b6ebae787e7e705ae6fd18451d6869a54104aebd3f-rootfs.mount: Deactivated successfully. Sep 8 23:50:32.613840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc58f0ab92ce14638652305486744b1b7a05e1eac0a4ee835d3c6e4eabefb125-rootfs.mount: Deactivated successfully. Sep 8 23:50:32.626969 containerd[1537]: time="2025-09-08T23:50:32.626922345Z" level=info msg="shim disconnected" id=95532255d93dcd2532e7c9b6ebae787e7e705ae6fd18451d6869a54104aebd3f namespace=k8s.io Sep 8 23:50:32.640648 containerd[1537]: time="2025-09-08T23:50:32.626961105Z" level=warning msg="cleaning up after shim disconnected" id=95532255d93dcd2532e7c9b6ebae787e7e705ae6fd18451d6869a54104aebd3f namespace=k8s.io Sep 8 23:50:32.640648 containerd[1537]: time="2025-09-08T23:50:32.640644833Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:50:32.640751 containerd[1537]: time="2025-09-08T23:50:32.637475041Z" level=info msg="shim disconnected" id=bc58f0ab92ce14638652305486744b1b7a05e1eac0a4ee835d3c6e4eabefb125 namespace=k8s.io Sep 8 23:50:32.640796 containerd[1537]: time="2025-09-08T23:50:32.640753113Z" level=warning msg="cleaning up after shim disconnected" id=bc58f0ab92ce14638652305486744b1b7a05e1eac0a4ee835d3c6e4eabefb125 namespace=k8s.io Sep 8 23:50:32.640796 containerd[1537]: time="2025-09-08T23:50:32.640787753Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:50:32.653311 containerd[1537]: time="2025-09-08T23:50:32.653253884Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc58f0ab92ce14638652305486744b1b7a05e1eac0a4ee835d3c6e4eabefb125\" id:\"bc58f0ab92ce14638652305486744b1b7a05e1eac0a4ee835d3c6e4eabefb125\" pid:2818 exit_status:137 exited_at:{seconds:1757375432 
nanos:594648580}" Sep 8 23:50:32.654993 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-95532255d93dcd2532e7c9b6ebae787e7e705ae6fd18451d6869a54104aebd3f-shm.mount: Deactivated successfully. Sep 8 23:50:32.655222 containerd[1537]: time="2025-09-08T23:50:32.655191000Z" level=info msg="TearDown network for sandbox \"95532255d93dcd2532e7c9b6ebae787e7e705ae6fd18451d6869a54104aebd3f\" successfully" Sep 8 23:50:32.655222 containerd[1537]: time="2025-09-08T23:50:32.655212160Z" level=info msg="StopPodSandbox for \"95532255d93dcd2532e7c9b6ebae787e7e705ae6fd18451d6869a54104aebd3f\" returns successfully" Sep 8 23:50:32.657117 containerd[1537]: time="2025-09-08T23:50:32.656870036Z" level=info msg="TearDown network for sandbox \"bc58f0ab92ce14638652305486744b1b7a05e1eac0a4ee835d3c6e4eabefb125\" successfully" Sep 8 23:50:32.657117 containerd[1537]: time="2025-09-08T23:50:32.656898356Z" level=info msg="StopPodSandbox for \"bc58f0ab92ce14638652305486744b1b7a05e1eac0a4ee835d3c6e4eabefb125\" returns successfully" Sep 8 23:50:32.662556 containerd[1537]: time="2025-09-08T23:50:32.662001024Z" level=info msg="received exit event sandbox_id:\"bc58f0ab92ce14638652305486744b1b7a05e1eac0a4ee835d3c6e4eabefb125\" exit_status:137 exited_at:{seconds:1757375432 nanos:594648580}" Sep 8 23:50:32.662556 containerd[1537]: time="2025-09-08T23:50:32.662072224Z" level=info msg="received exit event sandbox_id:\"95532255d93dcd2532e7c9b6ebae787e7e705ae6fd18451d6869a54104aebd3f\" exit_status:137 exited_at:{seconds:1757375432 nanos:583264726}" Sep 8 23:50:32.744305 kubelet[2668]: I0908 23:50:32.744272 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-xtables-lock\") pod \"12a4c1c2-203e-4570-9f2f-7b50858e1461\" (UID: \"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " Sep 8 23:50:32.744768 kubelet[2668]: I0908 23:50:32.744751 2668 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-bpf-maps\") pod \"12a4c1c2-203e-4570-9f2f-7b50858e1461\" (UID: \"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " Sep 8 23:50:32.744842 kubelet[2668]: I0908 23:50:32.744828 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-host-proc-sys-kernel\") pod \"12a4c1c2-203e-4570-9f2f-7b50858e1461\" (UID: \"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " Sep 8 23:50:32.744940 kubelet[2668]: I0908 23:50:32.744927 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6p7k\" (UniqueName: \"kubernetes.io/projected/12a4c1c2-203e-4570-9f2f-7b50858e1461-kube-api-access-m6p7k\") pod \"12a4c1c2-203e-4570-9f2f-7b50858e1461\" (UID: \"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " Sep 8 23:50:32.745249 kubelet[2668]: I0908 23:50:32.745004 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-lib-modules\") pod \"12a4c1c2-203e-4570-9f2f-7b50858e1461\" (UID: \"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " Sep 8 23:50:32.745249 kubelet[2668]: I0908 23:50:32.745028 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-hostproc\") pod \"12a4c1c2-203e-4570-9f2f-7b50858e1461\" (UID: \"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " Sep 8 23:50:32.745249 kubelet[2668]: I0908 23:50:32.745047 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12a4c1c2-203e-4570-9f2f-7b50858e1461-cilium-config-path\") pod \"12a4c1c2-203e-4570-9f2f-7b50858e1461\" (UID: 
\"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " Sep 8 23:50:32.745249 kubelet[2668]: I0908 23:50:32.745061 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-cilium-run\") pod \"12a4c1c2-203e-4570-9f2f-7b50858e1461\" (UID: \"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " Sep 8 23:50:32.745249 kubelet[2668]: I0908 23:50:32.745079 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/12a4c1c2-203e-4570-9f2f-7b50858e1461-hubble-tls\") pod \"12a4c1c2-203e-4570-9f2f-7b50858e1461\" (UID: \"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " Sep 8 23:50:32.745249 kubelet[2668]: I0908 23:50:32.745097 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/12a4c1c2-203e-4570-9f2f-7b50858e1461-clustermesh-secrets\") pod \"12a4c1c2-203e-4570-9f2f-7b50858e1461\" (UID: \"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " Sep 8 23:50:32.745389 kubelet[2668]: I0908 23:50:32.745112 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-cilium-cgroup\") pod \"12a4c1c2-203e-4570-9f2f-7b50858e1461\" (UID: \"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " Sep 8 23:50:32.745389 kubelet[2668]: I0908 23:50:32.745127 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-cni-path\") pod \"12a4c1c2-203e-4570-9f2f-7b50858e1461\" (UID: \"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " Sep 8 23:50:32.745389 kubelet[2668]: I0908 23:50:32.745142 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7chzg\" (UniqueName: 
\"kubernetes.io/projected/1cf12ac0-ea01-4425-9bd0-6900d49ccaf0-kube-api-access-7chzg\") pod \"1cf12ac0-ea01-4425-9bd0-6900d49ccaf0\" (UID: \"1cf12ac0-ea01-4425-9bd0-6900d49ccaf0\") " Sep 8 23:50:32.745389 kubelet[2668]: I0908 23:50:32.745157 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1cf12ac0-ea01-4425-9bd0-6900d49ccaf0-cilium-config-path\") pod \"1cf12ac0-ea01-4425-9bd0-6900d49ccaf0\" (UID: \"1cf12ac0-ea01-4425-9bd0-6900d49ccaf0\") " Sep 8 23:50:32.745389 kubelet[2668]: I0908 23:50:32.745175 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-host-proc-sys-net\") pod \"12a4c1c2-203e-4570-9f2f-7b50858e1461\" (UID: \"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " Sep 8 23:50:32.745389 kubelet[2668]: I0908 23:50:32.745191 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-etc-cni-netd\") pod \"12a4c1c2-203e-4570-9f2f-7b50858e1461\" (UID: \"12a4c1c2-203e-4570-9f2f-7b50858e1461\") " Sep 8 23:50:32.746200 kubelet[2668]: I0908 23:50:32.745991 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "12a4c1c2-203e-4570-9f2f-7b50858e1461" (UID: "12a4c1c2-203e-4570-9f2f-7b50858e1461"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:50:32.746200 kubelet[2668]: I0908 23:50:32.745993 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "12a4c1c2-203e-4570-9f2f-7b50858e1461" (UID: "12a4c1c2-203e-4570-9f2f-7b50858e1461"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:50:32.746292 kubelet[2668]: I0908 23:50:32.746214 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "12a4c1c2-203e-4570-9f2f-7b50858e1461" (UID: "12a4c1c2-203e-4570-9f2f-7b50858e1461"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:50:32.746322 kubelet[2668]: I0908 23:50:32.746298 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-hostproc" (OuterVolumeSpecName: "hostproc") pod "12a4c1c2-203e-4570-9f2f-7b50858e1461" (UID: "12a4c1c2-203e-4570-9f2f-7b50858e1461"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:50:32.746408 kubelet[2668]: I0908 23:50:32.746386 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "12a4c1c2-203e-4570-9f2f-7b50858e1461" (UID: "12a4c1c2-203e-4570-9f2f-7b50858e1461"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:50:32.751229 kubelet[2668]: I0908 23:50:32.750550 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12a4c1c2-203e-4570-9f2f-7b50858e1461-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "12a4c1c2-203e-4570-9f2f-7b50858e1461" (UID: "12a4c1c2-203e-4570-9f2f-7b50858e1461"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 8 23:50:32.751946 kubelet[2668]: I0908 23:50:32.751845 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12a4c1c2-203e-4570-9f2f-7b50858e1461-kube-api-access-m6p7k" (OuterVolumeSpecName: "kube-api-access-m6p7k") pod "12a4c1c2-203e-4570-9f2f-7b50858e1461" (UID: "12a4c1c2-203e-4570-9f2f-7b50858e1461"). InnerVolumeSpecName "kube-api-access-m6p7k". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 8 23:50:32.751946 kubelet[2668]: I0908 23:50:32.751898 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "12a4c1c2-203e-4570-9f2f-7b50858e1461" (UID: "12a4c1c2-203e-4570-9f2f-7b50858e1461"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:50:32.752034 kubelet[2668]: I0908 23:50:32.751952 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12a4c1c2-203e-4570-9f2f-7b50858e1461-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "12a4c1c2-203e-4570-9f2f-7b50858e1461" (UID: "12a4c1c2-203e-4570-9f2f-7b50858e1461"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 8 23:50:32.752034 kubelet[2668]: I0908 23:50:32.751990 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "12a4c1c2-203e-4570-9f2f-7b50858e1461" (UID: "12a4c1c2-203e-4570-9f2f-7b50858e1461"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:50:32.752034 kubelet[2668]: I0908 23:50:32.752008 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "12a4c1c2-203e-4570-9f2f-7b50858e1461" (UID: "12a4c1c2-203e-4570-9f2f-7b50858e1461"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:50:32.752034 kubelet[2668]: I0908 23:50:32.752023 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-cni-path" (OuterVolumeSpecName: "cni-path") pod "12a4c1c2-203e-4570-9f2f-7b50858e1461" (UID: "12a4c1c2-203e-4570-9f2f-7b50858e1461"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:50:32.752113 kubelet[2668]: I0908 23:50:32.752036 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "12a4c1c2-203e-4570-9f2f-7b50858e1461" (UID: "12a4c1c2-203e-4570-9f2f-7b50858e1461"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:50:32.752945 kubelet[2668]: I0908 23:50:32.752906 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12a4c1c2-203e-4570-9f2f-7b50858e1461-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "12a4c1c2-203e-4570-9f2f-7b50858e1461" (UID: "12a4c1c2-203e-4570-9f2f-7b50858e1461"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 8 23:50:32.753658 kubelet[2668]: I0908 23:50:32.753630 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cf12ac0-ea01-4425-9bd0-6900d49ccaf0-kube-api-access-7chzg" (OuterVolumeSpecName: "kube-api-access-7chzg") pod "1cf12ac0-ea01-4425-9bd0-6900d49ccaf0" (UID: "1cf12ac0-ea01-4425-9bd0-6900d49ccaf0"). InnerVolumeSpecName "kube-api-access-7chzg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 8 23:50:32.753896 kubelet[2668]: I0908 23:50:32.753868 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cf12ac0-ea01-4425-9bd0-6900d49ccaf0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1cf12ac0-ea01-4425-9bd0-6900d49ccaf0" (UID: "1cf12ac0-ea01-4425-9bd0-6900d49ccaf0"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 8 23:50:32.845654 kubelet[2668]: I0908 23:50:32.845601 2668 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:32.845654 kubelet[2668]: I0908 23:50:32.845635 2668 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12a4c1c2-203e-4570-9f2f-7b50858e1461-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:32.845654 kubelet[2668]: I0908 23:50:32.845646 2668 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:32.845654 kubelet[2668]: I0908 23:50:32.845682 2668 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/12a4c1c2-203e-4570-9f2f-7b50858e1461-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:32.845852 kubelet[2668]: I0908 23:50:32.845704 2668 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/12a4c1c2-203e-4570-9f2f-7b50858e1461-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:32.845852 kubelet[2668]: I0908 23:50:32.845711 2668 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:32.845852 kubelet[2668]: I0908 23:50:32.845719 2668 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:32.845852 kubelet[2668]: I0908 23:50:32.845726 2668 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7chzg\" (UniqueName: \"kubernetes.io/projected/1cf12ac0-ea01-4425-9bd0-6900d49ccaf0-kube-api-access-7chzg\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:32.845852 kubelet[2668]: I0908 23:50:32.845734 2668 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1cf12ac0-ea01-4425-9bd0-6900d49ccaf0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:32.845852 kubelet[2668]: I0908 23:50:32.845742 2668 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:32.845852 kubelet[2668]: I0908 23:50:32.845750 2668 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:32.845852 kubelet[2668]: I0908 23:50:32.845759 2668 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:32.846056 kubelet[2668]: I0908 23:50:32.845766 2668 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:32.846056 kubelet[2668]: I0908 23:50:32.845773 2668 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:32.846056 kubelet[2668]: I0908 23:50:32.845781 2668 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m6p7k\" 
(UniqueName: \"kubernetes.io/projected/12a4c1c2-203e-4570-9f2f-7b50858e1461-kube-api-access-m6p7k\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:32.846056 kubelet[2668]: I0908 23:50:32.845789 2668 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12a4c1c2-203e-4570-9f2f-7b50858e1461-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 8 23:50:32.919359 systemd[1]: Removed slice kubepods-besteffort-pod1cf12ac0_ea01_4425_9bd0_6900d49ccaf0.slice - libcontainer container kubepods-besteffort-pod1cf12ac0_ea01_4425_9bd0_6900d49ccaf0.slice. Sep 8 23:50:32.922800 kubelet[2668]: I0908 23:50:32.922767 2668 scope.go:117] "RemoveContainer" containerID="b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311" Sep 8 23:50:32.926279 containerd[1537]: time="2025-09-08T23:50:32.925807415Z" level=info msg="RemoveContainer for \"b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311\"" Sep 8 23:50:32.929736 systemd[1]: Removed slice kubepods-burstable-pod12a4c1c2_203e_4570_9f2f_7b50858e1461.slice - libcontainer container kubepods-burstable-pod12a4c1c2_203e_4570_9f2f_7b50858e1461.slice. Sep 8 23:50:32.929956 systemd[1]: kubepods-burstable-pod12a4c1c2_203e_4570_9f2f_7b50858e1461.slice: Consumed 6.371s CPU time, 123.1M memory peak, 132K read from disk, 12.9M written to disk. 
Sep 8 23:50:32.933067 containerd[1537]: time="2025-09-08T23:50:32.933033559Z" level=info msg="RemoveContainer for \"b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311\" returns successfully" Sep 8 23:50:32.933359 kubelet[2668]: I0908 23:50:32.933297 2668 scope.go:117] "RemoveContainer" containerID="b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311" Sep 8 23:50:32.933888 containerd[1537]: time="2025-09-08T23:50:32.933851597Z" level=error msg="ContainerStatus for \"b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311\": not found" Sep 8 23:50:32.937122 kubelet[2668]: E0908 23:50:32.937085 2668 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311\": not found" containerID="b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311" Sep 8 23:50:32.937200 kubelet[2668]: I0908 23:50:32.937133 2668 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311"} err="failed to get container status \"b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311\": rpc error: code = NotFound desc = an error occurred when try to find container \"b193be5ba161dc9361d5de1c4d417662553121e06a8ae2eacb9d1a3f09826311\": not found" Sep 8 23:50:32.937200 kubelet[2668]: I0908 23:50:32.937168 2668 scope.go:117] "RemoveContainer" containerID="da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775" Sep 8 23:50:32.939905 containerd[1537]: time="2025-09-08T23:50:32.939864983Z" level=info msg="RemoveContainer for \"da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775\"" Sep 8 23:50:32.946181 
containerd[1537]: time="2025-09-08T23:50:32.946148249Z" level=info msg="RemoveContainer for \"da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775\" returns successfully" Sep 8 23:50:32.946355 kubelet[2668]: I0908 23:50:32.946335 2668 scope.go:117] "RemoveContainer" containerID="fd6e9d551248f32f1ce3a421c27cb3c0ba8189f87ec7d98867846881c989ded5" Sep 8 23:50:32.948746 containerd[1537]: time="2025-09-08T23:50:32.948675963Z" level=info msg="RemoveContainer for \"fd6e9d551248f32f1ce3a421c27cb3c0ba8189f87ec7d98867846881c989ded5\"" Sep 8 23:50:32.953120 containerd[1537]: time="2025-09-08T23:50:32.953090313Z" level=info msg="RemoveContainer for \"fd6e9d551248f32f1ce3a421c27cb3c0ba8189f87ec7d98867846881c989ded5\" returns successfully" Sep 8 23:50:32.953383 kubelet[2668]: I0908 23:50:32.953283 2668 scope.go:117] "RemoveContainer" containerID="5ccdfaadba605d29e1ce8ad6847041d122e471e2a83514ef498ad2eb9f97c30a" Sep 8 23:50:32.955668 containerd[1537]: time="2025-09-08T23:50:32.955637867Z" level=info msg="RemoveContainer for \"5ccdfaadba605d29e1ce8ad6847041d122e471e2a83514ef498ad2eb9f97c30a\"" Sep 8 23:50:32.959087 containerd[1537]: time="2025-09-08T23:50:32.959018259Z" level=info msg="RemoveContainer for \"5ccdfaadba605d29e1ce8ad6847041d122e471e2a83514ef498ad2eb9f97c30a\" returns successfully" Sep 8 23:50:32.959242 kubelet[2668]: I0908 23:50:32.959178 2668 scope.go:117] "RemoveContainer" containerID="635252e274728859729076a7b7071c3a0d2c71e1a83ad90b6423bfa64c96b378" Sep 8 23:50:32.960733 containerd[1537]: time="2025-09-08T23:50:32.960709095Z" level=info msg="RemoveContainer for \"635252e274728859729076a7b7071c3a0d2c71e1a83ad90b6423bfa64c96b378\"" Sep 8 23:50:32.963804 containerd[1537]: time="2025-09-08T23:50:32.963722248Z" level=info msg="RemoveContainer for \"635252e274728859729076a7b7071c3a0d2c71e1a83ad90b6423bfa64c96b378\" returns successfully" Sep 8 23:50:32.963959 kubelet[2668]: I0908 23:50:32.963865 2668 scope.go:117] "RemoveContainer" 
containerID="f9bdfd316904f0484565a9ba188c3715e7319c86d7ce19493ea1f0e8a0c1b0c0" Sep 8 23:50:32.965323 containerd[1537]: time="2025-09-08T23:50:32.965294444Z" level=info msg="RemoveContainer for \"f9bdfd316904f0484565a9ba188c3715e7319c86d7ce19493ea1f0e8a0c1b0c0\"" Sep 8 23:50:32.975017 containerd[1537]: time="2025-09-08T23:50:32.974980542Z" level=info msg="RemoveContainer for \"f9bdfd316904f0484565a9ba188c3715e7319c86d7ce19493ea1f0e8a0c1b0c0\" returns successfully" Sep 8 23:50:32.975164 kubelet[2668]: I0908 23:50:32.975143 2668 scope.go:117] "RemoveContainer" containerID="da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775" Sep 8 23:50:32.975423 containerd[1537]: time="2025-09-08T23:50:32.975381101Z" level=error msg="ContainerStatus for \"da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775\": not found" Sep 8 23:50:32.975549 kubelet[2668]: E0908 23:50:32.975530 2668 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775\": not found" containerID="da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775" Sep 8 23:50:32.975593 kubelet[2668]: I0908 23:50:32.975555 2668 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775"} err="failed to get container status \"da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775\": rpc error: code = NotFound desc = an error occurred when try to find container \"da73abce07a6eeb658956f42819ceca0bcd291678c491d6fa63f04927358a775\": not found" Sep 8 23:50:32.975593 kubelet[2668]: I0908 23:50:32.975574 2668 scope.go:117] "RemoveContainer" 
containerID="fd6e9d551248f32f1ce3a421c27cb3c0ba8189f87ec7d98867846881c989ded5" Sep 8 23:50:32.975802 containerd[1537]: time="2025-09-08T23:50:32.975713260Z" level=error msg="ContainerStatus for \"fd6e9d551248f32f1ce3a421c27cb3c0ba8189f87ec7d98867846881c989ded5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fd6e9d551248f32f1ce3a421c27cb3c0ba8189f87ec7d98867846881c989ded5\": not found" Sep 8 23:50:32.976029 kubelet[2668]: E0908 23:50:32.975976 2668 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fd6e9d551248f32f1ce3a421c27cb3c0ba8189f87ec7d98867846881c989ded5\": not found" containerID="fd6e9d551248f32f1ce3a421c27cb3c0ba8189f87ec7d98867846881c989ded5" Sep 8 23:50:32.976092 kubelet[2668]: I0908 23:50:32.976003 2668 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fd6e9d551248f32f1ce3a421c27cb3c0ba8189f87ec7d98867846881c989ded5"} err="failed to get container status \"fd6e9d551248f32f1ce3a421c27cb3c0ba8189f87ec7d98867846881c989ded5\": rpc error: code = NotFound desc = an error occurred when try to find container \"fd6e9d551248f32f1ce3a421c27cb3c0ba8189f87ec7d98867846881c989ded5\": not found" Sep 8 23:50:32.976092 kubelet[2668]: I0908 23:50:32.976117 2668 scope.go:117] "RemoveContainer" containerID="5ccdfaadba605d29e1ce8ad6847041d122e471e2a83514ef498ad2eb9f97c30a" Sep 8 23:50:32.976585 containerd[1537]: time="2025-09-08T23:50:32.976516578Z" level=error msg="ContainerStatus for \"5ccdfaadba605d29e1ce8ad6847041d122e471e2a83514ef498ad2eb9f97c30a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5ccdfaadba605d29e1ce8ad6847041d122e471e2a83514ef498ad2eb9f97c30a\": not found" Sep 8 23:50:32.976768 kubelet[2668]: E0908 23:50:32.976725 2668 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"5ccdfaadba605d29e1ce8ad6847041d122e471e2a83514ef498ad2eb9f97c30a\": not found" containerID="5ccdfaadba605d29e1ce8ad6847041d122e471e2a83514ef498ad2eb9f97c30a" Sep 8 23:50:32.976855 kubelet[2668]: I0908 23:50:32.976749 2668 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5ccdfaadba605d29e1ce8ad6847041d122e471e2a83514ef498ad2eb9f97c30a"} err="failed to get container status \"5ccdfaadba605d29e1ce8ad6847041d122e471e2a83514ef498ad2eb9f97c30a\": rpc error: code = NotFound desc = an error occurred when try to find container \"5ccdfaadba605d29e1ce8ad6847041d122e471e2a83514ef498ad2eb9f97c30a\": not found" Sep 8 23:50:32.976985 kubelet[2668]: I0908 23:50:32.976904 2668 scope.go:117] "RemoveContainer" containerID="635252e274728859729076a7b7071c3a0d2c71e1a83ad90b6423bfa64c96b378" Sep 8 23:50:32.977096 containerd[1537]: time="2025-09-08T23:50:32.977063577Z" level=error msg="ContainerStatus for \"635252e274728859729076a7b7071c3a0d2c71e1a83ad90b6423bfa64c96b378\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"635252e274728859729076a7b7071c3a0d2c71e1a83ad90b6423bfa64c96b378\": not found" Sep 8 23:50:32.977202 kubelet[2668]: E0908 23:50:32.977181 2668 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"635252e274728859729076a7b7071c3a0d2c71e1a83ad90b6423bfa64c96b378\": not found" containerID="635252e274728859729076a7b7071c3a0d2c71e1a83ad90b6423bfa64c96b378" Sep 8 23:50:32.977248 kubelet[2668]: I0908 23:50:32.977203 2668 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"635252e274728859729076a7b7071c3a0d2c71e1a83ad90b6423bfa64c96b378"} err="failed to get container status \"635252e274728859729076a7b7071c3a0d2c71e1a83ad90b6423bfa64c96b378\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"635252e274728859729076a7b7071c3a0d2c71e1a83ad90b6423bfa64c96b378\": not found" Sep 8 23:50:32.977248 kubelet[2668]: I0908 23:50:32.977217 2668 scope.go:117] "RemoveContainer" containerID="f9bdfd316904f0484565a9ba188c3715e7319c86d7ce19493ea1f0e8a0c1b0c0" Sep 8 23:50:32.977380 containerd[1537]: time="2025-09-08T23:50:32.977356417Z" level=error msg="ContainerStatus for \"f9bdfd316904f0484565a9ba188c3715e7319c86d7ce19493ea1f0e8a0c1b0c0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f9bdfd316904f0484565a9ba188c3715e7319c86d7ce19493ea1f0e8a0c1b0c0\": not found" Sep 8 23:50:32.977556 kubelet[2668]: E0908 23:50:32.977440 2668 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f9bdfd316904f0484565a9ba188c3715e7319c86d7ce19493ea1f0e8a0c1b0c0\": not found" containerID="f9bdfd316904f0484565a9ba188c3715e7319c86d7ce19493ea1f0e8a0c1b0c0" Sep 8 23:50:32.977556 kubelet[2668]: I0908 23:50:32.977482 2668 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f9bdfd316904f0484565a9ba188c3715e7319c86d7ce19493ea1f0e8a0c1b0c0"} err="failed to get container status \"f9bdfd316904f0484565a9ba188c3715e7319c86d7ce19493ea1f0e8a0c1b0c0\": rpc error: code = NotFound desc = an error occurred when try to find container \"f9bdfd316904f0484565a9ba188c3715e7319c86d7ce19493ea1f0e8a0c1b0c0\": not found" Sep 8 23:50:33.540010 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bc58f0ab92ce14638652305486744b1b7a05e1eac0a4ee835d3c6e4eabefb125-shm.mount: Deactivated successfully. Sep 8 23:50:33.540413 systemd[1]: var-lib-kubelet-pods-1cf12ac0\x2dea01\x2d4425\x2d9bd0\x2d6900d49ccaf0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7chzg.mount: Deactivated successfully. 
Sep 8 23:50:33.540602 systemd[1]: var-lib-kubelet-pods-12a4c1c2\x2d203e\x2d4570\x2d9f2f\x2d7b50858e1461-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm6p7k.mount: Deactivated successfully. Sep 8 23:50:33.540726 systemd[1]: var-lib-kubelet-pods-12a4c1c2\x2d203e\x2d4570\x2d9f2f\x2d7b50858e1461-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 8 23:50:33.540848 systemd[1]: var-lib-kubelet-pods-12a4c1c2\x2d203e\x2d4570\x2d9f2f\x2d7b50858e1461-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 8 23:50:33.712384 kubelet[2668]: I0908 23:50:33.711692 2668 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12a4c1c2-203e-4570-9f2f-7b50858e1461" path="/var/lib/kubelet/pods/12a4c1c2-203e-4570-9f2f-7b50858e1461/volumes" Sep 8 23:50:33.712384 kubelet[2668]: I0908 23:50:33.712179 2668 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cf12ac0-ea01-4425-9bd0-6900d49ccaf0" path="/var/lib/kubelet/pods/1cf12ac0-ea01-4425-9bd0-6900d49ccaf0/volumes" Sep 8 23:50:33.761293 kubelet[2668]: E0908 23:50:33.761203 2668 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 8 23:50:34.454641 sshd[4281]: Connection closed by 10.0.0.1 port 36624 Sep 8 23:50:34.455110 sshd-session[4278]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:34.465876 systemd[1]: sshd@21-10.0.0.118:22-10.0.0.1:36624.service: Deactivated successfully. Sep 8 23:50:34.467626 systemd[1]: session-22.scope: Deactivated successfully. Sep 8 23:50:34.467805 systemd[1]: session-22.scope: Consumed 1.018s CPU time, 23.7M memory peak. Sep 8 23:50:34.469071 systemd-logind[1517]: Session 22 logged out. Waiting for processes to exit. 
Sep 8 23:50:34.470486 systemd[1]: Started sshd@22-10.0.0.118:22-10.0.0.1:36640.service - OpenSSH per-connection server daemon (10.0.0.1:36640). Sep 8 23:50:34.471498 systemd-logind[1517]: Removed session 22. Sep 8 23:50:34.525782 sshd[4431]: Accepted publickey for core from 10.0.0.1 port 36640 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:50:34.527073 sshd-session[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:34.531879 systemd-logind[1517]: New session 23 of user core. Sep 8 23:50:34.541642 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 8 23:50:34.879253 kubelet[2668]: I0908 23:50:34.879155 2668 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-08T23:50:34Z","lastTransitionTime":"2025-09-08T23:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 8 23:50:35.708998 kubelet[2668]: E0908 23:50:35.708955 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:36.047028 sshd[4434]: Connection closed by 10.0.0.1 port 36640 Sep 8 23:50:36.048320 sshd-session[4431]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:36.061621 systemd[1]: sshd@22-10.0.0.118:22-10.0.0.1:36640.service: Deactivated successfully. Sep 8 23:50:36.064407 systemd[1]: session-23.scope: Deactivated successfully. Sep 8 23:50:36.066351 systemd[1]: session-23.scope: Consumed 1.412s CPU time, 26.4M memory peak. Sep 8 23:50:36.068823 systemd-logind[1517]: Session 23 logged out. Waiting for processes to exit. 
Sep 8 23:50:36.073762 systemd[1]: Started sshd@23-10.0.0.118:22-10.0.0.1:36654.service - OpenSSH per-connection server daemon (10.0.0.1:36654). Sep 8 23:50:36.076984 systemd-logind[1517]: Removed session 23. Sep 8 23:50:36.091627 systemd[1]: Created slice kubepods-burstable-pod3a005de7_faee_4c4a_91bb_b835ae2a863e.slice - libcontainer container kubepods-burstable-pod3a005de7_faee_4c4a_91bb_b835ae2a863e.slice. Sep 8 23:50:36.144170 sshd[4446]: Accepted publickey for core from 10.0.0.1 port 36654 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:50:36.147403 sshd-session[4446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:36.151408 systemd-logind[1517]: New session 24 of user core. Sep 8 23:50:36.166651 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 8 23:50:36.167091 kubelet[2668]: I0908 23:50:36.167049 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3a005de7-faee-4c4a-91bb-b835ae2a863e-hostproc\") pod \"cilium-6lhhv\" (UID: \"3a005de7-faee-4c4a-91bb-b835ae2a863e\") " pod="kube-system/cilium-6lhhv" Sep 8 23:50:36.167091 kubelet[2668]: I0908 23:50:36.167087 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3a005de7-faee-4c4a-91bb-b835ae2a863e-cni-path\") pod \"cilium-6lhhv\" (UID: \"3a005de7-faee-4c4a-91bb-b835ae2a863e\") " pod="kube-system/cilium-6lhhv" Sep 8 23:50:36.167374 kubelet[2668]: I0908 23:50:36.167106 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a005de7-faee-4c4a-91bb-b835ae2a863e-lib-modules\") pod \"cilium-6lhhv\" (UID: \"3a005de7-faee-4c4a-91bb-b835ae2a863e\") " pod="kube-system/cilium-6lhhv" Sep 8 23:50:36.167374 kubelet[2668]: I0908 23:50:36.167123 2668 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3a005de7-faee-4c4a-91bb-b835ae2a863e-cilium-run\") pod \"cilium-6lhhv\" (UID: \"3a005de7-faee-4c4a-91bb-b835ae2a863e\") " pod="kube-system/cilium-6lhhv" Sep 8 23:50:36.167374 kubelet[2668]: I0908 23:50:36.167140 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3a005de7-faee-4c4a-91bb-b835ae2a863e-bpf-maps\") pod \"cilium-6lhhv\" (UID: \"3a005de7-faee-4c4a-91bb-b835ae2a863e\") " pod="kube-system/cilium-6lhhv" Sep 8 23:50:36.167374 kubelet[2668]: I0908 23:50:36.167155 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3a005de7-faee-4c4a-91bb-b835ae2a863e-host-proc-sys-kernel\") pod \"cilium-6lhhv\" (UID: \"3a005de7-faee-4c4a-91bb-b835ae2a863e\") " pod="kube-system/cilium-6lhhv" Sep 8 23:50:36.167374 kubelet[2668]: I0908 23:50:36.167170 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3a005de7-faee-4c4a-91bb-b835ae2a863e-hubble-tls\") pod \"cilium-6lhhv\" (UID: \"3a005de7-faee-4c4a-91bb-b835ae2a863e\") " pod="kube-system/cilium-6lhhv" Sep 8 23:50:36.167374 kubelet[2668]: I0908 23:50:36.167187 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3a005de7-faee-4c4a-91bb-b835ae2a863e-cilium-config-path\") pod \"cilium-6lhhv\" (UID: \"3a005de7-faee-4c4a-91bb-b835ae2a863e\") " pod="kube-system/cilium-6lhhv" Sep 8 23:50:36.167537 kubelet[2668]: I0908 23:50:36.167202 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/3a005de7-faee-4c4a-91bb-b835ae2a863e-cilium-ipsec-secrets\") pod \"cilium-6lhhv\" (UID: \"3a005de7-faee-4c4a-91bb-b835ae2a863e\") " pod="kube-system/cilium-6lhhv" Sep 8 23:50:36.167537 kubelet[2668]: I0908 23:50:36.167227 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3a005de7-faee-4c4a-91bb-b835ae2a863e-host-proc-sys-net\") pod \"cilium-6lhhv\" (UID: \"3a005de7-faee-4c4a-91bb-b835ae2a863e\") " pod="kube-system/cilium-6lhhv" Sep 8 23:50:36.167537 kubelet[2668]: I0908 23:50:36.167245 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3a005de7-faee-4c4a-91bb-b835ae2a863e-etc-cni-netd\") pod \"cilium-6lhhv\" (UID: \"3a005de7-faee-4c4a-91bb-b835ae2a863e\") " pod="kube-system/cilium-6lhhv" Sep 8 23:50:36.167537 kubelet[2668]: I0908 23:50:36.167260 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3a005de7-faee-4c4a-91bb-b835ae2a863e-cilium-cgroup\") pod \"cilium-6lhhv\" (UID: \"3a005de7-faee-4c4a-91bb-b835ae2a863e\") " pod="kube-system/cilium-6lhhv" Sep 8 23:50:36.167537 kubelet[2668]: I0908 23:50:36.167281 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a005de7-faee-4c4a-91bb-b835ae2a863e-xtables-lock\") pod \"cilium-6lhhv\" (UID: \"3a005de7-faee-4c4a-91bb-b835ae2a863e\") " pod="kube-system/cilium-6lhhv" Sep 8 23:50:36.167537 kubelet[2668]: I0908 23:50:36.167296 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3a005de7-faee-4c4a-91bb-b835ae2a863e-clustermesh-secrets\") pod \"cilium-6lhhv\" (UID: 
\"3a005de7-faee-4c4a-91bb-b835ae2a863e\") " pod="kube-system/cilium-6lhhv" Sep 8 23:50:36.167653 kubelet[2668]: I0908 23:50:36.167312 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjnts\" (UniqueName: \"kubernetes.io/projected/3a005de7-faee-4c4a-91bb-b835ae2a863e-kube-api-access-bjnts\") pod \"cilium-6lhhv\" (UID: \"3a005de7-faee-4c4a-91bb-b835ae2a863e\") " pod="kube-system/cilium-6lhhv" Sep 8 23:50:36.218365 sshd[4449]: Connection closed by 10.0.0.1 port 36654 Sep 8 23:50:36.219716 sshd-session[4446]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:36.225760 systemd[1]: sshd@23-10.0.0.118:22-10.0.0.1:36654.service: Deactivated successfully. Sep 8 23:50:36.228686 systemd[1]: session-24.scope: Deactivated successfully. Sep 8 23:50:36.229616 systemd-logind[1517]: Session 24 logged out. Waiting for processes to exit. Sep 8 23:50:36.232657 systemd[1]: Started sshd@24-10.0.0.118:22-10.0.0.1:36668.service - OpenSSH per-connection server daemon (10.0.0.1:36668). Sep 8 23:50:36.233151 systemd-logind[1517]: Removed session 24. Sep 8 23:50:36.306020 sshd[4456]: Accepted publickey for core from 10.0.0.1 port 36668 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:50:36.307492 sshd-session[4456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:50:36.311586 systemd-logind[1517]: New session 25 of user core. Sep 8 23:50:36.321684 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 8 23:50:36.397114 kubelet[2668]: E0908 23:50:36.397066 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:36.397708 containerd[1537]: time="2025-09-08T23:50:36.397663877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6lhhv,Uid:3a005de7-faee-4c4a-91bb-b835ae2a863e,Namespace:kube-system,Attempt:0,}" Sep 8 23:50:36.419146 containerd[1537]: time="2025-09-08T23:50:36.419095592Z" level=info msg="connecting to shim 2e2698f0bc27a46667d9f4f138f5784d7bef061eade7053678f46a4c386a7cc9" address="unix:///run/containerd/s/c4708086a25e0880a7a9399306266f11b7ed6691b471602f6ecff8a67fd3a572" namespace=k8s.io protocol=ttrpc version=3 Sep 8 23:50:36.448695 systemd[1]: Started cri-containerd-2e2698f0bc27a46667d9f4f138f5784d7bef061eade7053678f46a4c386a7cc9.scope - libcontainer container 2e2698f0bc27a46667d9f4f138f5784d7bef061eade7053678f46a4c386a7cc9. 
Sep 8 23:50:36.471903 containerd[1537]: time="2025-09-08T23:50:36.471847921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6lhhv,Uid:3a005de7-faee-4c4a-91bb-b835ae2a863e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e2698f0bc27a46667d9f4f138f5784d7bef061eade7053678f46a4c386a7cc9\"" Sep 8 23:50:36.473025 kubelet[2668]: E0908 23:50:36.472557 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:36.481120 containerd[1537]: time="2025-09-08T23:50:36.480989462Z" level=info msg="CreateContainer within sandbox \"2e2698f0bc27a46667d9f4f138f5784d7bef061eade7053678f46a4c386a7cc9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 8 23:50:36.486956 containerd[1537]: time="2025-09-08T23:50:36.486904369Z" level=info msg="Container 900fb73c289cc028cf2b44db4773695acab50f19aef97967aa2620b4817c3d2b: CDI devices from CRI Config.CDIDevices: []" Sep 8 23:50:36.493852 containerd[1537]: time="2025-09-08T23:50:36.493798315Z" level=info msg="CreateContainer within sandbox \"2e2698f0bc27a46667d9f4f138f5784d7bef061eade7053678f46a4c386a7cc9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"900fb73c289cc028cf2b44db4773695acab50f19aef97967aa2620b4817c3d2b\"" Sep 8 23:50:36.494364 containerd[1537]: time="2025-09-08T23:50:36.494337274Z" level=info msg="StartContainer for \"900fb73c289cc028cf2b44db4773695acab50f19aef97967aa2620b4817c3d2b\"" Sep 8 23:50:36.495768 containerd[1537]: time="2025-09-08T23:50:36.495522031Z" level=info msg="connecting to shim 900fb73c289cc028cf2b44db4773695acab50f19aef97967aa2620b4817c3d2b" address="unix:///run/containerd/s/c4708086a25e0880a7a9399306266f11b7ed6691b471602f6ecff8a67fd3a572" protocol=ttrpc version=3 Sep 8 23:50:36.524757 systemd[1]: Started cri-containerd-900fb73c289cc028cf2b44db4773695acab50f19aef97967aa2620b4817c3d2b.scope - libcontainer container 
900fb73c289cc028cf2b44db4773695acab50f19aef97967aa2620b4817c3d2b. Sep 8 23:50:36.549439 containerd[1537]: time="2025-09-08T23:50:36.549399478Z" level=info msg="StartContainer for \"900fb73c289cc028cf2b44db4773695acab50f19aef97967aa2620b4817c3d2b\" returns successfully" Sep 8 23:50:36.556923 systemd[1]: cri-containerd-900fb73c289cc028cf2b44db4773695acab50f19aef97967aa2620b4817c3d2b.scope: Deactivated successfully. Sep 8 23:50:36.560158 containerd[1537]: time="2025-09-08T23:50:36.560115655Z" level=info msg="received exit event container_id:\"900fb73c289cc028cf2b44db4773695acab50f19aef97967aa2620b4817c3d2b\" id:\"900fb73c289cc028cf2b44db4773695acab50f19aef97967aa2620b4817c3d2b\" pid:4529 exited_at:{seconds:1757375436 nanos:559740536}" Sep 8 23:50:36.560894 containerd[1537]: time="2025-09-08T23:50:36.560543254Z" level=info msg="TaskExit event in podsandbox handler container_id:\"900fb73c289cc028cf2b44db4773695acab50f19aef97967aa2620b4817c3d2b\" id:\"900fb73c289cc028cf2b44db4773695acab50f19aef97967aa2620b4817c3d2b\" pid:4529 exited_at:{seconds:1757375436 nanos:559740536}" Sep 8 23:50:36.937136 kubelet[2668]: E0908 23:50:36.937034 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:36.949565 containerd[1537]: time="2025-09-08T23:50:36.949506596Z" level=info msg="CreateContainer within sandbox \"2e2698f0bc27a46667d9f4f138f5784d7bef061eade7053678f46a4c386a7cc9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 8 23:50:36.970531 containerd[1537]: time="2025-09-08T23:50:36.970219992Z" level=info msg="Container f6a2d24781a0890e8806fc87aea626c52e75c40973a1e943d15a4c5c3bde266a: CDI devices from CRI Config.CDIDevices: []" Sep 8 23:50:36.976305 containerd[1537]: time="2025-09-08T23:50:36.976226100Z" level=info msg="CreateContainer within sandbox \"2e2698f0bc27a46667d9f4f138f5784d7bef061eade7053678f46a4c386a7cc9\" 
for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f6a2d24781a0890e8806fc87aea626c52e75c40973a1e943d15a4c5c3bde266a\"" Sep 8 23:50:36.979806 containerd[1537]: time="2025-09-08T23:50:36.979763652Z" level=info msg="StartContainer for \"f6a2d24781a0890e8806fc87aea626c52e75c40973a1e943d15a4c5c3bde266a\"" Sep 8 23:50:36.981371 containerd[1537]: time="2025-09-08T23:50:36.981183449Z" level=info msg="connecting to shim f6a2d24781a0890e8806fc87aea626c52e75c40973a1e943d15a4c5c3bde266a" address="unix:///run/containerd/s/c4708086a25e0880a7a9399306266f11b7ed6691b471602f6ecff8a67fd3a572" protocol=ttrpc version=3 Sep 8 23:50:37.008693 systemd[1]: Started cri-containerd-f6a2d24781a0890e8806fc87aea626c52e75c40973a1e943d15a4c5c3bde266a.scope - libcontainer container f6a2d24781a0890e8806fc87aea626c52e75c40973a1e943d15a4c5c3bde266a. Sep 8 23:50:37.034880 containerd[1537]: time="2025-09-08T23:50:37.034809978Z" level=info msg="StartContainer for \"f6a2d24781a0890e8806fc87aea626c52e75c40973a1e943d15a4c5c3bde266a\" returns successfully" Sep 8 23:50:37.042557 systemd[1]: cri-containerd-f6a2d24781a0890e8806fc87aea626c52e75c40973a1e943d15a4c5c3bde266a.scope: Deactivated successfully. 
Sep 8 23:50:37.042946 containerd[1537]: time="2025-09-08T23:50:37.042896881Z" level=info msg="received exit event container_id:\"f6a2d24781a0890e8806fc87aea626c52e75c40973a1e943d15a4c5c3bde266a\" id:\"f6a2d24781a0890e8806fc87aea626c52e75c40973a1e943d15a4c5c3bde266a\" pid:4573 exited_at:{seconds:1757375437 nanos:42666322}" Sep 8 23:50:37.043007 containerd[1537]: time="2025-09-08T23:50:37.042981081Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f6a2d24781a0890e8806fc87aea626c52e75c40973a1e943d15a4c5c3bde266a\" id:\"f6a2d24781a0890e8806fc87aea626c52e75c40973a1e943d15a4c5c3bde266a\" pid:4573 exited_at:{seconds:1757375437 nanos:42666322}" Sep 8 23:50:37.943203 kubelet[2668]: E0908 23:50:37.943155 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:37.952691 containerd[1537]: time="2025-09-08T23:50:37.952650049Z" level=info msg="CreateContainer within sandbox \"2e2698f0bc27a46667d9f4f138f5784d7bef061eade7053678f46a4c386a7cc9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 8 23:50:37.969907 containerd[1537]: time="2025-09-08T23:50:37.969849574Z" level=info msg="Container 66562ba4571ba74beea52ac313996ab3c0f0f824243e96a5cf9838ca8b339655: CDI devices from CRI Config.CDIDevices: []" Sep 8 23:50:37.984516 containerd[1537]: time="2025-09-08T23:50:37.984452864Z" level=info msg="CreateContainer within sandbox \"2e2698f0bc27a46667d9f4f138f5784d7bef061eade7053678f46a4c386a7cc9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"66562ba4571ba74beea52ac313996ab3c0f0f824243e96a5cf9838ca8b339655\"" Sep 8 23:50:37.985041 containerd[1537]: time="2025-09-08T23:50:37.985023022Z" level=info msg="StartContainer for \"66562ba4571ba74beea52ac313996ab3c0f0f824243e96a5cf9838ca8b339655\"" Sep 8 23:50:37.986772 containerd[1537]: time="2025-09-08T23:50:37.986706099Z" level=info msg="connecting 
to shim 66562ba4571ba74beea52ac313996ab3c0f0f824243e96a5cf9838ca8b339655" address="unix:///run/containerd/s/c4708086a25e0880a7a9399306266f11b7ed6691b471602f6ecff8a67fd3a572" protocol=ttrpc version=3 Sep 8 23:50:38.013714 systemd[1]: Started cri-containerd-66562ba4571ba74beea52ac313996ab3c0f0f824243e96a5cf9838ca8b339655.scope - libcontainer container 66562ba4571ba74beea52ac313996ab3c0f0f824243e96a5cf9838ca8b339655. Sep 8 23:50:38.051732 systemd[1]: cri-containerd-66562ba4571ba74beea52ac313996ab3c0f0f824243e96a5cf9838ca8b339655.scope: Deactivated successfully. Sep 8 23:50:38.053459 containerd[1537]: time="2025-09-08T23:50:38.053111445Z" level=info msg="received exit event container_id:\"66562ba4571ba74beea52ac313996ab3c0f0f824243e96a5cf9838ca8b339655\" id:\"66562ba4571ba74beea52ac313996ab3c0f0f824243e96a5cf9838ca8b339655\" pid:4617 exited_at:{seconds:1757375438 nanos:52944605}" Sep 8 23:50:38.053459 containerd[1537]: time="2025-09-08T23:50:38.053338124Z" level=info msg="TaskExit event in podsandbox handler container_id:\"66562ba4571ba74beea52ac313996ab3c0f0f824243e96a5cf9838ca8b339655\" id:\"66562ba4571ba74beea52ac313996ab3c0f0f824243e96a5cf9838ca8b339655\" pid:4617 exited_at:{seconds:1757375438 nanos:52944605}" Sep 8 23:50:38.054114 containerd[1537]: time="2025-09-08T23:50:38.054049163Z" level=info msg="StartContainer for \"66562ba4571ba74beea52ac313996ab3c0f0f824243e96a5cf9838ca8b339655\" returns successfully" Sep 8 23:50:38.077774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66562ba4571ba74beea52ac313996ab3c0f0f824243e96a5cf9838ca8b339655-rootfs.mount: Deactivated successfully. 
Sep 8 23:50:38.762360 kubelet[2668]: E0908 23:50:38.762290 2668 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 8 23:50:38.948799 kubelet[2668]: E0908 23:50:38.948747 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:38.955133 containerd[1537]: time="2025-09-08T23:50:38.955067550Z" level=info msg="CreateContainer within sandbox \"2e2698f0bc27a46667d9f4f138f5784d7bef061eade7053678f46a4c386a7cc9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 8 23:50:38.968262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2374163974.mount: Deactivated successfully. Sep 8 23:50:38.972686 containerd[1537]: time="2025-09-08T23:50:38.972639514Z" level=info msg="Container 0ad50872288d3b6c62e3efe6ea39ef5b3de64ca23c4bc089ef5f7e4d685833f9: CDI devices from CRI Config.CDIDevices: []" Sep 8 23:50:38.973976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2612877239.mount: Deactivated successfully. 
Sep 8 23:50:38.979773 containerd[1537]: time="2025-09-08T23:50:38.979722820Z" level=info msg="CreateContainer within sandbox \"2e2698f0bc27a46667d9f4f138f5784d7bef061eade7053678f46a4c386a7cc9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0ad50872288d3b6c62e3efe6ea39ef5b3de64ca23c4bc089ef5f7e4d685833f9\"" Sep 8 23:50:38.980252 containerd[1537]: time="2025-09-08T23:50:38.980219979Z" level=info msg="StartContainer for \"0ad50872288d3b6c62e3efe6ea39ef5b3de64ca23c4bc089ef5f7e4d685833f9\"" Sep 8 23:50:38.981547 containerd[1537]: time="2025-09-08T23:50:38.981505136Z" level=info msg="connecting to shim 0ad50872288d3b6c62e3efe6ea39ef5b3de64ca23c4bc089ef5f7e4d685833f9" address="unix:///run/containerd/s/c4708086a25e0880a7a9399306266f11b7ed6691b471602f6ecff8a67fd3a572" protocol=ttrpc version=3 Sep 8 23:50:39.002677 systemd[1]: Started cri-containerd-0ad50872288d3b6c62e3efe6ea39ef5b3de64ca23c4bc089ef5f7e4d685833f9.scope - libcontainer container 0ad50872288d3b6c62e3efe6ea39ef5b3de64ca23c4bc089ef5f7e4d685833f9. Sep 8 23:50:39.032481 systemd[1]: cri-containerd-0ad50872288d3b6c62e3efe6ea39ef5b3de64ca23c4bc089ef5f7e4d685833f9.scope: Deactivated successfully. 
Sep 8 23:50:39.033807 containerd[1537]: time="2025-09-08T23:50:39.033031074Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0ad50872288d3b6c62e3efe6ea39ef5b3de64ca23c4bc089ef5f7e4d685833f9\" id:\"0ad50872288d3b6c62e3efe6ea39ef5b3de64ca23c4bc089ef5f7e4d685833f9\" pid:4656 exited_at:{seconds:1757375439 nanos:32655995}" Sep 8 23:50:39.034536 containerd[1537]: time="2025-09-08T23:50:39.033581553Z" level=info msg="received exit event container_id:\"0ad50872288d3b6c62e3efe6ea39ef5b3de64ca23c4bc089ef5f7e4d685833f9\" id:\"0ad50872288d3b6c62e3efe6ea39ef5b3de64ca23c4bc089ef5f7e4d685833f9\" pid:4656 exited_at:{seconds:1757375439 nanos:32655995}" Sep 8 23:50:39.035572 containerd[1537]: time="2025-09-08T23:50:39.035460429Z" level=info msg="StartContainer for \"0ad50872288d3b6c62e3efe6ea39ef5b3de64ca23c4bc089ef5f7e4d685833f9\" returns successfully" Sep 8 23:50:39.954695 kubelet[2668]: E0908 23:50:39.954609 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:39.964022 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ad50872288d3b6c62e3efe6ea39ef5b3de64ca23c4bc089ef5f7e4d685833f9-rootfs.mount: Deactivated successfully. 
Sep 8 23:50:39.966770 containerd[1537]: time="2025-09-08T23:50:39.966730676Z" level=info msg="CreateContainer within sandbox \"2e2698f0bc27a46667d9f4f138f5784d7bef061eade7053678f46a4c386a7cc9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 8 23:50:39.981192 containerd[1537]: time="2025-09-08T23:50:39.981152608Z" level=info msg="Container ecb7cc172aed48a587c7b4c110295d86510570575338589cad1f33c81f5fe176: CDI devices from CRI Config.CDIDevices: []" Sep 8 23:50:39.992174 containerd[1537]: time="2025-09-08T23:50:39.992133506Z" level=info msg="CreateContainer within sandbox \"2e2698f0bc27a46667d9f4f138f5784d7bef061eade7053678f46a4c386a7cc9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ecb7cc172aed48a587c7b4c110295d86510570575338589cad1f33c81f5fe176\"" Sep 8 23:50:39.992691 containerd[1537]: time="2025-09-08T23:50:39.992665425Z" level=info msg="StartContainer for \"ecb7cc172aed48a587c7b4c110295d86510570575338589cad1f33c81f5fe176\"" Sep 8 23:50:39.994089 containerd[1537]: time="2025-09-08T23:50:39.994053422Z" level=info msg="connecting to shim ecb7cc172aed48a587c7b4c110295d86510570575338589cad1f33c81f5fe176" address="unix:///run/containerd/s/c4708086a25e0880a7a9399306266f11b7ed6691b471602f6ecff8a67fd3a572" protocol=ttrpc version=3 Sep 8 23:50:40.017674 systemd[1]: Started cri-containerd-ecb7cc172aed48a587c7b4c110295d86510570575338589cad1f33c81f5fe176.scope - libcontainer container ecb7cc172aed48a587c7b4c110295d86510570575338589cad1f33c81f5fe176. 
Sep 8 23:50:40.051608 containerd[1537]: time="2025-09-08T23:50:40.051562151Z" level=info msg="StartContainer for \"ecb7cc172aed48a587c7b4c110295d86510570575338589cad1f33c81f5fe176\" returns successfully" Sep 8 23:50:40.115744 containerd[1537]: time="2025-09-08T23:50:40.115688108Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ecb7cc172aed48a587c7b4c110295d86510570575338589cad1f33c81f5fe176\" id:\"9e080e23e8edad9269041cb5c59c00fc3f18004ac203cb815a8d67667229013a\" pid:4725 exited_at:{seconds:1757375440 nanos:115136549}" Sep 8 23:50:40.319518 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 8 23:50:40.961684 kubelet[2668]: E0908 23:50:40.961642 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:40.977972 kubelet[2668]: I0908 23:50:40.977909 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6lhhv" podStartSLOduration=4.977890367 podStartE2EDuration="4.977890367s" podCreationTimestamp="2025-09-08 23:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:50:40.977746968 +0000 UTC m=+77.374582424" watchObservedRunningTime="2025-09-08 23:50:40.977890367 +0000 UTC m=+77.374725863" Sep 8 23:50:41.963782 kubelet[2668]: E0908 23:50:41.963693 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:42.679119 containerd[1537]: time="2025-09-08T23:50:42.679042827Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ecb7cc172aed48a587c7b4c110295d86510570575338589cad1f33c81f5fe176\" id:\"876b855c5a069ca8bd1e2ec84503f5529cdab85ebf9d85594b2e307a737084f8\" pid:5080 exit_status:1 
exited_at:{seconds:1757375442 nanos:678497948}" Sep 8 23:50:42.965492 kubelet[2668]: E0908 23:50:42.965365 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:43.274797 systemd-networkd[1452]: lxc_health: Link UP Sep 8 23:50:43.275098 systemd-networkd[1452]: lxc_health: Gained carrier Sep 8 23:50:44.399299 kubelet[2668]: E0908 23:50:44.399241 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:44.809818 containerd[1537]: time="2025-09-08T23:50:44.809771797Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ecb7cc172aed48a587c7b4c110295d86510570575338589cad1f33c81f5fe176\" id:\"8dc21a211beecfc47b43a134f3b971667c4f2ff731f2a78faadbea3644e57037\" pid:5260 exited_at:{seconds:1757375444 nanos:809021118}" Sep 8 23:50:44.969370 kubelet[2668]: E0908 23:50:44.969134 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:45.172630 systemd-networkd[1452]: lxc_health: Gained IPv6LL Sep 8 23:50:46.709282 kubelet[2668]: E0908 23:50:46.709242 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:46.927911 containerd[1537]: time="2025-09-08T23:50:46.927872954Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ecb7cc172aed48a587c7b4c110295d86510570575338589cad1f33c81f5fe176\" id:\"ca5bece2b3d8d2556d3ef761a6e0c134d2dfb190aa73e628787745a0d61c896d\" pid:5290 exited_at:{seconds:1757375446 nanos:927590474}" Sep 8 23:50:48.709030 kubelet[2668]: E0908 23:50:48.708987 2668 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:50:49.163692 containerd[1537]: time="2025-09-08T23:50:49.163651797Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ecb7cc172aed48a587c7b4c110295d86510570575338589cad1f33c81f5fe176\" id:\"f7fbaef606c63b9c8efadd8165e24a8de2b2f00edb4e360c728b46803c6f8361\" pid:5321 exited_at:{seconds:1757375449 nanos:163047158}" Sep 8 23:50:49.168160 sshd[4463]: Connection closed by 10.0.0.1 port 36668 Sep 8 23:50:49.168856 sshd-session[4456]: pam_unix(sshd:session): session closed for user core Sep 8 23:50:49.172001 systemd-logind[1517]: Session 25 logged out. Waiting for processes to exit. Sep 8 23:50:49.173552 systemd[1]: sshd@24-10.0.0.118:22-10.0.0.1:36668.service: Deactivated successfully. Sep 8 23:50:49.175097 systemd[1]: session-25.scope: Deactivated successfully. Sep 8 23:50:49.177613 systemd-logind[1517]: Removed session 25.