Sep 10 23:28:30.754006 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 10 23:28:30.754027 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Wed Sep 10 22:08:24 -00 2025
Sep 10 23:28:30.754036 kernel: KASLR enabled
Sep 10 23:28:30.754042 kernel: efi: EFI v2.7 by EDK II
Sep 10 23:28:30.754047 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 10 23:28:30.754053 kernel: random: crng init done
Sep 10 23:28:30.754059 kernel: secureboot: Secure boot disabled
Sep 10 23:28:30.754065 kernel: ACPI: Early table checksum verification disabled
Sep 10 23:28:30.754071 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 10 23:28:30.754077 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 10 23:28:30.754083 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:28:30.754089 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:28:30.754094 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:28:30.754100 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:28:30.754107 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:28:30.754114 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:28:30.754120 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:28:30.754126 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:28:30.754132 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:28:30.754138 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 10 23:28:30.754144 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 10 23:28:30.754150 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 23:28:30.754156 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 10 23:28:30.754162 kernel: Zone ranges:
Sep 10 23:28:30.754167 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 23:28:30.754174 kernel: DMA32 empty
Sep 10 23:28:30.754180 kernel: Normal empty
Sep 10 23:28:30.754186 kernel: Device empty
Sep 10 23:28:30.754192 kernel: Movable zone start for each node
Sep 10 23:28:30.754198 kernel: Early memory node ranges
Sep 10 23:28:30.754203 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 10 23:28:30.754209 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 10 23:28:30.754215 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 10 23:28:30.754221 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 10 23:28:30.754227 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 10 23:28:30.754233 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 10 23:28:30.754239 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 10 23:28:30.754246 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 10 23:28:30.754252 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 10 23:28:30.754258 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 10 23:28:30.754266 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 10 23:28:30.754273 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 10 23:28:30.754279 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 10 23:28:30.754286 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 23:28:30.754293 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 10 23:28:30.754300 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 10 23:28:30.754306 kernel: psci: probing for conduit method from ACPI.
Sep 10 23:28:30.754312 kernel: psci: PSCIv1.1 detected in firmware.
Sep 10 23:28:30.754319 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 10 23:28:30.754325 kernel: psci: Trusted OS migration not required
Sep 10 23:28:30.754331 kernel: psci: SMC Calling Convention v1.1
Sep 10 23:28:30.754338 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 10 23:28:30.754344 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 10 23:28:30.754352 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 10 23:28:30.754359 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 10 23:28:30.754365 kernel: Detected PIPT I-cache on CPU0
Sep 10 23:28:30.754371 kernel: CPU features: detected: GIC system register CPU interface
Sep 10 23:28:30.754377 kernel: CPU features: detected: Spectre-v4
Sep 10 23:28:30.754384 kernel: CPU features: detected: Spectre-BHB
Sep 10 23:28:30.754390 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 10 23:28:30.754396 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 10 23:28:30.754402 kernel: CPU features: detected: ARM erratum 1418040
Sep 10 23:28:30.754409 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 10 23:28:30.754415 kernel: alternatives: applying boot alternatives
Sep 10 23:28:30.754422 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fa1cdbdcf235a334637eb5be2b0973f49e389ed29b057fae47365cdb3976f114
Sep 10 23:28:30.754454 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 10 23:28:30.754461 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 10 23:28:30.754467 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 10 23:28:30.754474 kernel: Fallback order for Node 0: 0
Sep 10 23:28:30.754481 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 10 23:28:30.754487 kernel: Policy zone: DMA
Sep 10 23:28:30.754493 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 10 23:28:30.754500 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 10 23:28:30.754508 kernel: software IO TLB: area num 4.
Sep 10 23:28:30.754514 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 10 23:28:30.754520 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 10 23:28:30.754529 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 10 23:28:30.754535 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 10 23:28:30.754542 kernel: rcu: RCU event tracing is enabled.
Sep 10 23:28:30.754548 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 10 23:28:30.754555 kernel: Trampoline variant of Tasks RCU enabled.
Sep 10 23:28:30.754562 kernel: Tracing variant of Tasks RCU enabled.
Sep 10 23:28:30.754568 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 10 23:28:30.754575 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 10 23:28:30.754581 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 23:28:30.754588 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 23:28:30.754594 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 10 23:28:30.754601 kernel: GICv3: 256 SPIs implemented
Sep 10 23:28:30.754608 kernel: GICv3: 0 Extended SPIs implemented
Sep 10 23:28:30.754614 kernel: Root IRQ handler: gic_handle_irq
Sep 10 23:28:30.754620 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 10 23:28:30.754626 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 10 23:28:30.754633 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 10 23:28:30.754639 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 10 23:28:30.754645 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 10 23:28:30.754652 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 10 23:28:30.754658 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 10 23:28:30.754665 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 10 23:28:30.754671 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 10 23:28:30.754679 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:28:30.754685 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 10 23:28:30.754691 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 10 23:28:30.754698 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 10 23:28:30.754704 kernel: arm-pv: using stolen time PV
Sep 10 23:28:30.754718 kernel: Console: colour dummy device 80x25
Sep 10 23:28:30.754725 kernel: ACPI: Core revision 20240827
Sep 10 23:28:30.754732 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 10 23:28:30.754739 kernel: pid_max: default: 32768 minimum: 301
Sep 10 23:28:30.754745 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 10 23:28:30.754753 kernel: landlock: Up and running.
Sep 10 23:28:30.754760 kernel: SELinux: Initializing.
Sep 10 23:28:30.754766 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 23:28:30.754773 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 23:28:30.754780 kernel: rcu: Hierarchical SRCU implementation.
Sep 10 23:28:30.754786 kernel: rcu: Max phase no-delay instances is 400.
Sep 10 23:28:30.754793 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 10 23:28:30.754799 kernel: Remapping and enabling EFI services.
Sep 10 23:28:30.754806 kernel: smp: Bringing up secondary CPUs ...
Sep 10 23:28:30.754818 kernel: Detected PIPT I-cache on CPU1
Sep 10 23:28:30.754825 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 10 23:28:30.754832 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 10 23:28:30.754840 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:28:30.754847 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 10 23:28:30.754854 kernel: Detected PIPT I-cache on CPU2
Sep 10 23:28:30.754861 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 10 23:28:30.754868 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 10 23:28:30.754876 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:28:30.754883 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 10 23:28:30.754890 kernel: Detected PIPT I-cache on CPU3
Sep 10 23:28:30.754896 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 10 23:28:30.754903 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 10 23:28:30.754910 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:28:30.754916 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 10 23:28:30.754923 kernel: smp: Brought up 1 node, 4 CPUs
Sep 10 23:28:30.754930 kernel: SMP: Total of 4 processors activated.
Sep 10 23:28:30.754938 kernel: CPU: All CPU(s) started at EL1
Sep 10 23:28:30.754945 kernel: CPU features: detected: 32-bit EL0 Support
Sep 10 23:28:30.754952 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 10 23:28:30.754959 kernel: CPU features: detected: Common not Private translations
Sep 10 23:28:30.754965 kernel: CPU features: detected: CRC32 instructions
Sep 10 23:28:30.754972 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 10 23:28:30.754979 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 10 23:28:30.754986 kernel: CPU features: detected: LSE atomic instructions
Sep 10 23:28:30.754992 kernel: CPU features: detected: Privileged Access Never
Sep 10 23:28:30.755001 kernel: CPU features: detected: RAS Extension Support
Sep 10 23:28:30.755007 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 10 23:28:30.755014 kernel: alternatives: applying system-wide alternatives
Sep 10 23:28:30.755021 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 10 23:28:30.755028 kernel: Memory: 2424544K/2572288K available (11136K kernel code, 2436K rwdata, 9064K rodata, 38912K init, 1038K bss, 125408K reserved, 16384K cma-reserved)
Sep 10 23:28:30.755036 kernel: devtmpfs: initialized
Sep 10 23:28:30.755042 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 10 23:28:30.755049 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 10 23:28:30.755056 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 10 23:28:30.755064 kernel: 0 pages in range for non-PLT usage
Sep 10 23:28:30.755071 kernel: 508576 pages in range for PLT usage
Sep 10 23:28:30.755077 kernel: pinctrl core: initialized pinctrl subsystem
Sep 10 23:28:30.755084 kernel: SMBIOS 3.0.0 present.
Sep 10 23:28:30.755091 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 10 23:28:30.755097 kernel: DMI: Memory slots populated: 1/1
Sep 10 23:28:30.755104 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 10 23:28:30.755111 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 10 23:28:30.755118 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 10 23:28:30.755126 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 10 23:28:30.755133 kernel: audit: initializing netlink subsys (disabled)
Sep 10 23:28:30.755140 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Sep 10 23:28:30.755147 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 10 23:28:30.755154 kernel: cpuidle: using governor menu
Sep 10 23:28:30.755161 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 10 23:28:30.755167 kernel: ASID allocator initialised with 32768 entries
Sep 10 23:28:30.755174 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 10 23:28:30.755181 kernel: Serial: AMBA PL011 UART driver
Sep 10 23:28:30.755189 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 10 23:28:30.755196 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 10 23:28:30.755203 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 10 23:28:30.755210 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 10 23:28:30.755216 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 10 23:28:30.755223 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 10 23:28:30.755230 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 10 23:28:30.755237 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 10 23:28:30.755244 kernel: ACPI: Added _OSI(Module Device)
Sep 10 23:28:30.755252 kernel: ACPI: Added _OSI(Processor Device)
Sep 10 23:28:30.755259 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 10 23:28:30.755265 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 10 23:28:30.755272 kernel: ACPI: Interpreter enabled
Sep 10 23:28:30.755279 kernel: ACPI: Using GIC for interrupt routing
Sep 10 23:28:30.755286 kernel: ACPI: MCFG table detected, 1 entries
Sep 10 23:28:30.755292 kernel: ACPI: CPU0 has been hot-added
Sep 10 23:28:30.755299 kernel: ACPI: CPU1 has been hot-added
Sep 10 23:28:30.755306 kernel: ACPI: CPU2 has been hot-added
Sep 10 23:28:30.755312 kernel: ACPI: CPU3 has been hot-added
Sep 10 23:28:30.755320 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 10 23:28:30.755327 kernel: printk: legacy console [ttyAMA0] enabled
Sep 10 23:28:30.755334 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 10 23:28:30.755493 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 10 23:28:30.755568 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 10 23:28:30.755630 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 10 23:28:30.755690 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 10 23:28:30.755766 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 10 23:28:30.755777 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 10 23:28:30.755784 kernel: PCI host bridge to bus 0000:00
Sep 10 23:28:30.755849 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 10 23:28:30.755903 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 10 23:28:30.755955 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 10 23:28:30.756007 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 10 23:28:30.756083 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 10 23:28:30.756153 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 10 23:28:30.756228 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 10 23:28:30.756289 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 10 23:28:30.756347 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 10 23:28:30.756407 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 10 23:28:30.756485 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 10 23:28:30.756548 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 10 23:28:30.756602 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 10 23:28:30.756655 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 10 23:28:30.756707 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 10 23:28:30.756722 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 10 23:28:30.756729 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 10 23:28:30.756736 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 10 23:28:30.756746 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 10 23:28:30.756753 kernel: iommu: Default domain type: Translated
Sep 10 23:28:30.756760 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 10 23:28:30.756766 kernel: efivars: Registered efivars operations
Sep 10 23:28:30.756773 kernel: vgaarb: loaded
Sep 10 23:28:30.756780 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 10 23:28:30.756787 kernel: VFS: Disk quotas dquot_6.6.0
Sep 10 23:28:30.756794 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 10 23:28:30.756801 kernel: pnp: PnP ACPI init
Sep 10 23:28:30.756873 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 10 23:28:30.756883 kernel: pnp: PnP ACPI: found 1 devices
Sep 10 23:28:30.756891 kernel: NET: Registered PF_INET protocol family
Sep 10 23:28:30.756897 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 10 23:28:30.756905 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 10 23:28:30.756912 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 10 23:28:30.756918 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 10 23:28:30.756925 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 10 23:28:30.756934 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 10 23:28:30.756941 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 23:28:30.756948 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 23:28:30.756955 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 10 23:28:30.756962 kernel: PCI: CLS 0 bytes, default 64
Sep 10 23:28:30.756969 kernel: kvm [1]: HYP mode not available
Sep 10 23:28:30.756975 kernel: Initialise system trusted keyrings
Sep 10 23:28:30.756982 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 10 23:28:30.756989 kernel: Key type asymmetric registered
Sep 10 23:28:30.756997 kernel: Asymmetric key parser 'x509' registered
Sep 10 23:28:30.757004 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 10 23:28:30.757011 kernel: io scheduler mq-deadline registered
Sep 10 23:28:30.757018 kernel: io scheduler kyber registered
Sep 10 23:28:30.757025 kernel: io scheduler bfq registered
Sep 10 23:28:30.757032 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 10 23:28:30.757039 kernel: ACPI: button: Power Button [PWRB]
Sep 10 23:28:30.757046 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 10 23:28:30.757106 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 10 23:28:30.757116 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 10 23:28:30.757123 kernel: thunder_xcv, ver 1.0
Sep 10 23:28:30.757130 kernel: thunder_bgx, ver 1.0
Sep 10 23:28:30.757137 kernel: nicpf, ver 1.0
Sep 10 23:28:30.757143 kernel: nicvf, ver 1.0
Sep 10 23:28:30.757218 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 10 23:28:30.757275 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-10T23:28:30 UTC (1757546910)
Sep 10 23:28:30.757284 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 10 23:28:30.757291 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 10 23:28:30.757300 kernel: watchdog: NMI not fully supported
Sep 10 23:28:30.757306 kernel: watchdog: Hard watchdog permanently disabled
Sep 10 23:28:30.757313 kernel: NET: Registered PF_INET6 protocol family
Sep 10 23:28:30.757320 kernel: Segment Routing with IPv6
Sep 10 23:28:30.757327 kernel: In-situ OAM (IOAM) with IPv6
Sep 10 23:28:30.757334 kernel: NET: Registered PF_PACKET protocol family
Sep 10 23:28:30.757340 kernel: Key type dns_resolver registered
Sep 10 23:28:30.757347 kernel: registered taskstats version 1
Sep 10 23:28:30.757354 kernel: Loading compiled-in X.509 certificates
Sep 10 23:28:30.757362 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 614348c8450ce34f552a2f872e2a442c01d91c4b'
Sep 10 23:28:30.757369 kernel: Demotion targets for Node 0: null
Sep 10 23:28:30.757376 kernel: Key type .fscrypt registered
Sep 10 23:28:30.757383 kernel: Key type fscrypt-provisioning registered
Sep 10 23:28:30.757390 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 10 23:28:30.757396 kernel: ima: Allocated hash algorithm: sha1
Sep 10 23:28:30.757403 kernel: ima: No architecture policies found
Sep 10 23:28:30.757410 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 10 23:28:30.757418 kernel: clk: Disabling unused clocks
Sep 10 23:28:30.757435 kernel: PM: genpd: Disabling unused power domains
Sep 10 23:28:30.757443 kernel: Warning: unable to open an initial console.
Sep 10 23:28:30.757450 kernel: Freeing unused kernel memory: 38912K
Sep 10 23:28:30.757456 kernel: Run /init as init process
Sep 10 23:28:30.757463 kernel: with arguments:
Sep 10 23:28:30.757470 kernel: /init
Sep 10 23:28:30.757476 kernel: with environment:
Sep 10 23:28:30.757483 kernel: HOME=/
Sep 10 23:28:30.757490 kernel: TERM=linux
Sep 10 23:28:30.757499 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 10 23:28:30.757507 systemd[1]: Successfully made /usr/ read-only.
Sep 10 23:28:30.757516 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 10 23:28:30.757524 systemd[1]: Detected virtualization kvm.
Sep 10 23:28:30.757532 systemd[1]: Detected architecture arm64.
Sep 10 23:28:30.757539 systemd[1]: Running in initrd.
Sep 10 23:28:30.757546 systemd[1]: No hostname configured, using default hostname.
Sep 10 23:28:30.757555 systemd[1]: Hostname set to .
Sep 10 23:28:30.757562 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 23:28:30.757569 systemd[1]: Queued start job for default target initrd.target.
Sep 10 23:28:30.757577 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 23:28:30.757584 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 23:28:30.757593 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 10 23:28:30.757600 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 23:28:30.757608 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 10 23:28:30.757617 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 10 23:28:30.757625 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 10 23:28:30.757633 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 10 23:28:30.757641 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 23:28:30.757648 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 23:28:30.757656 systemd[1]: Reached target paths.target - Path Units.
Sep 10 23:28:30.757663 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 23:28:30.757672 systemd[1]: Reached target swap.target - Swaps.
Sep 10 23:28:30.757679 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 23:28:30.757686 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 23:28:30.757694 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 23:28:30.757701 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 10 23:28:30.757713 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 10 23:28:30.757722 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:28:30.757729 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:28:30.757738 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:28:30.757746 systemd[1]: Reached target sockets.target - Socket Units.
Sep 10 23:28:30.757753 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 10 23:28:30.757761 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 23:28:30.757768 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 10 23:28:30.757776 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 10 23:28:30.757783 systemd[1]: Starting systemd-fsck-usr.service...
Sep 10 23:28:30.757791 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 23:28:30.757799 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 23:28:30.757807 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:28:30.757815 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 10 23:28:30.757823 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 23:28:30.757830 systemd[1]: Finished systemd-fsck-usr.service.
Sep 10 23:28:30.757858 systemd-journald[244]: Collecting audit messages is disabled.
Sep 10 23:28:30.757877 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 23:28:30.757885 systemd-journald[244]: Journal started
Sep 10 23:28:30.757904 systemd-journald[244]: Runtime Journal (/run/log/journal/056f870e86534d0883ec212cec7dfba8) is 6M, max 48.5M, 42.4M free.
Sep 10 23:28:30.764502 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 10 23:28:30.750374 systemd-modules-load[245]: Inserted module 'overlay'
Sep 10 23:28:30.766562 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:28:30.766581 kernel: Bridge firewalling registered
Sep 10 23:28:30.766491 systemd-modules-load[245]: Inserted module 'br_netfilter'
Sep 10 23:28:30.770058 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 23:28:30.771486 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:28:30.772523 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 23:28:30.776497 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 23:28:30.778024 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 23:28:30.779706 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 23:28:30.787364 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 23:28:30.795328 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:28:30.796476 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:28:30.799249 systemd-tmpfiles[272]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 10 23:28:30.802024 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 23:28:30.803287 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:28:30.805746 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 10 23:28:30.807676 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 23:28:30.838044 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fa1cdbdcf235a334637eb5be2b0973f49e389ed29b057fae47365cdb3976f114
Sep 10 23:28:30.852109 systemd-resolved[289]: Positive Trust Anchors:
Sep 10 23:28:30.852128 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 23:28:30.852160 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 23:28:30.857320 systemd-resolved[289]: Defaulting to hostname 'linux'.
Sep 10 23:28:30.858396 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 23:28:30.860388 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 23:28:30.914457 kernel: SCSI subsystem initialized
Sep 10 23:28:30.918441 kernel: Loading iSCSI transport class v2.0-870.
Sep 10 23:28:30.926463 kernel: iscsi: registered transport (tcp)
Sep 10 23:28:30.938464 kernel: iscsi: registered transport (qla4xxx)
Sep 10 23:28:30.938504 kernel: QLogic iSCSI HBA Driver
Sep 10 23:28:30.954681 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 10 23:28:30.978269 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 23:28:30.980257 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 10 23:28:31.027516 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 10 23:28:31.029574 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 10 23:28:31.101455 kernel: raid6: neonx8 gen() 15744 MB/s
Sep 10 23:28:31.118443 kernel: raid6: neonx4 gen() 15815 MB/s
Sep 10 23:28:31.135444 kernel: raid6: neonx2 gen() 13226 MB/s
Sep 10 23:28:31.152456 kernel: raid6: neonx1 gen() 10374 MB/s
Sep 10 23:28:31.169465 kernel: raid6: int64x8 gen() 6672 MB/s
Sep 10 23:28:31.186462 kernel: raid6: int64x4 gen() 7196 MB/s
Sep 10 23:28:31.203456 kernel: raid6: int64x2 gen() 6106 MB/s
Sep 10 23:28:31.220450 kernel: raid6: int64x1 gen() 4929 MB/s
Sep 10 23:28:31.220466 kernel: raid6: using algorithm neonx4 gen() 15815 MB/s
Sep 10 23:28:31.237486 kernel: raid6: .... xor() 12265 MB/s, rmw enabled
Sep 10 23:28:31.237532 kernel: raid6: using neon recovery algorithm
Sep 10 23:28:31.242563 kernel: xor: measuring software checksum speed
Sep 10 23:28:31.242601 kernel: 8regs : 21641 MB/sec
Sep 10 23:28:31.243617 kernel: 32regs : 20902 MB/sec
Sep 10 23:28:31.243629 kernel: arm64_neon : 28138 MB/sec
Sep 10 23:28:31.243638 kernel: xor: using function: arm64_neon (28138 MB/sec)
Sep 10 23:28:31.296461 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 10 23:28:31.302531 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 23:28:31.304730 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 23:28:31.332302 systemd-udevd[500]: Using default interface naming scheme 'v255'.
Sep 10 23:28:31.336349 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 23:28:31.338110 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 10 23:28:31.367288 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation
Sep 10 23:28:31.391659 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 23:28:31.393935 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 23:28:31.450480 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 23:28:31.453254 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 10 23:28:31.503910 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 10 23:28:31.504100 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 10 23:28:31.509661 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 10 23:28:31.509734 kernel: GPT:9289727 != 19775487
Sep 10 23:28:31.509747 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 10 23:28:31.513527 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 23:28:31.517471 kernel: GPT:9289727 != 19775487
Sep 10 23:28:31.517504 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 10 23:28:31.517514 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 23:28:31.513654 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:28:31.520131 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:28:31.522609 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:28:31.547710 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 10 23:28:31.548871 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:28:31.560204 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 10 23:28:31.561256 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 10 23:28:31.563776 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 10 23:28:31.571538 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 10 23:28:31.583870 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 23:28:31.584927 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 23:28:31.586424 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 23:28:31.588124 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 23:28:31.590370 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 10 23:28:31.591996 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 10 23:28:31.614051 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 23:28:31.663909 disk-uuid[593]: Primary Header is updated.
Sep 10 23:28:31.663909 disk-uuid[593]: Secondary Entries is updated.
Sep 10 23:28:31.663909 disk-uuid[593]: Secondary Header is updated.
Sep 10 23:28:31.668463 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 23:28:32.679280 disk-uuid[601]: The operation has completed successfully.
Sep 10 23:28:32.680759 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 23:28:32.704794 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 10 23:28:32.705597 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 10 23:28:32.731274 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 10 23:28:32.761462 sh[612]: Success
Sep 10 23:28:32.773868 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 10 23:28:32.773913 kernel: device-mapper: uevent: version 1.0.3
Sep 10 23:28:32.774849 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 10 23:28:32.782458 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 10 23:28:32.809120 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 10 23:28:32.810697 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 10 23:28:32.823726 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 10 23:28:32.830575 kernel: BTRFS: device fsid 9579753c-128c-4fc3-99bd-ee6c9d1a9b4e devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (624)
Sep 10 23:28:32.830609 kernel: BTRFS info (device dm-0): first mount of filesystem 9579753c-128c-4fc3-99bd-ee6c9d1a9b4e
Sep 10 23:28:32.830619 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:28:32.835448 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 10 23:28:32.835496 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 10 23:28:32.836206 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 10 23:28:32.837300 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 10 23:28:32.838332 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 10 23:28:32.839095 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 10 23:28:32.841774 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 10 23:28:32.862355 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (656)
Sep 10 23:28:32.862394 kernel: BTRFS info (device vda6): first mount of filesystem 3ae7220e-23eb-4db6-8e25-d26e17ea4ea4
Sep 10 23:28:32.862404 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:28:32.865498 kernel: BTRFS info (device vda6): turning on async discard
Sep 10 23:28:32.865529 kernel: BTRFS info (device vda6): enabling free space tree
Sep 10 23:28:32.870464 kernel: BTRFS info (device vda6): last unmount of filesystem 3ae7220e-23eb-4db6-8e25-d26e17ea4ea4
Sep 10 23:28:32.870805 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 10 23:28:32.872714 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 10 23:28:32.945512 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 23:28:32.948995 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 23:28:32.978488 ignition[698]: Ignition 2.21.0
Sep 10 23:28:32.978503 ignition[698]: Stage: fetch-offline
Sep 10 23:28:32.978541 ignition[698]: no configs at "/usr/lib/ignition/base.d"
Sep 10 23:28:32.978550 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:28:32.978759 ignition[698]: parsed url from cmdline: ""
Sep 10 23:28:32.978763 ignition[698]: no config URL provided
Sep 10 23:28:32.978768 ignition[698]: reading system config file "/usr/lib/ignition/user.ign"
Sep 10 23:28:32.978774 ignition[698]: no config at "/usr/lib/ignition/user.ign"
Sep 10 23:28:32.978794 ignition[698]: op(1): [started] loading QEMU firmware config module
Sep 10 23:28:32.978798 ignition[698]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 10 23:28:32.986145 ignition[698]: op(1): [finished] loading QEMU firmware config module
Sep 10 23:28:32.987321 systemd-networkd[803]: lo: Link UP
Sep 10 23:28:32.987333 systemd-networkd[803]: lo: Gained carrier
Sep 10 23:28:32.988016 systemd-networkd[803]: Enumeration completed
Sep 10 23:28:32.988386 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 23:28:32.988389 systemd-networkd[803]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 23:28:32.989047 systemd-networkd[803]: eth0: Link UP
Sep 10 23:28:32.989416 systemd-networkd[803]: eth0: Gained carrier
Sep 10 23:28:32.989443 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 23:28:32.989571 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 23:28:32.991297 systemd[1]: Reached target network.target - Network.
Sep 10 23:28:33.011473 systemd-networkd[803]: eth0: DHCPv4 address 10.0.0.56/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 10 23:28:33.035413 ignition[698]: parsing config with SHA512: 69934eef97d6e1637c714b2a500a077d513d45c623c083edcebd8755a64efde5b26c11897e40bf9393b5ba17bf58012ecc5e50f09029482f487bc0434dd523a5
Sep 10 23:28:33.040302 unknown[698]: fetched base config from "system"
Sep 10 23:28:33.040311 unknown[698]: fetched user config from "qemu"
Sep 10 23:28:33.040853 ignition[698]: fetch-offline: fetch-offline passed
Sep 10 23:28:33.040913 ignition[698]: Ignition finished successfully
Sep 10 23:28:33.042607 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 23:28:33.043852 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 10 23:28:33.045913 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 10 23:28:33.080337 ignition[810]: Ignition 2.21.0
Sep 10 23:28:33.080356 ignition[810]: Stage: kargs
Sep 10 23:28:33.080510 ignition[810]: no configs at "/usr/lib/ignition/base.d"
Sep 10 23:28:33.080520 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:28:33.081995 ignition[810]: kargs: kargs passed
Sep 10 23:28:33.082049 ignition[810]: Ignition finished successfully
Sep 10 23:28:33.085495 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 10 23:28:33.087235 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 10 23:28:33.118937 ignition[818]: Ignition 2.21.0
Sep 10 23:28:33.118951 ignition[818]: Stage: disks
Sep 10 23:28:33.119073 ignition[818]: no configs at "/usr/lib/ignition/base.d"
Sep 10 23:28:33.119082 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:28:33.119780 ignition[818]: disks: disks passed
Sep 10 23:28:33.119821 ignition[818]: Ignition finished successfully
Sep 10 23:28:33.122556 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 10 23:28:33.124210 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 10 23:28:33.125902 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 10 23:28:33.126853 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 23:28:33.128376 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 10 23:28:33.129980 systemd[1]: Reached target basic.target - Basic System.
Sep 10 23:28:33.132068 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 10 23:28:33.168496 systemd-fsck[828]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 10 23:28:33.173755 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 10 23:28:33.177684 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 10 23:28:33.243458 kernel: EXT4-fs (vda9): mounted filesystem e1f6153c-c458-4b1b-a85a-9d30297a863a r/w with ordered data mode. Quota mode: none.
Sep 10 23:28:33.243956 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 10 23:28:33.245045 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 10 23:28:33.247002 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 23:28:33.248454 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 10 23:28:33.249206 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 10 23:28:33.249251 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 10 23:28:33.249274 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 23:28:33.262017 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 10 23:28:33.265014 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 10 23:28:33.269916 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (836)
Sep 10 23:28:33.269946 kernel: BTRFS info (device vda6): first mount of filesystem 3ae7220e-23eb-4db6-8e25-d26e17ea4ea4
Sep 10 23:28:33.270666 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:28:33.273456 kernel: BTRFS info (device vda6): turning on async discard
Sep 10 23:28:33.273494 kernel: BTRFS info (device vda6): enabling free space tree
Sep 10 23:28:33.275217 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 23:28:33.302240 initrd-setup-root[861]: cut: /sysroot/etc/passwd: No such file or directory
Sep 10 23:28:33.305160 initrd-setup-root[868]: cut: /sysroot/etc/group: No such file or directory
Sep 10 23:28:33.308164 initrd-setup-root[875]: cut: /sysroot/etc/shadow: No such file or directory
Sep 10 23:28:33.311991 initrd-setup-root[882]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 10 23:28:33.379538 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 10 23:28:33.381277 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 10 23:28:33.382880 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 10 23:28:33.397475 kernel: BTRFS info (device vda6): last unmount of filesystem 3ae7220e-23eb-4db6-8e25-d26e17ea4ea4
Sep 10 23:28:33.412572 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 10 23:28:33.424588 ignition[950]: INFO : Ignition 2.21.0
Sep 10 23:28:33.424588 ignition[950]: INFO : Stage: mount
Sep 10 23:28:33.426003 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 23:28:33.426003 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:28:33.427512 ignition[950]: INFO : mount: mount passed
Sep 10 23:28:33.427512 ignition[950]: INFO : Ignition finished successfully
Sep 10 23:28:33.428845 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 10 23:28:33.430962 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 10 23:28:33.829457 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 10 23:28:33.830924 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 23:28:33.859925 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (963)
Sep 10 23:28:33.859965 kernel: BTRFS info (device vda6): first mount of filesystem 3ae7220e-23eb-4db6-8e25-d26e17ea4ea4
Sep 10 23:28:33.859983 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:28:33.865325 kernel: BTRFS info (device vda6): turning on async discard
Sep 10 23:28:33.865361 kernel: BTRFS info (device vda6): enabling free space tree
Sep 10 23:28:33.864616 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 23:28:33.891980 ignition[980]: INFO : Ignition 2.21.0
Sep 10 23:28:33.891980 ignition[980]: INFO : Stage: files
Sep 10 23:28:33.893367 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 23:28:33.893367 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:28:33.895256 ignition[980]: DEBUG : files: compiled without relabeling support, skipping
Sep 10 23:28:33.895256 ignition[980]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 10 23:28:33.895256 ignition[980]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 10 23:28:33.898540 ignition[980]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 10 23:28:33.898540 ignition[980]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 10 23:28:33.898540 ignition[980]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 10 23:28:33.898540 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 10 23:28:33.898540 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 10 23:28:33.896642 unknown[980]: wrote ssh authorized keys file for user: core
Sep 10 23:28:33.937680 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 10 23:28:34.468586 systemd-networkd[803]: eth0: Gained IPv6LL
Sep 10 23:28:34.495774 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 10 23:28:34.497403 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 10 23:28:34.497403 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 10 23:28:34.699858 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 10 23:28:34.816767 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 10 23:28:34.816767 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 10 23:28:34.820451 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 10 23:28:34.820451 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 23:28:34.820451 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 23:28:34.820451 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 23:28:34.820451 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 23:28:34.820451 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 23:28:34.820451 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 23:28:34.820451 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 23:28:34.820451 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 23:28:34.820451 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 10 23:28:34.834751 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 10 23:28:34.834751 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 10 23:28:34.834751 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 10 23:28:35.233353 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 10 23:28:35.815001 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 10 23:28:35.815001 ignition[980]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 10 23:28:35.818624 ignition[980]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 23:28:35.818624 ignition[980]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 23:28:35.818624 ignition[980]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 10 23:28:35.818624 ignition[980]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 10 23:28:35.818624 ignition[980]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 23:28:35.818624 ignition[980]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 23:28:35.818624 ignition[980]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 10 23:28:35.818624 ignition[980]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 10 23:28:35.836621 ignition[980]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 23:28:35.839820 ignition[980]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 23:28:35.841057 ignition[980]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 10 23:28:35.841057 ignition[980]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 10 23:28:35.841057 ignition[980]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 10 23:28:35.841057 ignition[980]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 23:28:35.841057 ignition[980]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 23:28:35.841057 ignition[980]: INFO : files: files passed
Sep 10 23:28:35.841057 ignition[980]: INFO : Ignition finished successfully
Sep 10 23:28:35.842545 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 10 23:28:35.845773 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 10 23:28:35.848779 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 10 23:28:35.869286 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 10 23:28:35.870231 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 10 23:28:35.872023 initrd-setup-root-after-ignition[1009]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 10 23:28:35.874194 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 23:28:35.875711 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 23:28:35.876918 initrd-setup-root-after-ignition[1011]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 23:28:35.876754 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 23:28:35.877951 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 10 23:28:35.880488 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 10 23:28:35.910666 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 10 23:28:35.911481 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 10 23:28:35.912541 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 10 23:28:35.914130 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 10 23:28:35.915819 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 10 23:28:35.916491 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 10 23:28:35.941048 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 23:28:35.944638 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 10 23:28:35.964280 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 10 23:28:35.966604 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 23:28:35.968406 systemd[1]: Stopped target timers.target - Timer Units.
Sep 10 23:28:35.969222 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 10 23:28:35.969329 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 23:28:35.971497 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 10 23:28:35.973050 systemd[1]: Stopped target basic.target - Basic System.
Sep 10 23:28:35.974348 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 10 23:28:35.975688 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 23:28:35.977380 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 10 23:28:35.978976 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 10 23:28:35.980417 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 10 23:28:35.981895 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 23:28:35.983493 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 10 23:28:35.985083 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 10 23:28:35.986420 systemd[1]: Stopped target swap.target - Swaps.
Sep 10 23:28:35.987708 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 10 23:28:35.987818 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 23:28:35.989665 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 10 23:28:35.991155 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 23:28:35.993319 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 10 23:28:35.996498 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 23:28:35.997452 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 10 23:28:35.997567 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 10 23:28:35.999975 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 10 23:28:36.000088 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 23:28:36.001718 systemd[1]: Stopped target paths.target - Path Units.
Sep 10 23:28:36.002918 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 10 23:28:36.003048 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 23:28:36.004554 systemd[1]: Stopped target slices.target - Slice Units.
Sep 10 23:28:36.005749 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 10 23:28:36.007199 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 10 23:28:36.007278 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 23:28:36.009029 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 10 23:28:36.009098 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 23:28:36.010499 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 10 23:28:36.010609 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 23:28:36.011987 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 10 23:28:36.012077 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 10 23:28:36.014061 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 10 23:28:36.015170 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 10 23:28:36.015287 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 23:28:36.017512 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 10 23:28:36.019052 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 10 23:28:36.019161 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 23:28:36.022098 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 10 23:28:36.022189 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 23:28:36.027318 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 10 23:28:36.033456 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 10 23:28:36.042262 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 10 23:28:36.049964 ignition[1035]: INFO : Ignition 2.21.0
Sep 10 23:28:36.052521 ignition[1035]: INFO : Stage: umount
Sep 10 23:28:36.052521 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 23:28:36.052521 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:28:36.054743 ignition[1035]: INFO : umount: umount passed
Sep 10 23:28:36.054743 ignition[1035]: INFO : Ignition finished successfully
Sep 10 23:28:36.055097 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 10 23:28:36.055203 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 10 23:28:36.056500 systemd[1]: Stopped target network.target - Network.
Sep 10 23:28:36.059646 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 10 23:28:36.059719 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 10 23:28:36.060927 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 10 23:28:36.060971 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 10 23:28:36.062235 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 10 23:28:36.062277 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 10 23:28:36.063706 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 10 23:28:36.063745 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 10 23:28:36.065529 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 10 23:28:36.066649 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 10 23:28:36.074813 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 10 23:28:36.074924 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 10 23:28:36.080282 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 10 23:28:36.081629 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 10 23:28:36.081681 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:28:36.085844 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 10 23:28:36.086030 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 10 23:28:36.088125 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 10 23:28:36.089737 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 10 23:28:36.090080 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 10 23:28:36.091087 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 10 23:28:36.091118 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:28:36.094558 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 10 23:28:36.097647 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 10 23:28:36.097725 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 23:28:36.099437 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 10 23:28:36.099480 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:28:36.101928 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 10 23:28:36.101972 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:28:36.103500 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 23:28:36.106962 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 10 23:28:36.115854 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 10 23:28:36.125590 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 10 23:28:36.126782 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 10 23:28:36.126911 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 23:28:36.128574 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 10 23:28:36.130303 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 10 23:28:36.132505 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 10 23:28:36.133327 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:28:36.135205 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 10 23:28:36.136042 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:28:36.137505 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 10 23:28:36.137555 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 23:28:36.139065 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 10 23:28:36.139109 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 10 23:28:36.141248 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 10 23:28:36.141298 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 23:28:36.143704 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 10 23:28:36.143754 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 10 23:28:36.146201 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 10 23:28:36.147623 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 10 23:28:36.147673 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 23:28:36.150159 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 10 23:28:36.150199 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:28:36.153036 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 23:28:36.153078 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:28:36.158413 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 10 23:28:36.158568 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 10 23:28:36.160218 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 10 23:28:36.162325 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 10 23:28:36.192610 systemd[1]: Switching root.
Sep 10 23:28:36.223702 systemd-journald[244]: Journal stopped
Sep 10 23:28:37.000017 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Sep 10 23:28:37.000065 kernel: SELinux: policy capability network_peer_controls=1
Sep 10 23:28:37.000083 kernel: SELinux: policy capability open_perms=1
Sep 10 23:28:37.000093 kernel: SELinux: policy capability extended_socket_class=1
Sep 10 23:28:37.000103 kernel: SELinux: policy capability always_check_network=0
Sep 10 23:28:37.000116 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 10 23:28:37.000125 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 10 23:28:37.000134 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 10 23:28:37.000147 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 10 23:28:37.000156 kernel: SELinux: policy capability userspace_initial_context=0
Sep 10 23:28:37.000166 kernel: audit: type=1403 audit(1757546916.417:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 10 23:28:37.000180 systemd[1]: Successfully loaded SELinux policy in 64.817ms.
Sep 10 23:28:37.000199 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.324ms.
Sep 10 23:28:37.000211 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 10 23:28:37.000222 systemd[1]: Detected virtualization kvm.
Sep 10 23:28:37.000232 systemd[1]: Detected architecture arm64.
Sep 10 23:28:37.000242 systemd[1]: Detected first boot.
Sep 10 23:28:37.000252 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 23:28:37.000261 zram_generator::config[1081]: No configuration found.
Sep 10 23:28:37.000272 kernel: NET: Registered PF_VSOCK protocol family
Sep 10 23:28:37.000281 systemd[1]: Populated /etc with preset unit settings.
Sep 10 23:28:37.000293 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 10 23:28:37.000304 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 10 23:28:37.000313 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 10 23:28:37.000323 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 10 23:28:37.000333 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 10 23:28:37.000343 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 10 23:28:37.000356 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 10 23:28:37.000367 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 10 23:28:37.000378 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 10 23:28:37.000388 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 10 23:28:37.000399 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 10 23:28:37.000408 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 10 23:28:37.000419 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 23:28:37.000442 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 23:28:37.000454 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 10 23:28:37.000463 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 10 23:28:37.000474 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 10 23:28:37.000486 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 23:28:37.000496 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 10 23:28:37.000507 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 23:28:37.000516 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 23:28:37.000526 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 10 23:28:37.000536 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 10 23:28:37.000546 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 10 23:28:37.000557 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 10 23:28:37.000568 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 23:28:37.000578 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 23:28:37.000588 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 23:28:37.000599 systemd[1]: Reached target swap.target - Swaps.
Sep 10 23:28:37.000609 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 10 23:28:37.000619 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 10 23:28:37.000629 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 10 23:28:37.000639 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:28:37.000649 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:28:37.000661 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:28:37.000672 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 10 23:28:37.000682 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 10 23:28:37.000725 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 10 23:28:37.000740 systemd[1]: Mounting media.mount - External Media Directory...
Sep 10 23:28:37.000750 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 10 23:28:37.000761 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 10 23:28:37.000772 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 10 23:28:37.000783 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 10 23:28:37.000796 systemd[1]: Reached target machines.target - Containers.
Sep 10 23:28:37.000806 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 10 23:28:37.000816 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:28:37.000826 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 23:28:37.000836 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 10 23:28:37.000846 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 23:28:37.000858 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 23:28:37.000868 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:28:37.000879 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 10 23:28:37.000889 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 23:28:37.000899 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 10 23:28:37.000909 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 10 23:28:37.000920 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 10 23:28:37.000930 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 10 23:28:37.000940 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 10 23:28:37.000951 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:28:37.000962 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 23:28:37.000972 kernel: ACPI: bus type drm_connector registered
Sep 10 23:28:37.000982 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 23:28:37.000992 kernel: loop: module loaded
Sep 10 23:28:37.001001 kernel: fuse: init (API version 7.41)
Sep 10 23:28:37.001010 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 10 23:28:37.001021 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 10 23:28:37.001031 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 10 23:28:37.001041 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 23:28:37.001053 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 10 23:28:37.001064 systemd[1]: Stopped verity-setup.service.
Sep 10 23:28:37.001074 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 10 23:28:37.001084 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 10 23:28:37.001093 systemd[1]: Mounted media.mount - External Media Directory.
Sep 10 23:28:37.001103 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 10 23:28:37.001115 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 10 23:28:37.001124 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 10 23:28:37.001135 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 10 23:28:37.001145 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 23:28:37.001183 systemd-journald[1151]: Collecting audit messages is disabled.
Sep 10 23:28:37.001206 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 10 23:28:37.001217 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 10 23:28:37.001228 systemd-journald[1151]: Journal started
Sep 10 23:28:37.001248 systemd-journald[1151]: Runtime Journal (/run/log/journal/056f870e86534d0883ec212cec7dfba8) is 6M, max 48.5M, 42.4M free.
Sep 10 23:28:36.790074 systemd[1]: Queued start job for default target multi-user.target.
Sep 10 23:28:36.813504 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 10 23:28:36.813914 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 10 23:28:37.004835 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 23:28:37.005793 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 23:28:37.006005 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 23:28:37.007158 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 23:28:37.008475 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 23:28:37.009699 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:28:37.009859 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:28:37.011311 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 10 23:28:37.011560 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 10 23:28:37.012589 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 23:28:37.012797 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 23:28:37.014139 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:28:37.015554 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 23:28:37.016792 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 10 23:28:37.018123 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 10 23:28:37.029540 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 23:28:37.031502 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 10 23:28:37.033399 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 10 23:28:37.035088 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 10 23:28:37.036190 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 10 23:28:37.036226 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 23:28:37.037911 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 10 23:28:37.045240 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 10 23:28:37.046606 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:28:37.047796 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 10 23:28:37.049538 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 10 23:28:37.050788 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 23:28:37.053589 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 10 23:28:37.054469 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 23:28:37.055455 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 23:28:37.058122 systemd-journald[1151]: Time spent on flushing to /var/log/journal/056f870e86534d0883ec212cec7dfba8 is 20.288ms for 888 entries.
Sep 10 23:28:37.058122 systemd-journald[1151]: System Journal (/var/log/journal/056f870e86534d0883ec212cec7dfba8) is 8M, max 195.6M, 187.6M free.
Sep 10 23:28:37.092703 systemd-journald[1151]: Received client request to flush runtime journal.
Sep 10 23:28:37.092758 kernel: loop0: detected capacity change from 0 to 119320
Sep 10 23:28:37.092776 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 10 23:28:37.059824 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 10 23:28:37.063774 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 10 23:28:37.067206 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 10 23:28:37.071297 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 10 23:28:37.079307 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 10 23:28:37.082634 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 10 23:28:37.087357 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 10 23:28:37.089887 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:28:37.098933 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 10 23:28:37.104812 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 10 23:28:37.108043 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 23:28:37.108501 kernel: loop1: detected capacity change from 0 to 100600
Sep 10 23:28:37.118980 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 10 23:28:37.132349 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Sep 10 23:28:37.132676 kernel: loop2: detected capacity change from 0 to 207008
Sep 10 23:28:37.132362 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Sep 10 23:28:37.136107 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:28:37.151459 kernel: loop3: detected capacity change from 0 to 119320
Sep 10 23:28:37.158458 kernel: loop4: detected capacity change from 0 to 100600
Sep 10 23:28:37.166798 kernel: loop5: detected capacity change from 0 to 207008
Sep 10 23:28:37.173392 (sd-merge)[1220]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 10 23:28:37.173809 (sd-merge)[1220]: Merged extensions into '/usr'.
Sep 10 23:28:37.178452 systemd[1]: Reload requested from client PID 1198 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 10 23:28:37.178466 systemd[1]: Reloading...
Sep 10 23:28:37.235477 zram_generator::config[1242]: No configuration found.
Sep 10 23:28:37.345085 ldconfig[1193]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 10 23:28:37.384187 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 10 23:28:37.384301 systemd[1]: Reloading finished in 205 ms.
Sep 10 23:28:37.400935 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 10 23:28:37.402277 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 10 23:28:37.415652 systemd[1]: Starting ensure-sysext.service...
Sep 10 23:28:37.417237 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 23:28:37.427239 systemd[1]: Reload requested from client PID 1280 ('systemctl') (unit ensure-sysext.service)...
Sep 10 23:28:37.427255 systemd[1]: Reloading...
Sep 10 23:28:37.430412 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 10 23:28:37.430466 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 10 23:28:37.430725 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 10 23:28:37.430919 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 10 23:28:37.431536 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 10 23:28:37.431754 systemd-tmpfiles[1281]: ACLs are not supported, ignoring.
Sep 10 23:28:37.431803 systemd-tmpfiles[1281]: ACLs are not supported, ignoring.
Sep 10 23:28:37.435722 systemd-tmpfiles[1281]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 23:28:37.435733 systemd-tmpfiles[1281]: Skipping /boot
Sep 10 23:28:37.441834 systemd-tmpfiles[1281]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 23:28:37.441848 systemd-tmpfiles[1281]: Skipping /boot
Sep 10 23:28:37.472457 zram_generator::config[1308]: No configuration found.
Sep 10 23:28:37.600967 systemd[1]: Reloading finished in 173 ms.
Sep 10 23:28:37.609895 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 10 23:28:37.615115 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:28:37.628461 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 10 23:28:37.630673 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 10 23:28:37.647655 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 10 23:28:37.650564 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 23:28:37.654595 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 23:28:37.657044 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 10 23:28:37.667055 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 10 23:28:37.668590 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 10 23:28:37.675666 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:28:37.677644 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 23:28:37.679861 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:28:37.682832 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 23:28:37.684259 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:28:37.684453 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:28:37.688969 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 10 23:28:37.692771 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 10 23:28:37.694974 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:28:37.695143 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:28:37.696840 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 10 23:28:37.698703 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 23:28:37.698858 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 23:28:37.699098 systemd-udevd[1354]: Using default interface naming scheme 'v255'.
Sep 10 23:28:37.700295 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 23:28:37.700504 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 23:28:37.708236 augenrules[1380]: No rules
Sep 10 23:28:37.710483 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 10 23:28:37.714685 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 10 23:28:37.715797 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 10 23:28:37.717475 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 23:28:37.718738 systemd[1]: Finished ensure-sysext.service.
Sep 10 23:28:37.724055 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:28:37.725830 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 23:28:37.730726 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 23:28:37.738200 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:28:37.741891 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 23:28:37.743107 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:28:37.743161 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:28:37.745662 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 23:28:37.748719 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 10 23:28:37.749622 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 10 23:28:37.749981 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 10 23:28:37.753331 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 23:28:37.753549 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 23:28:37.754821 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 23:28:37.754990 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 23:28:37.756298 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 23:28:37.756466 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 23:28:37.757913 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:28:37.758085 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:28:37.764221 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 23:28:37.764277 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 23:28:37.786881 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 10 23:28:37.826800 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 23:28:37.830037 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 10 23:28:37.855466 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 10 23:28:37.910758 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:28:37.911804 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 10 23:28:37.913153 systemd[1]: Reached target time-set.target - System Time Set.
Sep 10 23:28:37.922380 systemd-networkd[1424]: lo: Link UP
Sep 10 23:28:37.922389 systemd-networkd[1424]: lo: Gained carrier
Sep 10 23:28:37.923185 systemd-networkd[1424]: Enumeration completed
Sep 10 23:28:37.923262 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 23:28:37.924599 systemd-networkd[1424]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 23:28:37.924606 systemd-networkd[1424]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 23:28:37.925276 systemd-networkd[1424]: eth0: Link UP
Sep 10 23:28:37.925422 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 10 23:28:37.925567 systemd-networkd[1424]: eth0: Gained carrier
Sep 10 23:28:37.925625 systemd-networkd[1424]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 23:28:37.927413 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 10 23:28:37.932548 systemd-resolved[1351]: Positive Trust Anchors:
Sep 10 23:28:37.932574 systemd-resolved[1351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 23:28:37.932605 systemd-resolved[1351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 23:28:37.934477 systemd-networkd[1424]: eth0: DHCPv4 address 10.0.0.56/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 10 23:28:37.934959 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection.
Sep 10 23:28:37.935840 systemd-timesyncd[1425]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 10 23:28:37.935890 systemd-timesyncd[1425]: Initial clock synchronization to Wed 2025-09-10 23:28:37.739456 UTC.
Sep 10 23:28:37.939226 systemd-resolved[1351]: Defaulting to hostname 'linux'.
Sep 10 23:28:37.940637 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 23:28:37.941637 systemd[1]: Reached target network.target - Network.
Sep 10 23:28:37.942285 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 23:28:37.945854 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 10 23:28:37.976669 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:28:37.977756 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 10 23:28:37.978629 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 10 23:28:37.979526 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 10 23:28:37.980559 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 10 23:28:37.981407 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 10 23:28:37.982368 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 10 23:28:37.983351 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 10 23:28:37.983382 systemd[1]: Reached target paths.target - Path Units. Sep 10 23:28:37.984257 systemd[1]: Reached target timers.target - Timer Units. Sep 10 23:28:37.985772 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 10 23:28:37.987711 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 10 23:28:37.990129 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 10 23:28:37.991274 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 10 23:28:37.992304 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 10 23:28:37.998150 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 10 23:28:37.999263 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 10 23:28:38.000778 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 10 23:28:38.001628 systemd[1]: Reached target sockets.target - Socket Units. Sep 10 23:28:38.002310 systemd[1]: Reached target basic.target - Basic System. Sep 10 23:28:38.003074 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Sep 10 23:28:38.003105 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 10 23:28:38.003976 systemd[1]: Starting containerd.service - containerd container runtime... Sep 10 23:28:38.005646 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 10 23:28:38.007212 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 10 23:28:38.009222 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 10 23:28:38.010990 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 10 23:28:38.011865 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 10 23:28:38.012773 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 10 23:28:38.015011 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 10 23:28:38.017811 jq[1472]: false Sep 10 23:28:38.018593 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 10 23:28:38.020636 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 10 23:28:38.023457 extend-filesystems[1473]: Found /dev/vda6 Sep 10 23:28:38.025719 extend-filesystems[1473]: Found /dev/vda9 Sep 10 23:28:38.025323 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 10 23:28:38.027507 extend-filesystems[1473]: Checking size of /dev/vda9 Sep 10 23:28:38.027041 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 10 23:28:38.027610 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 10 23:28:38.028384 systemd[1]: Starting update-engine.service - Update Engine... 
Sep 10 23:28:38.031647 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 10 23:28:38.036795 extend-filesystems[1473]: Resized partition /dev/vda9 Sep 10 23:28:38.039285 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 10 23:28:38.040623 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 10 23:28:38.042277 jq[1494]: true Sep 10 23:28:38.040808 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 10 23:28:38.041034 systemd[1]: motdgen.service: Deactivated successfully. Sep 10 23:28:38.041174 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 10 23:28:38.044313 extend-filesystems[1499]: resize2fs 1.47.2 (1-Jan-2025) Sep 10 23:28:38.046838 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 10 23:28:38.047041 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 10 23:28:38.064211 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 10 23:28:38.064275 update_engine[1488]: I20250910 23:28:38.062629 1488 main.cc:92] Flatcar Update Engine starting Sep 10 23:28:38.068433 (ntainerd)[1502]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 10 23:28:38.071485 jq[1501]: true Sep 10 23:28:38.079852 tar[1500]: linux-arm64/LICENSE Sep 10 23:28:38.080070 tar[1500]: linux-arm64/helm Sep 10 23:28:38.087254 dbus-daemon[1470]: [system] SELinux support is enabled Sep 10 23:28:38.088873 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 10 23:28:38.093877 update_engine[1488]: I20250910 23:28:38.093775 1488 update_check_scheduler.cc:74] Next update check in 8m51s Sep 10 23:28:38.095605 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 10 23:28:38.096151 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 10 23:28:38.097652 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 10 23:28:38.097673 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 10 23:28:38.099021 systemd[1]: Started update-engine.service - Update Engine. Sep 10 23:28:38.105006 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 10 23:28:38.109443 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 10 23:28:38.127727 systemd-logind[1485]: Watching system buttons on /dev/input/event0 (Power Button) Sep 10 23:28:38.128485 systemd-logind[1485]: New seat seat0. Sep 10 23:28:38.129148 extend-filesystems[1499]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 10 23:28:38.129148 extend-filesystems[1499]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 10 23:28:38.129148 extend-filesystems[1499]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 10 23:28:38.139189 extend-filesystems[1473]: Resized filesystem in /dev/vda9 Sep 10 23:28:38.131979 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 10 23:28:38.141489 bash[1530]: Updated "/home/core/.ssh/authorized_keys" Sep 10 23:28:38.133471 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 10 23:28:38.134487 systemd[1]: Started systemd-logind.service - User Login Management. 
Sep 10 23:28:38.137993 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 10 23:28:38.141051 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 10 23:28:38.171438 locksmithd[1516]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 23:28:38.229451 containerd[1502]: time="2025-09-10T23:28:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 10 23:28:38.229451 containerd[1502]: time="2025-09-10T23:28:38.228176385Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 10 23:28:38.238447 containerd[1502]: time="2025-09-10T23:28:38.237840314Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.302µs" Sep 10 23:28:38.238447 containerd[1502]: time="2025-09-10T23:28:38.237881907Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 10 23:28:38.238447 containerd[1502]: time="2025-09-10T23:28:38.237899661Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 10 23:28:38.238447 containerd[1502]: time="2025-09-10T23:28:38.238043171Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 10 23:28:38.238447 containerd[1502]: time="2025-09-10T23:28:38.238059636Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 10 23:28:38.238447 containerd[1502]: time="2025-09-10T23:28:38.238082852Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 23:28:38.238447 containerd[1502]: time="2025-09-10T23:28:38.238127958Z" level=info 
msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 23:28:38.238447 containerd[1502]: time="2025-09-10T23:28:38.238138024Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 23:28:38.238447 containerd[1502]: time="2025-09-10T23:28:38.238352860Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 23:28:38.238447 containerd[1502]: time="2025-09-10T23:28:38.238367180Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 23:28:38.238447 containerd[1502]: time="2025-09-10T23:28:38.238378885Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 23:28:38.238447 containerd[1502]: time="2025-09-10T23:28:38.238386260Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 10 23:28:38.238804 containerd[1502]: time="2025-09-10T23:28:38.238782453Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 10 23:28:38.239072 containerd[1502]: time="2025-09-10T23:28:38.239049105Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 23:28:38.239168 containerd[1502]: time="2025-09-10T23:28:38.239152426Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 23:28:38.239215 containerd[1502]: 
time="2025-09-10T23:28:38.239204164Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 10 23:28:38.239309 containerd[1502]: time="2025-09-10T23:28:38.239294297Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 10 23:28:38.239682 containerd[1502]: time="2025-09-10T23:28:38.239650106Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 10 23:28:38.239808 containerd[1502]: time="2025-09-10T23:28:38.239790923Z" level=info msg="metadata content store policy set" policy=shared Sep 10 23:28:38.243163 containerd[1502]: time="2025-09-10T23:28:38.243131486Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 10 23:28:38.243308 containerd[1502]: time="2025-09-10T23:28:38.243291306Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 10 23:28:38.243367 containerd[1502]: time="2025-09-10T23:28:38.243354516Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 10 23:28:38.243448 containerd[1502]: time="2025-09-10T23:28:38.243432748Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 10 23:28:38.243512 containerd[1502]: time="2025-09-10T23:28:38.243499196Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 10 23:28:38.243563 containerd[1502]: time="2025-09-10T23:28:38.243551286Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 10 23:28:38.243632 containerd[1502]: time="2025-09-10T23:28:38.243617851Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 10 23:28:38.243725 containerd[1502]: time="2025-09-10T23:28:38.243689333Z" level=info msg="loading 
plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 10 23:28:38.243785 containerd[1502]: time="2025-09-10T23:28:38.243773105Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 10 23:28:38.243834 containerd[1502]: time="2025-09-10T23:28:38.243822151Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 10 23:28:38.243880 containerd[1502]: time="2025-09-10T23:28:38.243868896Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 10 23:28:38.243935 containerd[1502]: time="2025-09-10T23:28:38.243917747Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 10 23:28:38.244090 containerd[1502]: time="2025-09-10T23:28:38.244071557Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 10 23:28:38.244157 containerd[1502]: time="2025-09-10T23:28:38.244143585Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 10 23:28:38.244224 containerd[1502]: time="2025-09-10T23:28:38.244211790Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 10 23:28:38.244273 containerd[1502]: time="2025-09-10T23:28:38.244262201Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 10 23:28:38.244320 containerd[1502]: time="2025-09-10T23:28:38.244309180Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 10 23:28:38.244374 containerd[1502]: time="2025-09-10T23:28:38.244362830Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 10 23:28:38.244471 containerd[1502]: time="2025-09-10T23:28:38.244456162Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection 
type=io.containerd.grpc.v1 Sep 10 23:28:38.244543 containerd[1502]: time="2025-09-10T23:28:38.244530102Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 10 23:28:38.244610 containerd[1502]: time="2025-09-10T23:28:38.244597019Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 10 23:28:38.244656 containerd[1502]: time="2025-09-10T23:28:38.244645518Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 10 23:28:38.244704 containerd[1502]: time="2025-09-10T23:28:38.244693277Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 10 23:28:38.244948 containerd[1502]: time="2025-09-10T23:28:38.244933045Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 10 23:28:38.245012 containerd[1502]: time="2025-09-10T23:28:38.245000040Z" level=info msg="Start snapshots syncer" Sep 10 23:28:38.245085 containerd[1502]: time="2025-09-10T23:28:38.245072146Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 10 23:28:38.245372 containerd[1502]: time="2025-09-10T23:28:38.245337237Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 10 23:28:38.245570 containerd[1502]: time="2025-09-10T23:28:38.245549341Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 10 23:28:38.245702 containerd[1502]: time="2025-09-10T23:28:38.245686062Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 10 23:28:38.245866 containerd[1502]: time="2025-09-10T23:28:38.245845725Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 10 23:28:38.245959 containerd[1502]: time="2025-09-10T23:28:38.245945066Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 10 23:28:38.246048 containerd[1502]: time="2025-09-10T23:28:38.246034262Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 10 23:28:38.246103 containerd[1502]: time="2025-09-10T23:28:38.246090644Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 10 23:28:38.246155 containerd[1502]: time="2025-09-10T23:28:38.246143982Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 10 23:28:38.246203 containerd[1502]: time="2025-09-10T23:28:38.246191975Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 10 23:28:38.246250 containerd[1502]: time="2025-09-10T23:28:38.246240046Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 10 23:28:38.246326 containerd[1502]: time="2025-09-10T23:28:38.246313049Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 10 23:28:38.246375 containerd[1502]: time="2025-09-10T23:28:38.246364553Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 10 23:28:38.246446 containerd[1502]: time="2025-09-10T23:28:38.246413170Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 10 23:28:38.246548 containerd[1502]: time="2025-09-10T23:28:38.246531513Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 23:28:38.246621 containerd[1502]: time="2025-09-10T23:28:38.246606858Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 23:28:38.246666 containerd[1502]: time="2025-09-10T23:28:38.246655046Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 23:28:38.246730 containerd[1502]: time="2025-09-10T23:28:38.246716617Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 23:28:38.246773 containerd[1502]: time="2025-09-10T23:28:38.246761683Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 10 23:28:38.246818 containerd[1502]: time="2025-09-10T23:28:38.246807062Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 10 23:28:38.246865 containerd[1502]: time="2025-09-10T23:28:38.246853532Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 10 23:28:38.246990 containerd[1502]: time="2025-09-10T23:28:38.246976401Z" level=info msg="runtime interface created" Sep 10 23:28:38.247029 containerd[1502]: time="2025-09-10T23:28:38.247019829Z" level=info msg="created NRI interface" Sep 10 23:28:38.247076 containerd[1502]: time="2025-09-10T23:28:38.247064700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 10 23:28:38.247132 containerd[1502]: time="2025-09-10T23:28:38.247120926Z" level=info msg="Connect containerd service" Sep 10 23:28:38.247201 containerd[1502]: time="2025-09-10T23:28:38.247189013Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 10 23:28:38.248062 
containerd[1502]: time="2025-09-10T23:28:38.248036025Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 23:28:38.320064 containerd[1502]: time="2025-09-10T23:28:38.319973059Z" level=info msg="Start subscribing containerd event" Sep 10 23:28:38.320181 containerd[1502]: time="2025-09-10T23:28:38.320068694Z" level=info msg="Start recovering state" Sep 10 23:28:38.320201 containerd[1502]: time="2025-09-10T23:28:38.320191758Z" level=info msg="Start event monitor" Sep 10 23:28:38.320218 containerd[1502]: time="2025-09-10T23:28:38.320205453Z" level=info msg="Start cni network conf syncer for default" Sep 10 23:28:38.320257 containerd[1502]: time="2025-09-10T23:28:38.320217471Z" level=info msg="Start streaming server" Sep 10 23:28:38.320257 containerd[1502]: time="2025-09-10T23:28:38.320225782Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 10 23:28:38.320257 containerd[1502]: time="2025-09-10T23:28:38.320233195Z" level=info msg="runtime interface starting up..." Sep 10 23:28:38.320257 containerd[1502]: time="2025-09-10T23:28:38.320239009Z" level=info msg="starting plugins..." Sep 10 23:28:38.320257 containerd[1502]: time="2025-09-10T23:28:38.320255397Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 10 23:28:38.321793 containerd[1502]: time="2025-09-10T23:28:38.319997212Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 10 23:28:38.321793 containerd[1502]: time="2025-09-10T23:28:38.320399648Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 10 23:28:38.321793 containerd[1502]: time="2025-09-10T23:28:38.320605743Z" level=info msg="containerd successfully booted in 0.093397s" Sep 10 23:28:38.320714 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 10 23:28:38.385276 tar[1500]: linux-arm64/README.md Sep 10 23:28:38.390973 sshd_keygen[1495]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 10 23:28:38.406058 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 10 23:28:38.412486 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 10 23:28:38.414915 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 10 23:28:38.430318 systemd[1]: issuegen.service: Deactivated successfully. Sep 10 23:28:38.430571 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 10 23:28:38.433121 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 10 23:28:38.454494 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 10 23:28:38.457423 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 10 23:28:38.459595 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 10 23:28:38.460716 systemd[1]: Reached target getty.target - Login Prompts. Sep 10 23:28:39.524611 systemd-networkd[1424]: eth0: Gained IPv6LL Sep 10 23:28:39.526733 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 10 23:28:39.528226 systemd[1]: Reached target network-online.target - Network is Online. Sep 10 23:28:39.531680 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 10 23:28:39.533821 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:28:39.546981 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 10 23:28:39.559464 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 10 23:28:39.559674 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 10 23:28:39.561226 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Sep 10 23:28:39.566344 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 10 23:28:40.129195 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:28:40.130621 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 10 23:28:40.132364 (kubelet)[1601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 23:28:40.135554 systemd[1]: Startup finished in 2.000s (kernel) + 5.803s (initrd) + 3.783s (userspace) = 11.587s. Sep 10 23:28:40.480293 kubelet[1601]: E0910 23:28:40.480188 1601 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 23:28:40.482825 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 23:28:40.482958 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 23:28:40.483242 systemd[1]: kubelet.service: Consumed 755ms CPU time, 255.3M memory peak. Sep 10 23:28:43.661979 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 10 23:28:43.663097 systemd[1]: Started sshd@0-10.0.0.56:22-10.0.0.1:54324.service - OpenSSH per-connection server daemon (10.0.0.1:54324). Sep 10 23:28:43.731806 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 54324 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk Sep 10 23:28:43.733757 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:28:43.743143 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 10 23:28:43.744246 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Sep 10 23:28:43.746007 systemd-logind[1485]: New session 1 of user core. Sep 10 23:28:43.775029 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 10 23:28:43.777758 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 10 23:28:43.796679 (systemd)[1619]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 23:28:43.799113 systemd-logind[1485]: New session c1 of user core. Sep 10 23:28:43.913734 systemd[1619]: Queued start job for default target default.target. Sep 10 23:28:43.929539 systemd[1619]: Created slice app.slice - User Application Slice. Sep 10 23:28:43.929571 systemd[1619]: Reached target paths.target - Paths. Sep 10 23:28:43.929625 systemd[1619]: Reached target timers.target - Timers. Sep 10 23:28:43.930883 systemd[1619]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 10 23:28:43.942243 systemd[1619]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 10 23:28:43.942499 systemd[1619]: Reached target sockets.target - Sockets. Sep 10 23:28:43.942648 systemd[1619]: Reached target basic.target - Basic System. Sep 10 23:28:43.942758 systemd[1619]: Reached target default.target - Main User Target. Sep 10 23:28:43.942818 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 10 23:28:43.942908 systemd[1619]: Startup finished in 137ms. Sep 10 23:28:43.944204 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 10 23:28:44.005654 systemd[1]: Started sshd@1-10.0.0.56:22-10.0.0.1:54336.service - OpenSSH per-connection server daemon (10.0.0.1:54336). Sep 10 23:28:44.053470 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 54336 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk Sep 10 23:28:44.055004 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:28:44.059597 systemd-logind[1485]: New session 2 of user core. 
Sep 10 23:28:44.076629 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 10 23:28:44.127543 sshd[1633]: Connection closed by 10.0.0.1 port 54336
Sep 10 23:28:44.128010 sshd-session[1630]: pam_unix(sshd:session): session closed for user core
Sep 10 23:28:44.138510 systemd[1]: sshd@1-10.0.0.56:22-10.0.0.1:54336.service: Deactivated successfully.
Sep 10 23:28:44.140978 systemd[1]: session-2.scope: Deactivated successfully.
Sep 10 23:28:44.143022 systemd-logind[1485]: Session 2 logged out. Waiting for processes to exit.
Sep 10 23:28:44.145058 systemd[1]: Started sshd@2-10.0.0.56:22-10.0.0.1:54352.service - OpenSSH per-connection server daemon (10.0.0.1:54352).
Sep 10 23:28:44.146080 systemd-logind[1485]: Removed session 2.
Sep 10 23:28:44.196631 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 54352 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:28:44.197866 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:28:44.201961 systemd-logind[1485]: New session 3 of user core.
Sep 10 23:28:44.219691 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 10 23:28:44.266946 sshd[1643]: Connection closed by 10.0.0.1 port 54352
Sep 10 23:28:44.267383 sshd-session[1639]: pam_unix(sshd:session): session closed for user core
Sep 10 23:28:44.276502 systemd[1]: sshd@2-10.0.0.56:22-10.0.0.1:54352.service: Deactivated successfully.
Sep 10 23:28:44.278829 systemd[1]: session-3.scope: Deactivated successfully.
Sep 10 23:28:44.281606 systemd-logind[1485]: Session 3 logged out. Waiting for processes to exit.
Sep 10 23:28:44.282850 systemd[1]: Started sshd@3-10.0.0.56:22-10.0.0.1:54364.service - OpenSSH per-connection server daemon (10.0.0.1:54364).
Sep 10 23:28:44.283721 systemd-logind[1485]: Removed session 3.
Sep 10 23:28:44.344699 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 54364 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:28:44.346083 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:28:44.350114 systemd-logind[1485]: New session 4 of user core.
Sep 10 23:28:44.358602 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 10 23:28:44.409516 sshd[1652]: Connection closed by 10.0.0.1 port 54364
Sep 10 23:28:44.409832 sshd-session[1649]: pam_unix(sshd:session): session closed for user core
Sep 10 23:28:44.420292 systemd[1]: sshd@3-10.0.0.56:22-10.0.0.1:54364.service: Deactivated successfully.
Sep 10 23:28:44.421933 systemd[1]: session-4.scope: Deactivated successfully.
Sep 10 23:28:44.424042 systemd-logind[1485]: Session 4 logged out. Waiting for processes to exit.
Sep 10 23:28:44.426113 systemd[1]: Started sshd@4-10.0.0.56:22-10.0.0.1:54380.service - OpenSSH per-connection server daemon (10.0.0.1:54380).
Sep 10 23:28:44.426742 systemd-logind[1485]: Removed session 4.
Sep 10 23:28:44.479116 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 54380 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:28:44.480525 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:28:44.484818 systemd-logind[1485]: New session 5 of user core.
Sep 10 23:28:44.492569 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 10 23:28:44.549405 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 10 23:28:44.549723 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:28:44.567254 sudo[1662]: pam_unix(sudo:session): session closed for user root
Sep 10 23:28:44.568533 sshd[1661]: Connection closed by 10.0.0.1 port 54380
Sep 10 23:28:44.568972 sshd-session[1658]: pam_unix(sshd:session): session closed for user core
Sep 10 23:28:44.584207 systemd[1]: sshd@4-10.0.0.56:22-10.0.0.1:54380.service: Deactivated successfully.
Sep 10 23:28:44.586087 systemd[1]: session-5.scope: Deactivated successfully.
Sep 10 23:28:44.586907 systemd-logind[1485]: Session 5 logged out. Waiting for processes to exit.
Sep 10 23:28:44.590335 systemd[1]: Started sshd@5-10.0.0.56:22-10.0.0.1:54386.service - OpenSSH per-connection server daemon (10.0.0.1:54386).
Sep 10 23:28:44.591177 systemd-logind[1485]: Removed session 5.
Sep 10 23:28:44.638581 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 54386 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:28:44.639630 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:28:44.644006 systemd-logind[1485]: New session 6 of user core.
Sep 10 23:28:44.649567 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 10 23:28:44.699674 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 10 23:28:44.699988 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:28:44.756156 sudo[1673]: pam_unix(sudo:session): session closed for user root
Sep 10 23:28:44.761452 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 10 23:28:44.761809 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:28:44.770777 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 10 23:28:44.816065 augenrules[1695]: No rules
Sep 10 23:28:44.816667 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 10 23:28:44.818461 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 10 23:28:44.819601 sudo[1672]: pam_unix(sudo:session): session closed for user root
Sep 10 23:28:44.820817 sshd[1671]: Connection closed by 10.0.0.1 port 54386
Sep 10 23:28:44.821173 sshd-session[1668]: pam_unix(sshd:session): session closed for user core
Sep 10 23:28:44.832128 systemd[1]: sshd@5-10.0.0.56:22-10.0.0.1:54386.service: Deactivated successfully.
Sep 10 23:28:44.833737 systemd[1]: session-6.scope: Deactivated successfully.
Sep 10 23:28:44.834513 systemd-logind[1485]: Session 6 logged out. Waiting for processes to exit.
Sep 10 23:28:44.836788 systemd[1]: Started sshd@6-10.0.0.56:22-10.0.0.1:54402.service - OpenSSH per-connection server daemon (10.0.0.1:54402).
Sep 10 23:28:44.837380 systemd-logind[1485]: Removed session 6.
Sep 10 23:28:44.888372 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 54402 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:28:44.889357 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:28:44.893013 systemd-logind[1485]: New session 7 of user core.
Sep 10 23:28:44.900568 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 10 23:28:44.950917 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 10 23:28:44.951172 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:28:45.218451 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 10 23:28:45.239735 (dockerd)[1728]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 10 23:28:45.434528 dockerd[1728]: time="2025-09-10T23:28:45.434454929Z" level=info msg="Starting up"
Sep 10 23:28:45.435454 dockerd[1728]: time="2025-09-10T23:28:45.435411977Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 10 23:28:45.445699 dockerd[1728]: time="2025-09-10T23:28:45.445660140Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 10 23:28:45.780064 dockerd[1728]: time="2025-09-10T23:28:45.779993685Z" level=info msg="Loading containers: start."
Sep 10 23:28:45.787467 kernel: Initializing XFRM netlink socket
Sep 10 23:28:46.018202 systemd-networkd[1424]: docker0: Link UP
Sep 10 23:28:46.061206 dockerd[1728]: time="2025-09-10T23:28:46.061088199Z" level=info msg="Loading containers: done."
Sep 10 23:28:46.072808 dockerd[1728]: time="2025-09-10T23:28:46.072769307Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 10 23:28:46.072924 dockerd[1728]: time="2025-09-10T23:28:46.072840779Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 10 23:28:46.072924 dockerd[1728]: time="2025-09-10T23:28:46.072911538Z" level=info msg="Initializing buildkit"
Sep 10 23:28:46.093459 dockerd[1728]: time="2025-09-10T23:28:46.093407568Z" level=info msg="Completed buildkit initialization"
Sep 10 23:28:46.098008 dockerd[1728]: time="2025-09-10T23:28:46.097970923Z" level=info msg="Daemon has completed initialization"
Sep 10 23:28:46.098252 dockerd[1728]: time="2025-09-10T23:28:46.098036803Z" level=info msg="API listen on /run/docker.sock"
Sep 10 23:28:46.098150 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 10 23:28:46.591481 containerd[1502]: time="2025-09-10T23:28:46.591438953Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Sep 10 23:28:47.232568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2384995163.mount: Deactivated successfully.
Sep 10 23:28:48.115250 containerd[1502]: time="2025-09-10T23:28:48.115193259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:28:48.115638 containerd[1502]: time="2025-09-10T23:28:48.115600374Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363687"
Sep 10 23:28:48.116483 containerd[1502]: time="2025-09-10T23:28:48.116453153Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:28:48.119518 containerd[1502]: time="2025-09-10T23:28:48.119483781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:28:48.120378 containerd[1502]: time="2025-09-10T23:28:48.120344071Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.528859925s"
Sep 10 23:28:48.120412 containerd[1502]: time="2025-09-10T23:28:48.120386675Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\""
Sep 10 23:28:48.121002 containerd[1502]: time="2025-09-10T23:28:48.120945563Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Sep 10 23:28:49.189320 containerd[1502]: time="2025-09-10T23:28:49.189262260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:28:49.190159 containerd[1502]: time="2025-09-10T23:28:49.190127783Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531202"
Sep 10 23:28:49.190674 containerd[1502]: time="2025-09-10T23:28:49.190650534Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:28:49.193965 containerd[1502]: time="2025-09-10T23:28:49.192930104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:28:49.194013 containerd[1502]: time="2025-09-10T23:28:49.193961326Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.072985393s"
Sep 10 23:28:49.194013 containerd[1502]: time="2025-09-10T23:28:49.193992350Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\""
Sep 10 23:28:49.194625 containerd[1502]: time="2025-09-10T23:28:49.194358072Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Sep 10 23:28:50.193375 containerd[1502]: time="2025-09-10T23:28:50.193312630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:28:50.193820 containerd[1502]: time="2025-09-10T23:28:50.193773938Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484326"
Sep 10 23:28:50.194775 containerd[1502]: time="2025-09-10T23:28:50.194746189Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:28:50.197542 containerd[1502]: time="2025-09-10T23:28:50.197510578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:28:50.198442 containerd[1502]: time="2025-09-10T23:28:50.198393114Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.004004291s"
Sep 10 23:28:50.198442 containerd[1502]: time="2025-09-10T23:28:50.198421692Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\""
Sep 10 23:28:50.199049 containerd[1502]: time="2025-09-10T23:28:50.198852234Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Sep 10 23:28:50.734137 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 10 23:28:50.741645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:28:50.867515 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:28:50.872059 (kubelet)[2022]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 10 23:28:51.093267 kubelet[2022]: E0910 23:28:51.092913 2022 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 10 23:28:51.096109 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 23:28:51.096222 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 10 23:28:51.096777 systemd[1]: kubelet.service: Consumed 314ms CPU time, 111.5M memory peak.
Sep 10 23:28:51.323565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1231627141.mount: Deactivated successfully.
Sep 10 23:28:51.701251 containerd[1502]: time="2025-09-10T23:28:51.701199434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:28:51.701757 containerd[1502]: time="2025-09-10T23:28:51.701608019Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417819"
Sep 10 23:28:51.702510 containerd[1502]: time="2025-09-10T23:28:51.702484611Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:28:51.704237 containerd[1502]: time="2025-09-10T23:28:51.704193309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:28:51.704993 containerd[1502]: time="2025-09-10T23:28:51.704956714Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.506074105s"
Sep 10 23:28:51.705029 containerd[1502]: time="2025-09-10T23:28:51.704990527Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\""
Sep 10 23:28:51.705655 containerd[1502]: time="2025-09-10T23:28:51.705581201Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 10 23:28:52.229346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount113998835.mount: Deactivated successfully.
Sep 10 23:28:52.930840 containerd[1502]: time="2025-09-10T23:28:52.930790269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:28:52.931792 containerd[1502]: time="2025-09-10T23:28:52.931518980Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 10 23:28:52.932526 containerd[1502]: time="2025-09-10T23:28:52.932496707Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:28:52.935095 containerd[1502]: time="2025-09-10T23:28:52.935061165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:28:52.936395 containerd[1502]: time="2025-09-10T23:28:52.936266427Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.230640216s"
Sep 10 23:28:52.936395 containerd[1502]: time="2025-09-10T23:28:52.936303407Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 10 23:28:52.937047 containerd[1502]: time="2025-09-10T23:28:52.937010760Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 10 23:28:53.362477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2582499004.mount: Deactivated successfully.
Sep 10 23:28:53.366572 containerd[1502]: time="2025-09-10T23:28:53.366523510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 23:28:53.367033 containerd[1502]: time="2025-09-10T23:28:53.366999090Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 10 23:28:53.368273 containerd[1502]: time="2025-09-10T23:28:53.368241682Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 23:28:53.370458 containerd[1502]: time="2025-09-10T23:28:53.369992147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 23:28:53.371465 containerd[1502]: time="2025-09-10T23:28:53.371439299Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 434.378478ms"
Sep 10 23:28:53.371563 containerd[1502]: time="2025-09-10T23:28:53.371548736Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 10 23:28:53.372151 containerd[1502]: time="2025-09-10T23:28:53.372124902Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 10 23:28:53.867911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount635789636.mount: Deactivated successfully.
Sep 10 23:28:55.389353 containerd[1502]: time="2025-09-10T23:28:55.389297108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:28:55.390659 containerd[1502]: time="2025-09-10T23:28:55.390348396Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167"
Sep 10 23:28:55.391455 containerd[1502]: time="2025-09-10T23:28:55.391417918Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:28:55.394155 containerd[1502]: time="2025-09-10T23:28:55.394125955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:28:55.395441 containerd[1502]: time="2025-09-10T23:28:55.395252053Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.023092739s"
Sep 10 23:28:55.395441 containerd[1502]: time="2025-09-10T23:28:55.395283414Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Sep 10 23:28:59.905516 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:28:59.905653 systemd[1]: kubelet.service: Consumed 314ms CPU time, 111.5M memory peak.
Sep 10 23:28:59.907535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:28:59.926936 systemd[1]: Reload requested from client PID 2177 ('systemctl') (unit session-7.scope)...
Sep 10 23:28:59.926951 systemd[1]: Reloading...
Sep 10 23:28:59.995487 zram_generator::config[2223]: No configuration found.
Sep 10 23:29:00.175398 systemd[1]: Reloading finished in 248 ms.
Sep 10 23:29:00.223282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:29:00.225277 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:29:00.226870 systemd[1]: kubelet.service: Deactivated successfully.
Sep 10 23:29:00.227156 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:29:00.227240 systemd[1]: kubelet.service: Consumed 91ms CPU time, 95.1M memory peak.
Sep 10 23:29:00.228783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:29:00.374346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:29:00.377728 (kubelet)[2266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 10 23:29:00.409462 kubelet[2266]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 23:29:00.409462 kubelet[2266]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 10 23:29:00.409462 kubelet[2266]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 23:29:00.409764 kubelet[2266]: I0910 23:29:00.409528 2266 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 10 23:29:01.072939 kubelet[2266]: I0910 23:29:01.072891 2266 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 10 23:29:01.072939 kubelet[2266]: I0910 23:29:01.072926 2266 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 10 23:29:01.073218 kubelet[2266]: I0910 23:29:01.073191 2266 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 10 23:29:01.093727 kubelet[2266]: E0910 23:29:01.093672 2266 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.56:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError"
Sep 10 23:29:01.096189 kubelet[2266]: I0910 23:29:01.096063 2266 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 10 23:29:01.103512 kubelet[2266]: I0910 23:29:01.103486 2266 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 10 23:29:01.106796 kubelet[2266]: I0910 23:29:01.106776 2266 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 10 23:29:01.107463 kubelet[2266]: I0910 23:29:01.107400 2266 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 10 23:29:01.107648 kubelet[2266]: I0910 23:29:01.107456 2266 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 10 23:29:01.107735 kubelet[2266]: I0910 23:29:01.107712 2266 topology_manager.go:138] "Creating topology manager with none policy"
Sep 10 23:29:01.107735 kubelet[2266]: I0910 23:29:01.107722 2266 container_manager_linux.go:304] "Creating device plugin manager"
Sep 10 23:29:01.107925 kubelet[2266]: I0910 23:29:01.107898 2266 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 23:29:01.110311 kubelet[2266]: I0910 23:29:01.110195 2266 kubelet.go:446] "Attempting to sync node with API server"
Sep 10 23:29:01.110311 kubelet[2266]: I0910 23:29:01.110218 2266 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 10 23:29:01.110311 kubelet[2266]: I0910 23:29:01.110262 2266 kubelet.go:352] "Adding apiserver pod source"
Sep 10 23:29:01.110311 kubelet[2266]: I0910 23:29:01.110274 2266 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 10 23:29:01.112283 kubelet[2266]: W0910 23:29:01.112229 2266 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused
Sep 10 23:29:01.112403 kubelet[2266]: E0910 23:29:01.112383 2266 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError"
Sep 10 23:29:01.113154 kubelet[2266]: W0910 23:29:01.113118 2266 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.56:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused
Sep 10 23:29:01.113247 kubelet[2266]: E0910 23:29:01.113231 2266 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.56:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError"
Sep 10 23:29:01.113594 kubelet[2266]: I0910 23:29:01.113578 2266 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 10 23:29:01.114234 kubelet[2266]: I0910 23:29:01.114213 2266 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 10 23:29:01.114470 kubelet[2266]: W0910 23:29:01.114456 2266 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 10 23:29:01.115366 kubelet[2266]: I0910 23:29:01.115342 2266 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 10 23:29:01.115497 kubelet[2266]: I0910 23:29:01.115484 2266 server.go:1287] "Started kubelet" Sep 10 23:29:01.115749 kubelet[2266]: I0910 23:29:01.115708 2266 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 23:29:01.116468 kubelet[2266]: I0910 23:29:01.116405 2266 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 23:29:01.116816 kubelet[2266]: I0910 23:29:01.116795 2266 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 23:29:01.116919 kubelet[2266]: I0910 23:29:01.116508 2266 server.go:479] "Adding debug handlers to kubelet server" Sep 10 23:29:01.117871 kubelet[2266]: I0910 23:29:01.117845 2266 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 23:29:01.118289 kubelet[2266]: E0910 23:29:01.118082 2266 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.56:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.56:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18640fa18d42d179 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 23:29:01.115453817 +0000 UTC m=+0.734846311,LastTimestamp:2025-09-10 23:29:01.115453817 +0000 UTC m=+0.734846311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 23:29:01.119338 kubelet[2266]: I0910 23:29:01.119315 2266 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 23:29:01.119999 kubelet[2266]: E0910 23:29:01.119969 2266 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 23:29:01.119999 kubelet[2266]: I0910 23:29:01.119982 2266 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 10 23:29:01.120080 kubelet[2266]: I0910 23:29:01.120072 2266 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 10 23:29:01.120249 kubelet[2266]: I0910 23:29:01.120114 2266 reconciler.go:26] "Reconciler: start to sync state" Sep 10 23:29:01.120323 kubelet[2266]: E0910 23:29:01.120277 2266 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="200ms" Sep 10 23:29:01.120660 kubelet[2266]: W0910 23:29:01.120625 2266 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Sep 10 23:29:01.120820 kubelet[2266]: E0910 23:29:01.120757 2266 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" Sep 10 23:29:01.120879 kubelet[2266]: I0910 23:29:01.120856 2266 factory.go:221] Registration of the systemd container factory successfully Sep 10 23:29:01.120959 kubelet[2266]: I0910 23:29:01.120940 2266 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file 
or directory Sep 10 23:29:01.123445 kubelet[2266]: E0910 23:29:01.122172 2266 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 23:29:01.124026 kubelet[2266]: I0910 23:29:01.124004 2266 factory.go:221] Registration of the containerd container factory successfully Sep 10 23:29:01.135068 kubelet[2266]: I0910 23:29:01.135042 2266 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 10 23:29:01.135165 kubelet[2266]: I0910 23:29:01.135154 2266 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 10 23:29:01.135352 kubelet[2266]: I0910 23:29:01.135341 2266 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:29:01.136646 kubelet[2266]: I0910 23:29:01.136614 2266 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 23:29:01.137724 kubelet[2266]: I0910 23:29:01.137700 2266 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 10 23:29:01.137772 kubelet[2266]: I0910 23:29:01.137730 2266 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 10 23:29:01.137772 kubelet[2266]: I0910 23:29:01.137750 2266 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 10 23:29:01.137772 kubelet[2266]: I0910 23:29:01.137757 2266 kubelet.go:2382] "Starting kubelet main sync loop" Sep 10 23:29:01.137837 kubelet[2266]: E0910 23:29:01.137795 2266 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 23:29:01.138304 kubelet[2266]: W0910 23:29:01.138268 2266 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Sep 10 23:29:01.138354 kubelet[2266]: E0910 23:29:01.138316 2266 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" Sep 10 23:29:01.171850 kubelet[2266]: I0910 23:29:01.171761 2266 policy_none.go:49] "None policy: Start" Sep 10 23:29:01.171850 kubelet[2266]: I0910 23:29:01.171793 2266 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 10 23:29:01.171850 kubelet[2266]: I0910 23:29:01.171806 2266 state_mem.go:35] "Initializing new in-memory state store" Sep 10 23:29:01.191223 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 10 23:29:01.201085 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 10 23:29:01.204133 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 10 23:29:01.221026 kubelet[2266]: E0910 23:29:01.220998 2266 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 23:29:01.222196 kubelet[2266]: I0910 23:29:01.222176 2266 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 23:29:01.222701 kubelet[2266]: I0910 23:29:01.222388 2266 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 23:29:01.222928 kubelet[2266]: I0910 23:29:01.222837 2266 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 23:29:01.223109 kubelet[2266]: I0910 23:29:01.223088 2266 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 23:29:01.223929 kubelet[2266]: E0910 23:29:01.223904 2266 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 10 23:29:01.223986 kubelet[2266]: E0910 23:29:01.223949 2266 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 10 23:29:01.245186 systemd[1]: Created slice kubepods-burstable-pod205fdd2dd5d57347f74615f3efa4c3db.slice - libcontainer container kubepods-burstable-pod205fdd2dd5d57347f74615f3efa4c3db.slice. Sep 10 23:29:01.261125 kubelet[2266]: E0910 23:29:01.261090 2266 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:29:01.262726 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. 
Sep 10 23:29:01.279467 kubelet[2266]: E0910 23:29:01.279415 2266 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:29:01.281681 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. Sep 10 23:29:01.283222 kubelet[2266]: E0910 23:29:01.283202 2266 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:29:01.320889 kubelet[2266]: E0910 23:29:01.320815 2266 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="400ms" Sep 10 23:29:01.322088 kubelet[2266]: I0910 23:29:01.322038 2266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/205fdd2dd5d57347f74615f3efa4c3db-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"205fdd2dd5d57347f74615f3efa4c3db\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:29:01.322225 kubelet[2266]: I0910 23:29:01.322074 2266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:29:01.322225 kubelet[2266]: I0910 23:29:01.322192 2266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:29:01.322409 kubelet[2266]: I0910 23:29:01.322210 2266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 10 23:29:01.322409 kubelet[2266]: I0910 23:29:01.322376 2266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/205fdd2dd5d57347f74615f3efa4c3db-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"205fdd2dd5d57347f74615f3efa4c3db\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:29:01.322569 kubelet[2266]: I0910 23:29:01.322393 2266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/205fdd2dd5d57347f74615f3efa4c3db-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"205fdd2dd5d57347f74615f3efa4c3db\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:29:01.322636 kubelet[2266]: I0910 23:29:01.322622 2266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:29:01.322789 kubelet[2266]: I0910 23:29:01.322723 2266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:29:01.322789 kubelet[2266]: I0910 23:29:01.322747 2266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:29:01.323961 kubelet[2266]: I0910 23:29:01.323867 2266 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 23:29:01.324314 kubelet[2266]: E0910 23:29:01.324280 2266 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost" Sep 10 23:29:01.526279 kubelet[2266]: I0910 23:29:01.526247 2266 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 23:29:01.526805 kubelet[2266]: E0910 23:29:01.526578 2266 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost" Sep 10 23:29:01.562025 kubelet[2266]: E0910 23:29:01.561997 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:29:01.562591 containerd[1502]: time="2025-09-10T23:29:01.562546279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:205fdd2dd5d57347f74615f3efa4c3db,Namespace:kube-system,Attempt:0,}" Sep 10 23:29:01.578847 containerd[1502]: time="2025-09-10T23:29:01.578761130Z" level=info 
msg="connecting to shim d6e14b285988ee76dc596f191fd9a00960e9169e31599a4b6fde4143d68e9ded" address="unix:///run/containerd/s/0fabbe5d2624bfbdfc5844b67d1f57f0967cd258096028e589b6ebec1b19c6c6" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:29:01.579947 kubelet[2266]: E0910 23:29:01.579926 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:29:01.580597 containerd[1502]: time="2025-09-10T23:29:01.580561998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}" Sep 10 23:29:01.586773 kubelet[2266]: E0910 23:29:01.586743 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:29:01.588993 containerd[1502]: time="2025-09-10T23:29:01.588956237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}" Sep 10 23:29:01.603625 systemd[1]: Started cri-containerd-d6e14b285988ee76dc596f191fd9a00960e9169e31599a4b6fde4143d68e9ded.scope - libcontainer container d6e14b285988ee76dc596f191fd9a00960e9169e31599a4b6fde4143d68e9ded. 
Sep 10 23:29:01.612893 containerd[1502]: time="2025-09-10T23:29:01.612853257Z" level=info msg="connecting to shim 2d27ea0929a5a3d421482f7fc454da0d990cf5758af5a07a746081febcfde9b2" address="unix:///run/containerd/s/ab21138d69f1f9c14df90d7b9484001b153a07ac2c78b51702bb3417a1fbf7e9" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:29:01.632890 containerd[1502]: time="2025-09-10T23:29:01.632836216Z" level=info msg="connecting to shim 5562484d4569b52a22a3878292c8e46f29864c8c934afd34eb02880c09ea03f4" address="unix:///run/containerd/s/14a856f487a9f5f92f3bc9a57570837ef52e3843eb8acece9fb49f7762d5fe2c" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:29:01.635583 systemd[1]: Started cri-containerd-2d27ea0929a5a3d421482f7fc454da0d990cf5758af5a07a746081febcfde9b2.scope - libcontainer container 2d27ea0929a5a3d421482f7fc454da0d990cf5758af5a07a746081febcfde9b2. Sep 10 23:29:01.662617 systemd[1]: Started cri-containerd-5562484d4569b52a22a3878292c8e46f29864c8c934afd34eb02880c09ea03f4.scope - libcontainer container 5562484d4569b52a22a3878292c8e46f29864c8c934afd34eb02880c09ea03f4. 
Sep 10 23:29:01.666115 containerd[1502]: time="2025-09-10T23:29:01.666060732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:205fdd2dd5d57347f74615f3efa4c3db,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6e14b285988ee76dc596f191fd9a00960e9169e31599a4b6fde4143d68e9ded\"" Sep 10 23:29:01.667247 kubelet[2266]: E0910 23:29:01.667150 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:29:01.671467 containerd[1502]: time="2025-09-10T23:29:01.670228185Z" level=info msg="CreateContainer within sandbox \"d6e14b285988ee76dc596f191fd9a00960e9169e31599a4b6fde4143d68e9ded\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 10 23:29:01.675542 containerd[1502]: time="2025-09-10T23:29:01.675505813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d27ea0929a5a3d421482f7fc454da0d990cf5758af5a07a746081febcfde9b2\"" Sep 10 23:29:01.677260 kubelet[2266]: E0910 23:29:01.677238 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:29:01.679071 containerd[1502]: time="2025-09-10T23:29:01.679040148Z" level=info msg="CreateContainer within sandbox \"2d27ea0929a5a3d421482f7fc454da0d990cf5758af5a07a746081febcfde9b2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 10 23:29:01.682630 containerd[1502]: time="2025-09-10T23:29:01.682589585Z" level=info msg="Container 55885cd41b2e133cad38e777d76800d685ce4b95bdac50eec00a1406d40be11a: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:29:01.685275 containerd[1502]: time="2025-09-10T23:29:01.685246319Z" level=info msg="Container 
443ff525f77f145ee404c181be64593e22e62fbba14e3dbb4195e50d4be88e42: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:29:01.693472 containerd[1502]: time="2025-09-10T23:29:01.693153672Z" level=info msg="CreateContainer within sandbox \"d6e14b285988ee76dc596f191fd9a00960e9169e31599a4b6fde4143d68e9ded\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"55885cd41b2e133cad38e777d76800d685ce4b95bdac50eec00a1406d40be11a\"" Sep 10 23:29:01.693839 containerd[1502]: time="2025-09-10T23:29:01.693813121Z" level=info msg="StartContainer for \"55885cd41b2e133cad38e777d76800d685ce4b95bdac50eec00a1406d40be11a\"" Sep 10 23:29:01.694279 containerd[1502]: time="2025-09-10T23:29:01.694230925Z" level=info msg="CreateContainer within sandbox \"2d27ea0929a5a3d421482f7fc454da0d990cf5758af5a07a746081febcfde9b2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"443ff525f77f145ee404c181be64593e22e62fbba14e3dbb4195e50d4be88e42\"" Sep 10 23:29:01.694684 containerd[1502]: time="2025-09-10T23:29:01.694651206Z" level=info msg="StartContainer for \"443ff525f77f145ee404c181be64593e22e62fbba14e3dbb4195e50d4be88e42\"" Sep 10 23:29:01.694891 containerd[1502]: time="2025-09-10T23:29:01.694844985Z" level=info msg="connecting to shim 55885cd41b2e133cad38e777d76800d685ce4b95bdac50eec00a1406d40be11a" address="unix:///run/containerd/s/0fabbe5d2624bfbdfc5844b67d1f57f0967cd258096028e589b6ebec1b19c6c6" protocol=ttrpc version=3 Sep 10 23:29:01.695649 containerd[1502]: time="2025-09-10T23:29:01.695609834Z" level=info msg="connecting to shim 443ff525f77f145ee404c181be64593e22e62fbba14e3dbb4195e50d4be88e42" address="unix:///run/containerd/s/ab21138d69f1f9c14df90d7b9484001b153a07ac2c78b51702bb3417a1fbf7e9" protocol=ttrpc version=3 Sep 10 23:29:01.703081 containerd[1502]: time="2025-09-10T23:29:01.703012162Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"5562484d4569b52a22a3878292c8e46f29864c8c934afd34eb02880c09ea03f4\"" Sep 10 23:29:01.703965 kubelet[2266]: E0910 23:29:01.703934 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:29:01.706051 containerd[1502]: time="2025-09-10T23:29:01.705806260Z" level=info msg="CreateContainer within sandbox \"5562484d4569b52a22a3878292c8e46f29864c8c934afd34eb02880c09ea03f4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 10 23:29:01.713003 containerd[1502]: time="2025-09-10T23:29:01.712972497Z" level=info msg="Container 3d045370ab85de6723084fcd6056d72ec28afcaa066c13edc3fb70b68a8341fc: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:29:01.718619 systemd[1]: Started cri-containerd-55885cd41b2e133cad38e777d76800d685ce4b95bdac50eec00a1406d40be11a.scope - libcontainer container 55885cd41b2e133cad38e777d76800d685ce4b95bdac50eec00a1406d40be11a. Sep 10 23:29:01.722226 kubelet[2266]: E0910 23:29:01.722191 2266 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="800ms" Sep 10 23:29:01.722983 systemd[1]: Started cri-containerd-443ff525f77f145ee404c181be64593e22e62fbba14e3dbb4195e50d4be88e42.scope - libcontainer container 443ff525f77f145ee404c181be64593e22e62fbba14e3dbb4195e50d4be88e42. 
Sep 10 23:29:01.723757 containerd[1502]: time="2025-09-10T23:29:01.723693006Z" level=info msg="CreateContainer within sandbox \"5562484d4569b52a22a3878292c8e46f29864c8c934afd34eb02880c09ea03f4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3d045370ab85de6723084fcd6056d72ec28afcaa066c13edc3fb70b68a8341fc\"" Sep 10 23:29:01.724415 containerd[1502]: time="2025-09-10T23:29:01.724382541Z" level=info msg="StartContainer for \"3d045370ab85de6723084fcd6056d72ec28afcaa066c13edc3fb70b68a8341fc\"" Sep 10 23:29:01.726339 containerd[1502]: time="2025-09-10T23:29:01.726312862Z" level=info msg="connecting to shim 3d045370ab85de6723084fcd6056d72ec28afcaa066c13edc3fb70b68a8341fc" address="unix:///run/containerd/s/14a856f487a9f5f92f3bc9a57570837ef52e3843eb8acece9fb49f7762d5fe2c" protocol=ttrpc version=3 Sep 10 23:29:01.745561 systemd[1]: Started cri-containerd-3d045370ab85de6723084fcd6056d72ec28afcaa066c13edc3fb70b68a8341fc.scope - libcontainer container 3d045370ab85de6723084fcd6056d72ec28afcaa066c13edc3fb70b68a8341fc. 
Sep 10 23:29:01.774477 containerd[1502]: time="2025-09-10T23:29:01.774279146Z" level=info msg="StartContainer for \"55885cd41b2e133cad38e777d76800d685ce4b95bdac50eec00a1406d40be11a\" returns successfully" Sep 10 23:29:01.779340 containerd[1502]: time="2025-09-10T23:29:01.779239177Z" level=info msg="StartContainer for \"443ff525f77f145ee404c181be64593e22e62fbba14e3dbb4195e50d4be88e42\" returns successfully" Sep 10 23:29:01.800560 containerd[1502]: time="2025-09-10T23:29:01.800515222Z" level=info msg="StartContainer for \"3d045370ab85de6723084fcd6056d72ec28afcaa066c13edc3fb70b68a8341fc\" returns successfully" Sep 10 23:29:01.928387 kubelet[2266]: I0910 23:29:01.928286 2266 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 23:29:02.146221 kubelet[2266]: E0910 23:29:02.145890 2266 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:29:02.146221 kubelet[2266]: E0910 23:29:02.146002 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:29:02.150784 kubelet[2266]: E0910 23:29:02.150762 2266 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:29:02.150985 kubelet[2266]: E0910 23:29:02.150969 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:29:02.153833 kubelet[2266]: E0910 23:29:02.153818 2266 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:29:02.154037 kubelet[2266]: E0910 23:29:02.154024 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:29:03.158414 kubelet[2266]: E0910 23:29:03.158373 2266 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:29:03.158750 kubelet[2266]: E0910 23:29:03.158543 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:29:03.158851 kubelet[2266]: E0910 23:29:03.158832 2266 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:29:03.158939 kubelet[2266]: E0910 23:29:03.158928 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:29:03.478218 kubelet[2266]: E0910 23:29:03.478095 2266 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 10 23:29:03.557499 kubelet[2266]: I0910 23:29:03.557295 2266 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 10 23:29:03.606970 kubelet[2266]: E0910 23:29:03.606879 2266 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18640fa18d42d179 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 23:29:01.115453817 +0000 UTC m=+0.734846311,LastTimestamp:2025-09-10 23:29:01.115453817 +0000 UTC 
m=+0.734846311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 23:29:03.620455 kubelet[2266]: I0910 23:29:03.620258 2266 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 10 23:29:03.625658 kubelet[2266]: E0910 23:29:03.625634 2266 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 10 23:29:03.625751 kubelet[2266]: I0910 23:29:03.625742 2266 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 10 23:29:03.627286 kubelet[2266]: E0910 23:29:03.627256 2266 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 10 23:29:03.627446 kubelet[2266]: I0910 23:29:03.627358 2266 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 10 23:29:03.629860 kubelet[2266]: E0910 23:29:03.629563 2266 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 10 23:29:04.112401 kubelet[2266]: I0910 23:29:04.112361 2266 apiserver.go:52] "Watching apiserver" Sep 10 23:29:04.120869 kubelet[2266]: I0910 23:29:04.120830 2266 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 10 23:29:04.771879 kubelet[2266]: I0910 23:29:04.771843 2266 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 10 23:29:04.776917 kubelet[2266]: E0910 23:29:04.776861 2266 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:29:05.160099 kubelet[2266]: E0910 23:29:05.160010 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:29:05.313718 systemd[1]: Reload requested from client PID 2540 ('systemctl') (unit session-7.scope)... Sep 10 23:29:05.313736 systemd[1]: Reloading... Sep 10 23:29:05.372483 zram_generator::config[2584]: No configuration found. Sep 10 23:29:05.542777 systemd[1]: Reloading finished in 228 ms. Sep 10 23:29:05.569981 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:29:05.587874 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 23:29:05.588076 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:29:05.588122 systemd[1]: kubelet.service: Consumed 1.120s CPU time, 127.1M memory peak. Sep 10 23:29:05.590237 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:29:05.708452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:29:05.712635 (kubelet)[2625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 23:29:05.758693 kubelet[2625]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 23:29:05.758693 kubelet[2625]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 10 23:29:05.758693 kubelet[2625]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 23:29:05.759029 kubelet[2625]: I0910 23:29:05.758752 2625 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 10 23:29:05.765486 kubelet[2625]: I0910 23:29:05.764804 2625 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 10 23:29:05.765486 kubelet[2625]: I0910 23:29:05.764833 2625 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 10 23:29:05.765486 kubelet[2625]: I0910 23:29:05.765076 2625 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 10 23:29:05.766569 kubelet[2625]: I0910 23:29:05.766548 2625 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 10 23:29:05.768843 kubelet[2625]: I0910 23:29:05.768808 2625 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 10 23:29:05.772770 kubelet[2625]: I0910 23:29:05.772738 2625 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 10 23:29:05.777276 kubelet[2625]: I0910 23:29:05.776488 2625 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 10 23:29:05.777276 kubelet[2625]: I0910 23:29:05.776665 2625 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 10 23:29:05.777276 kubelet[2625]: I0910 23:29:05.776685 2625 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 10 23:29:05.777276 kubelet[2625]: I0910 23:29:05.776831 2625 topology_manager.go:138] "Creating topology manager with none policy"
Sep 10 23:29:05.777736 kubelet[2625]: I0910 23:29:05.776839 2625 container_manager_linux.go:304] "Creating device plugin manager"
Sep 10 23:29:05.777736 kubelet[2625]: I0910 23:29:05.776875 2625 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 23:29:05.777736 kubelet[2625]: I0910 23:29:05.776988 2625 kubelet.go:446] "Attempting to sync node with API server"
Sep 10 23:29:05.777736 kubelet[2625]: I0910 23:29:05.777000 2625 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 10 23:29:05.777736 kubelet[2625]: I0910 23:29:05.777019 2625 kubelet.go:352] "Adding apiserver pod source"
Sep 10 23:29:05.777736 kubelet[2625]: I0910 23:29:05.777028 2625 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 10 23:29:05.777736 kubelet[2625]: I0910 23:29:05.777548 2625 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 10 23:29:05.778112 kubelet[2625]: I0910 23:29:05.778089 2625 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 10 23:29:05.778578 kubelet[2625]: I0910 23:29:05.778545 2625 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 10 23:29:05.778578 kubelet[2625]: I0910 23:29:05.778579 2625 server.go:1287] "Started kubelet"
Sep 10 23:29:05.778709 kubelet[2625]: I0910 23:29:05.778685 2625 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 10 23:29:05.780079 kubelet[2625]: I0910 23:29:05.780053 2625 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 10 23:29:05.780922 kubelet[2625]: I0910 23:29:05.780816 2625 server.go:479] "Adding debug handlers to kubelet server"
Sep 10 23:29:05.782743 kubelet[2625]: I0910 23:29:05.778759 2625 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 10 23:29:05.783611 kubelet[2625]: I0910 23:29:05.783589 2625 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 10 23:29:05.786779 kubelet[2625]: I0910 23:29:05.784499 2625 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 10 23:29:05.786779 kubelet[2625]: E0910 23:29:05.785020 2625 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 23:29:05.786779 kubelet[2625]: I0910 23:29:05.785058 2625 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 10 23:29:05.786779 kubelet[2625]: I0910 23:29:05.785202 2625 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 10 23:29:05.786779 kubelet[2625]: I0910 23:29:05.785312 2625 reconciler.go:26] "Reconciler: start to sync state"
Sep 10 23:29:05.789468 kubelet[2625]: I0910 23:29:05.789125 2625 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 10 23:29:05.790851 kubelet[2625]: I0910 23:29:05.790516 2625 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 10 23:29:05.790851 kubelet[2625]: I0910 23:29:05.790540 2625 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 10 23:29:05.790851 kubelet[2625]: I0910 23:29:05.790560 2625 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 10 23:29:05.790851 kubelet[2625]: I0910 23:29:05.790566 2625 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 10 23:29:05.790851 kubelet[2625]: E0910 23:29:05.790605 2625 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 10 23:29:05.803908 kubelet[2625]: I0910 23:29:05.803563 2625 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 10 23:29:05.809459 kubelet[2625]: E0910 23:29:05.808467 2625 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 10 23:29:05.809645 kubelet[2625]: I0910 23:29:05.809619 2625 factory.go:221] Registration of the containerd container factory successfully
Sep 10 23:29:05.809645 kubelet[2625]: I0910 23:29:05.809640 2625 factory.go:221] Registration of the systemd container factory successfully
Sep 10 23:29:05.837109 kubelet[2625]: I0910 23:29:05.837077 2625 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 10 23:29:05.837109 kubelet[2625]: I0910 23:29:05.837100 2625 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 10 23:29:05.837239 kubelet[2625]: I0910 23:29:05.837121 2625 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 23:29:05.837292 kubelet[2625]: I0910 23:29:05.837275 2625 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 10 23:29:05.837321 kubelet[2625]: I0910 23:29:05.837291 2625 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 10 23:29:05.837321 kubelet[2625]: I0910 23:29:05.837309 2625 policy_none.go:49] "None policy: Start"
Sep 10 23:29:05.837321 kubelet[2625]: I0910 23:29:05.837318 2625 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 10 23:29:05.837400 kubelet[2625]: I0910 23:29:05.837327 2625 state_mem.go:35] "Initializing new in-memory state store"
Sep 10 23:29:05.837485 kubelet[2625]: I0910 23:29:05.837472 2625 state_mem.go:75] "Updated machine memory state"
Sep 10 23:29:05.841070 kubelet[2625]: I0910 23:29:05.841036 2625 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 10 23:29:05.841200 kubelet[2625]: I0910 23:29:05.841184 2625 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 10 23:29:05.841226 kubelet[2625]: I0910 23:29:05.841201 2625 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 10 23:29:05.842001 kubelet[2625]: I0910 23:29:05.841907 2625 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 10 23:29:05.843014 kubelet[2625]: E0910 23:29:05.842054 2625 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 10 23:29:05.891778 kubelet[2625]: I0910 23:29:05.891737 2625 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:29:05.892025 kubelet[2625]: I0910 23:29:05.891737 2625 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 10 23:29:05.892116 kubelet[2625]: I0910 23:29:05.891762 2625 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 10 23:29:05.898727 kubelet[2625]: E0910 23:29:05.898677 2625 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:29:05.944970 kubelet[2625]: I0910 23:29:05.944931 2625 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 10 23:29:05.951723 kubelet[2625]: I0910 23:29:05.951670 2625 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 10 23:29:05.951866 kubelet[2625]: I0910 23:29:05.951854 2625 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 10 23:29:05.987115 kubelet[2625]: I0910 23:29:05.987080 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost"
Sep 10 23:29:05.987452 kubelet[2625]: I0910 23:29:05.987319 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/205fdd2dd5d57347f74615f3efa4c3db-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"205fdd2dd5d57347f74615f3efa4c3db\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 23:29:05.987452 kubelet[2625]: I0910 23:29:05.987381 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:29:05.987452 kubelet[2625]: I0910 23:29:05.987399 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:29:05.987592 kubelet[2625]: I0910 23:29:05.987415 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:29:05.987741 kubelet[2625]: I0910 23:29:05.987668 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:29:05.987741 kubelet[2625]: I0910 23:29:05.987700 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/205fdd2dd5d57347f74615f3efa4c3db-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"205fdd2dd5d57347f74615f3efa4c3db\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 23:29:05.987741 kubelet[2625]: I0910 23:29:05.987717 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/205fdd2dd5d57347f74615f3efa4c3db-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"205fdd2dd5d57347f74615f3efa4c3db\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 23:29:05.987859 kubelet[2625]: I0910 23:29:05.987731 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:29:06.199753 kubelet[2625]: E0910 23:29:06.199562 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:06.199753 kubelet[2625]: E0910 23:29:06.199636 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:06.199753 kubelet[2625]: E0910 23:29:06.199687 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:06.314672 sudo[2660]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 10 23:29:06.314932 sudo[2660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Sep 10 23:29:06.625386 sudo[2660]: pam_unix(sudo:session): session closed for user root
Sep 10 23:29:06.777457 kubelet[2625]: I0910 23:29:06.777219 2625 apiserver.go:52] "Watching apiserver"
Sep 10 23:29:06.786244 kubelet[2625]: I0910 23:29:06.786217 2625 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 10 23:29:06.822144 kubelet[2625]: I0910 23:29:06.822074 2625 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 10 23:29:06.822250 kubelet[2625]: E0910 23:29:06.822238 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:06.822442 kubelet[2625]: E0910 23:29:06.822126 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:06.834168 kubelet[2625]: E0910 23:29:06.834133 2625 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 10 23:29:06.834311 kubelet[2625]: E0910 23:29:06.834295 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:06.854615 kubelet[2625]: I0910 23:29:06.854558 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.854545475 podStartE2EDuration="1.854545475s" podCreationTimestamp="2025-09-10 23:29:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:29:06.85362877 +0000 UTC m=+1.137691158" watchObservedRunningTime="2025-09-10 23:29:06.854545475 +0000 UTC m=+1.138607863"
Sep 10 23:29:06.898820 kubelet[2625]: I0910 23:29:06.898379 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.89836237 podStartE2EDuration="2.89836237s" podCreationTimestamp="2025-09-10 23:29:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:29:06.885616373 +0000 UTC m=+1.169678841" watchObservedRunningTime="2025-09-10 23:29:06.89836237 +0000 UTC m=+1.182424758"
Sep 10 23:29:06.907926 kubelet[2625]: I0910 23:29:06.907733 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.9077161089999999 podStartE2EDuration="1.907716109s" podCreationTimestamp="2025-09-10 23:29:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:29:06.898677147 +0000 UTC m=+1.182739535" watchObservedRunningTime="2025-09-10 23:29:06.907716109 +0000 UTC m=+1.191778497"
Sep 10 23:29:07.829225 kubelet[2625]: E0910 23:29:07.829177 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:07.829716 kubelet[2625]: E0910 23:29:07.829392 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:07.830987 kubelet[2625]: E0910 23:29:07.830950 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:08.576573 sudo[1708]: pam_unix(sudo:session): session closed for user root
Sep 10 23:29:08.577635 sshd[1707]: Connection closed by 10.0.0.1 port 54402
Sep 10 23:29:08.580850 sshd-session[1704]: pam_unix(sshd:session): session closed for user core
Sep 10 23:29:08.583736 systemd[1]: sshd@6-10.0.0.56:22-10.0.0.1:54402.service: Deactivated successfully.
Sep 10 23:29:08.586068 systemd[1]: session-7.scope: Deactivated successfully.
Sep 10 23:29:08.586378 systemd[1]: session-7.scope: Consumed 6.839s CPU time, 259.1M memory peak.
Sep 10 23:29:08.589106 systemd-logind[1485]: Session 7 logged out. Waiting for processes to exit.
Sep 10 23:29:08.589879 systemd-logind[1485]: Removed session 7.
Sep 10 23:29:09.078209 kubelet[2625]: E0910 23:29:09.076656 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:12.943467 kubelet[2625]: I0910 23:29:12.943414 2625 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 10 23:29:12.944158 containerd[1502]: time="2025-09-10T23:29:12.944056566Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 10 23:29:12.944579 kubelet[2625]: I0910 23:29:12.944220 2625 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 10 23:29:14.005340 systemd[1]: Created slice kubepods-besteffort-podb67b899b_ae72_42b8_8050_4317dad5a183.slice - libcontainer container kubepods-besteffort-podb67b899b_ae72_42b8_8050_4317dad5a183.slice.
Sep 10 23:29:14.024553 systemd[1]: Created slice kubepods-burstable-pod01cab590_19c9_419a_af1c_564072054707.slice - libcontainer container kubepods-burstable-pod01cab590_19c9_419a_af1c_564072054707.slice.
Sep 10 23:29:14.041723 kubelet[2625]: I0910 23:29:14.041673 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01cab590-19c9-419a-af1c-564072054707-cilium-config-path\") pod \"cilium-f8zjs\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " pod="kube-system/cilium-f8zjs"
Sep 10 23:29:14.041723 kubelet[2625]: I0910 23:29:14.041720 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-cni-path\") pod \"cilium-f8zjs\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " pod="kube-system/cilium-f8zjs"
Sep 10 23:29:14.042066 kubelet[2625]: I0910 23:29:14.041740 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b67b899b-ae72-42b8-8050-4317dad5a183-lib-modules\") pod \"kube-proxy-fth4m\" (UID: \"b67b899b-ae72-42b8-8050-4317dad5a183\") " pod="kube-system/kube-proxy-fth4m"
Sep 10 23:29:14.042066 kubelet[2625]: I0910 23:29:14.041755 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-cilium-run\") pod \"cilium-f8zjs\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " pod="kube-system/cilium-f8zjs"
Sep 10 23:29:14.042066 kubelet[2625]: I0910 23:29:14.041768 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-bpf-maps\") pod \"cilium-f8zjs\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " pod="kube-system/cilium-f8zjs"
Sep 10 23:29:14.042066 kubelet[2625]: I0910 23:29:14.041785 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs469\" (UniqueName: \"kubernetes.io/projected/01cab590-19c9-419a-af1c-564072054707-kube-api-access-bs469\") pod \"cilium-f8zjs\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " pod="kube-system/cilium-f8zjs"
Sep 10 23:29:14.042066 kubelet[2625]: I0910 23:29:14.041803 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b67b899b-ae72-42b8-8050-4317dad5a183-kube-proxy\") pod \"kube-proxy-fth4m\" (UID: \"b67b899b-ae72-42b8-8050-4317dad5a183\") " pod="kube-system/kube-proxy-fth4m"
Sep 10 23:29:14.042066 kubelet[2625]: I0910 23:29:14.041817 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-cilium-cgroup\") pod \"cilium-f8zjs\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " pod="kube-system/cilium-f8zjs"
Sep 10 23:29:14.042201 kubelet[2625]: I0910 23:29:14.041831 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-etc-cni-netd\") pod \"cilium-f8zjs\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " pod="kube-system/cilium-f8zjs"
Sep 10 23:29:14.042201 kubelet[2625]: I0910 23:29:14.041846 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-lib-modules\") pod \"cilium-f8zjs\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " pod="kube-system/cilium-f8zjs"
Sep 10 23:29:14.042201 kubelet[2625]: I0910 23:29:14.041860 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-host-proc-sys-net\") pod \"cilium-f8zjs\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " pod="kube-system/cilium-f8zjs"
Sep 10 23:29:14.042201 kubelet[2625]: I0910 23:29:14.041877 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79qmj\" (UniqueName: \"kubernetes.io/projected/b67b899b-ae72-42b8-8050-4317dad5a183-kube-api-access-79qmj\") pod \"kube-proxy-fth4m\" (UID: \"b67b899b-ae72-42b8-8050-4317dad5a183\") " pod="kube-system/kube-proxy-fth4m"
Sep 10 23:29:14.042201 kubelet[2625]: I0910 23:29:14.041892 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01cab590-19c9-419a-af1c-564072054707-clustermesh-secrets\") pod \"cilium-f8zjs\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " pod="kube-system/cilium-f8zjs"
Sep 10 23:29:14.042305 kubelet[2625]: I0910 23:29:14.041907 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-host-proc-sys-kernel\") pod \"cilium-f8zjs\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " pod="kube-system/cilium-f8zjs"
Sep 10 23:29:14.042305 kubelet[2625]: I0910 23:29:14.041922 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01cab590-19c9-419a-af1c-564072054707-hubble-tls\") pod \"cilium-f8zjs\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " pod="kube-system/cilium-f8zjs"
Sep 10 23:29:14.042305 kubelet[2625]: I0910 23:29:14.041938 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b67b899b-ae72-42b8-8050-4317dad5a183-xtables-lock\") pod \"kube-proxy-fth4m\" (UID: \"b67b899b-ae72-42b8-8050-4317dad5a183\") " pod="kube-system/kube-proxy-fth4m"
Sep 10 23:29:14.042305 kubelet[2625]: I0910 23:29:14.041962 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-hostproc\") pod \"cilium-f8zjs\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " pod="kube-system/cilium-f8zjs"
Sep 10 23:29:14.042305 kubelet[2625]: I0910 23:29:14.041980 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-xtables-lock\") pod \"cilium-f8zjs\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " pod="kube-system/cilium-f8zjs"
Sep 10 23:29:14.059244 systemd[1]: Created slice kubepods-besteffort-pod349b8366_ee22_4ec2_9ae9_12cc4ab43318.slice - libcontainer container kubepods-besteffort-pod349b8366_ee22_4ec2_9ae9_12cc4ab43318.slice.
Sep 10 23:29:14.143027 kubelet[2625]: I0910 23:29:14.142970 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfz4v\" (UniqueName: \"kubernetes.io/projected/349b8366-ee22-4ec2-9ae9-12cc4ab43318-kube-api-access-tfz4v\") pod \"cilium-operator-6c4d7847fc-b6hj8\" (UID: \"349b8366-ee22-4ec2-9ae9-12cc4ab43318\") " pod="kube-system/cilium-operator-6c4d7847fc-b6hj8"
Sep 10 23:29:14.143151 kubelet[2625]: I0910 23:29:14.143046 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/349b8366-ee22-4ec2-9ae9-12cc4ab43318-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-b6hj8\" (UID: \"349b8366-ee22-4ec2-9ae9-12cc4ab43318\") " pod="kube-system/cilium-operator-6c4d7847fc-b6hj8"
Sep 10 23:29:14.320642 kubelet[2625]: E0910 23:29:14.320504 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:14.321472 containerd[1502]: time="2025-09-10T23:29:14.321353520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fth4m,Uid:b67b899b-ae72-42b8-8050-4317dad5a183,Namespace:kube-system,Attempt:0,}"
Sep 10 23:29:14.333076 kubelet[2625]: E0910 23:29:14.333038 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:14.333784 containerd[1502]: time="2025-09-10T23:29:14.333752726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f8zjs,Uid:01cab590-19c9-419a-af1c-564072054707,Namespace:kube-system,Attempt:0,}"
Sep 10 23:29:14.346726 containerd[1502]: time="2025-09-10T23:29:14.346611096Z" level=info msg="connecting to shim 3c61b0e6be4479ebade99387fb018bb7db71f6870db6f17843e7e3c21303febf" address="unix:///run/containerd/s/3cf12132bd13d27f3aac6dd6110f99d552c9b0125a5a2a1bc4fab95719c498b7" namespace=k8s.io protocol=ttrpc version=3
Sep 10 23:29:14.357599 containerd[1502]: time="2025-09-10T23:29:14.357558452Z" level=info msg="connecting to shim bbf1c5f21703bb37c4f32ebf8180e5ae15f44c5d3c7092b7dfcdebaf196b9ec5" address="unix:///run/containerd/s/57ce03174c82636189609f96682cbc5218e11f3f16eba1a98b6f39c949fa8301" namespace=k8s.io protocol=ttrpc version=3
Sep 10 23:29:14.366767 kubelet[2625]: E0910 23:29:14.366739 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:14.368663 containerd[1502]: time="2025-09-10T23:29:14.368622809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-b6hj8,Uid:349b8366-ee22-4ec2-9ae9-12cc4ab43318,Namespace:kube-system,Attempt:0,}"
Sep 10 23:29:14.371616 systemd[1]: Started cri-containerd-3c61b0e6be4479ebade99387fb018bb7db71f6870db6f17843e7e3c21303febf.scope - libcontainer container 3c61b0e6be4479ebade99387fb018bb7db71f6870db6f17843e7e3c21303febf.
Sep 10 23:29:14.391629 systemd[1]: Started cri-containerd-bbf1c5f21703bb37c4f32ebf8180e5ae15f44c5d3c7092b7dfcdebaf196b9ec5.scope - libcontainer container bbf1c5f21703bb37c4f32ebf8180e5ae15f44c5d3c7092b7dfcdebaf196b9ec5.
Sep 10 23:29:14.396622 containerd[1502]: time="2025-09-10T23:29:14.396564163Z" level=info msg="connecting to shim 72aa53131f21048ad02ccab6fae70f942312bb345fca17b62b2f7e89b7963d24" address="unix:///run/containerd/s/f79db9a0d114458c40da2f24a85fcfe48f949c7d76ec90896beb0fafdf09634a" namespace=k8s.io protocol=ttrpc version=3
Sep 10 23:29:14.410812 containerd[1502]: time="2025-09-10T23:29:14.410773622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fth4m,Uid:b67b899b-ae72-42b8-8050-4317dad5a183,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c61b0e6be4479ebade99387fb018bb7db71f6870db6f17843e7e3c21303febf\""
Sep 10 23:29:14.411661 kubelet[2625]: E0910 23:29:14.411637 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:14.418222 containerd[1502]: time="2025-09-10T23:29:14.417672030Z" level=info msg="CreateContainer within sandbox \"3c61b0e6be4479ebade99387fb018bb7db71f6870db6f17843e7e3c21303febf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 10 23:29:14.429584 containerd[1502]: time="2025-09-10T23:29:14.429542593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f8zjs,Uid:01cab590-19c9-419a-af1c-564072054707,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbf1c5f21703bb37c4f32ebf8180e5ae15f44c5d3c7092b7dfcdebaf196b9ec5\""
Sep 10 23:29:14.430135 kubelet[2625]: E0910 23:29:14.430095 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:14.430604 systemd[1]: Started cri-containerd-72aa53131f21048ad02ccab6fae70f942312bb345fca17b62b2f7e89b7963d24.scope - libcontainer container 72aa53131f21048ad02ccab6fae70f942312bb345fca17b62b2f7e89b7963d24.
Sep 10 23:29:14.431641 containerd[1502]: time="2025-09-10T23:29:14.430975083Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 10 23:29:14.434297 containerd[1502]: time="2025-09-10T23:29:14.434211865Z" level=info msg="Container fac4f6578bb7fe105c04d1ebf42dcde750506d35c65b889329eeb9f6c9b538c7: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:29:14.449229 containerd[1502]: time="2025-09-10T23:29:14.449180609Z" level=info msg="CreateContainer within sandbox \"3c61b0e6be4479ebade99387fb018bb7db71f6870db6f17843e7e3c21303febf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fac4f6578bb7fe105c04d1ebf42dcde750506d35c65b889329eeb9f6c9b538c7\""
Sep 10 23:29:14.450027 containerd[1502]: time="2025-09-10T23:29:14.449966775Z" level=info msg="StartContainer for \"fac4f6578bb7fe105c04d1ebf42dcde750506d35c65b889329eeb9f6c9b538c7\""
Sep 10 23:29:14.452098 containerd[1502]: time="2025-09-10T23:29:14.452053429Z" level=info msg="connecting to shim fac4f6578bb7fe105c04d1ebf42dcde750506d35c65b889329eeb9f6c9b538c7" address="unix:///run/containerd/s/3cf12132bd13d27f3aac6dd6110f99d552c9b0125a5a2a1bc4fab95719c498b7" protocol=ttrpc version=3
Sep 10 23:29:14.472662 systemd[1]: Started cri-containerd-fac4f6578bb7fe105c04d1ebf42dcde750506d35c65b889329eeb9f6c9b538c7.scope - libcontainer container fac4f6578bb7fe105c04d1ebf42dcde750506d35c65b889329eeb9f6c9b538c7.
Sep 10 23:29:14.478212 containerd[1502]: time="2025-09-10T23:29:14.478164931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-b6hj8,Uid:349b8366-ee22-4ec2-9ae9-12cc4ab43318,Namespace:kube-system,Attempt:0,} returns sandbox id \"72aa53131f21048ad02ccab6fae70f942312bb345fca17b62b2f7e89b7963d24\"" Sep 10 23:29:14.478902 kubelet[2625]: E0910 23:29:14.478881 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:29:14.509519 containerd[1502]: time="2025-09-10T23:29:14.509477389Z" level=info msg="StartContainer for \"fac4f6578bb7fe105c04d1ebf42dcde750506d35c65b889329eeb9f6c9b538c7\" returns successfully" Sep 10 23:29:14.844587 kubelet[2625]: E0910 23:29:14.844558 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:29:14.857906 kubelet[2625]: I0910 23:29:14.857824 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fth4m" podStartSLOduration=1.8578047720000002 podStartE2EDuration="1.857804772s" podCreationTimestamp="2025-09-10 23:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:29:14.857651291 +0000 UTC m=+9.141713639" watchObservedRunningTime="2025-09-10 23:29:14.857804772 +0000 UTC m=+9.141867160" Sep 10 23:29:15.332192 kubelet[2625]: E0910 23:29:15.331756 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:29:15.849597 kubelet[2625]: E0910 23:29:15.849556 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:29:16.852482 kubelet[2625]: E0910 23:29:16.852451 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:29:17.090968 kubelet[2625]: E0910 23:29:17.090933 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:29:19.087691 kubelet[2625]: E0910 23:29:19.087649 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:29:23.558964 update_engine[1488]: I20250910 23:29:23.558569 1488 update_attempter.cc:509] Updating boot flags... Sep 10 23:29:35.304350 systemd[1]: Started sshd@7-10.0.0.56:22-10.0.0.1:34638.service - OpenSSH per-connection server daemon (10.0.0.1:34638). Sep 10 23:29:35.346616 sshd[3025]: Accepted publickey for core from 10.0.0.1 port 34638 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk Sep 10 23:29:35.348491 sshd-session[3025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:29:35.354800 systemd-logind[1485]: New session 8 of user core. Sep 10 23:29:35.367583 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 10 23:29:35.505676 sshd[3028]: Connection closed by 10.0.0.1 port 34638 Sep 10 23:29:35.505986 sshd-session[3025]: pam_unix(sshd:session): session closed for user core Sep 10 23:29:35.509511 systemd[1]: sshd@7-10.0.0.56:22-10.0.0.1:34638.service: Deactivated successfully. Sep 10 23:29:35.511069 systemd[1]: session-8.scope: Deactivated successfully. Sep 10 23:29:35.511796 systemd-logind[1485]: Session 8 logged out. Waiting for processes to exit. Sep 10 23:29:35.512900 systemd-logind[1485]: Removed session 8. 
Sep 10 23:29:36.281491 kernel: hrtimer: interrupt took 4368610 ns Sep 10 23:29:37.552853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3094208743.mount: Deactivated successfully. Sep 10 23:29:38.956042 containerd[1502]: time="2025-09-10T23:29:38.955992103Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:29:38.956740 containerd[1502]: time="2025-09-10T23:29:38.956697345Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 10 23:29:38.958108 containerd[1502]: time="2025-09-10T23:29:38.958065588Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:29:38.959393 containerd[1502]: time="2025-09-10T23:29:38.959354031Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 24.528346428s" Sep 10 23:29:38.959449 containerd[1502]: time="2025-09-10T23:29:38.959391351Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 10 23:29:38.965682 containerd[1502]: time="2025-09-10T23:29:38.965491364Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 10 23:29:38.967835 
containerd[1502]: time="2025-09-10T23:29:38.967760209Z" level=info msg="CreateContainer within sandbox \"bbf1c5f21703bb37c4f32ebf8180e5ae15f44c5d3c7092b7dfcdebaf196b9ec5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 10 23:29:38.985562 containerd[1502]: time="2025-09-10T23:29:38.982529962Z" level=info msg="Container 14c93f842664d437ac4bf51fd37c584f976365e1ce0618addb02d42cda37a4c5: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:29:38.990304 containerd[1502]: time="2025-09-10T23:29:38.990265939Z" level=info msg="CreateContainer within sandbox \"bbf1c5f21703bb37c4f32ebf8180e5ae15f44c5d3c7092b7dfcdebaf196b9ec5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"14c93f842664d437ac4bf51fd37c584f976365e1ce0618addb02d42cda37a4c5\"" Sep 10 23:29:38.991446 containerd[1502]: time="2025-09-10T23:29:38.991294501Z" level=info msg="StartContainer for \"14c93f842664d437ac4bf51fd37c584f976365e1ce0618addb02d42cda37a4c5\"" Sep 10 23:29:38.996709 containerd[1502]: time="2025-09-10T23:29:38.996675473Z" level=info msg="connecting to shim 14c93f842664d437ac4bf51fd37c584f976365e1ce0618addb02d42cda37a4c5" address="unix:///run/containerd/s/57ce03174c82636189609f96682cbc5218e11f3f16eba1a98b6f39c949fa8301" protocol=ttrpc version=3 Sep 10 23:29:39.048656 systemd[1]: Started cri-containerd-14c93f842664d437ac4bf51fd37c584f976365e1ce0618addb02d42cda37a4c5.scope - libcontainer container 14c93f842664d437ac4bf51fd37c584f976365e1ce0618addb02d42cda37a4c5. Sep 10 23:29:39.101031 containerd[1502]: time="2025-09-10T23:29:39.100990654Z" level=info msg="StartContainer for \"14c93f842664d437ac4bf51fd37c584f976365e1ce0618addb02d42cda37a4c5\" returns successfully" Sep 10 23:29:39.114207 systemd[1]: cri-containerd-14c93f842664d437ac4bf51fd37c584f976365e1ce0618addb02d42cda37a4c5.scope: Deactivated successfully. 
Sep 10 23:29:39.142450 containerd[1502]: time="2025-09-10T23:29:39.142245702Z" level=info msg="received exit event container_id:\"14c93f842664d437ac4bf51fd37c584f976365e1ce0618addb02d42cda37a4c5\" id:\"14c93f842664d437ac4bf51fd37c584f976365e1ce0618addb02d42cda37a4c5\" pid:3083 exited_at:{seconds:1757546979 nanos:137634852}" Sep 10 23:29:39.142450 containerd[1502]: time="2025-09-10T23:29:39.142322102Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14c93f842664d437ac4bf51fd37c584f976365e1ce0618addb02d42cda37a4c5\" id:\"14c93f842664d437ac4bf51fd37c584f976365e1ce0618addb02d42cda37a4c5\" pid:3083 exited_at:{seconds:1757546979 nanos:137634852}" Sep 10 23:29:39.182868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14c93f842664d437ac4bf51fd37c584f976365e1ce0618addb02d42cda37a4c5-rootfs.mount: Deactivated successfully. Sep 10 23:29:39.897915 kubelet[2625]: E0910 23:29:39.897863 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:29:39.903493 containerd[1502]: time="2025-09-10T23:29:39.903396155Z" level=info msg="CreateContainer within sandbox \"bbf1c5f21703bb37c4f32ebf8180e5ae15f44c5d3c7092b7dfcdebaf196b9ec5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 10 23:29:39.998302 containerd[1502]: time="2025-09-10T23:29:39.998256676Z" level=info msg="Container 13b7f20f5762927990d79da13622cdbd4443cc962b46806ab10b75e3c4da0a67: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:29:40.009787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3741099838.mount: Deactivated successfully. 
Sep 10 23:29:40.029687 containerd[1502]: time="2025-09-10T23:29:40.029367780Z" level=info msg="CreateContainer within sandbox \"bbf1c5f21703bb37c4f32ebf8180e5ae15f44c5d3c7092b7dfcdebaf196b9ec5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"13b7f20f5762927990d79da13622cdbd4443cc962b46806ab10b75e3c4da0a67\"" Sep 10 23:29:40.030356 containerd[1502]: time="2025-09-10T23:29:40.030190822Z" level=info msg="StartContainer for \"13b7f20f5762927990d79da13622cdbd4443cc962b46806ab10b75e3c4da0a67\"" Sep 10 23:29:40.034262 containerd[1502]: time="2025-09-10T23:29:40.034043989Z" level=info msg="connecting to shim 13b7f20f5762927990d79da13622cdbd4443cc962b46806ab10b75e3c4da0a67" address="unix:///run/containerd/s/57ce03174c82636189609f96682cbc5218e11f3f16eba1a98b6f39c949fa8301" protocol=ttrpc version=3 Sep 10 23:29:40.075607 systemd[1]: Started cri-containerd-13b7f20f5762927990d79da13622cdbd4443cc962b46806ab10b75e3c4da0a67.scope - libcontainer container 13b7f20f5762927990d79da13622cdbd4443cc962b46806ab10b75e3c4da0a67. Sep 10 23:29:40.103324 containerd[1502]: time="2025-09-10T23:29:40.103258371Z" level=info msg="StartContainer for \"13b7f20f5762927990d79da13622cdbd4443cc962b46806ab10b75e3c4da0a67\" returns successfully" Sep 10 23:29:40.114818 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 10 23:29:40.115740 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 10 23:29:40.115961 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 10 23:29:40.118536 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 10 23:29:40.120535 containerd[1502]: time="2025-09-10T23:29:40.120491806Z" level=info msg="received exit event container_id:\"13b7f20f5762927990d79da13622cdbd4443cc962b46806ab10b75e3c4da0a67\" id:\"13b7f20f5762927990d79da13622cdbd4443cc962b46806ab10b75e3c4da0a67\" pid:3137 exited_at:{seconds:1757546980 nanos:120294286}" Sep 10 23:29:40.120528 systemd[1]: cri-containerd-13b7f20f5762927990d79da13622cdbd4443cc962b46806ab10b75e3c4da0a67.scope: Deactivated successfully. Sep 10 23:29:40.121075 containerd[1502]: time="2025-09-10T23:29:40.121015087Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13b7f20f5762927990d79da13622cdbd4443cc962b46806ab10b75e3c4da0a67\" id:\"13b7f20f5762927990d79da13622cdbd4443cc962b46806ab10b75e3c4da0a67\" pid:3137 exited_at:{seconds:1757546980 nanos:120294286}" Sep 10 23:29:40.158291 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 10 23:29:40.414598 containerd[1502]: time="2025-09-10T23:29:40.414492487Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:29:40.415078 containerd[1502]: time="2025-09-10T23:29:40.415030688Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 10 23:29:40.416311 containerd[1502]: time="2025-09-10T23:29:40.416268651Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:29:40.418127 containerd[1502]: time="2025-09-10T23:29:40.418087095Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id 
\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.452531051s" Sep 10 23:29:40.418127 containerd[1502]: time="2025-09-10T23:29:40.418123095Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 10 23:29:40.420790 containerd[1502]: time="2025-09-10T23:29:40.420201779Z" level=info msg="CreateContainer within sandbox \"72aa53131f21048ad02ccab6fae70f942312bb345fca17b62b2f7e89b7963d24\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 10 23:29:40.426135 containerd[1502]: time="2025-09-10T23:29:40.426091591Z" level=info msg="Container f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:29:40.432216 containerd[1502]: time="2025-09-10T23:29:40.432151523Z" level=info msg="CreateContainer within sandbox \"72aa53131f21048ad02ccab6fae70f942312bb345fca17b62b2f7e89b7963d24\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a\"" Sep 10 23:29:40.432987 containerd[1502]: time="2025-09-10T23:29:40.432696405Z" level=info msg="StartContainer for \"f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a\"" Sep 10 23:29:40.434699 containerd[1502]: time="2025-09-10T23:29:40.434291048Z" level=info msg="connecting to shim f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a" address="unix:///run/containerd/s/f79db9a0d114458c40da2f24a85fcfe48f949c7d76ec90896beb0fafdf09634a" protocol=ttrpc version=3 Sep 10 23:29:40.467673 systemd[1]: Started 
cri-containerd-f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a.scope - libcontainer container f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a. Sep 10 23:29:40.491497 containerd[1502]: time="2025-09-10T23:29:40.491452565Z" level=info msg="StartContainer for \"f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a\" returns successfully" Sep 10 23:29:40.521785 systemd[1]: Started sshd@8-10.0.0.56:22-10.0.0.1:54846.service - OpenSSH per-connection server daemon (10.0.0.1:54846). Sep 10 23:29:40.592459 sshd[3214]: Accepted publickey for core from 10.0.0.1 port 54846 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk Sep 10 23:29:40.593908 sshd-session[3214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:29:40.604573 systemd-logind[1485]: New session 9 of user core. Sep 10 23:29:40.609621 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 10 23:29:40.735550 sshd[3219]: Connection closed by 10.0.0.1 port 54846 Sep 10 23:29:40.735416 sshd-session[3214]: pam_unix(sshd:session): session closed for user core Sep 10 23:29:40.739369 systemd[1]: sshd@8-10.0.0.56:22-10.0.0.1:54846.service: Deactivated successfully. Sep 10 23:29:40.741332 systemd[1]: session-9.scope: Deactivated successfully. Sep 10 23:29:40.742630 systemd-logind[1485]: Session 9 logged out. Waiting for processes to exit. Sep 10 23:29:40.743343 systemd-logind[1485]: Removed session 9. 
Sep 10 23:29:40.901691 kubelet[2625]: E0910 23:29:40.901100 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:29:40.908262 kubelet[2625]: E0910 23:29:40.907935 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:29:40.909829 containerd[1502]: time="2025-09-10T23:29:40.909680540Z" level=info msg="CreateContainer within sandbox \"bbf1c5f21703bb37c4f32ebf8180e5ae15f44c5d3c7092b7dfcdebaf196b9ec5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 10 23:29:40.918897 kubelet[2625]: I0910 23:29:40.918759 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-b6hj8" podStartSLOduration=0.979166601 podStartE2EDuration="26.918743158s" podCreationTimestamp="2025-09-10 23:29:14 +0000 UTC" firstStartedPulling="2025-09-10 23:29:14.479355619 +0000 UTC m=+8.763417967" lastFinishedPulling="2025-09-10 23:29:40.418932136 +0000 UTC m=+34.702994524" observedRunningTime="2025-09-10 23:29:40.918469118 +0000 UTC m=+35.202531506" watchObservedRunningTime="2025-09-10 23:29:40.918743158 +0000 UTC m=+35.202805546" Sep 10 23:29:40.952229 containerd[1502]: time="2025-09-10T23:29:40.951787986Z" level=info msg="Container e0b05a27a37d001ddfa9050a2900d60a1e4a8490f10607c77dbea965697c313b: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:29:40.960676 containerd[1502]: time="2025-09-10T23:29:40.960616164Z" level=info msg="CreateContainer within sandbox \"bbf1c5f21703bb37c4f32ebf8180e5ae15f44c5d3c7092b7dfcdebaf196b9ec5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e0b05a27a37d001ddfa9050a2900d60a1e4a8490f10607c77dbea965697c313b\"" Sep 10 23:29:40.961254 containerd[1502]: time="2025-09-10T23:29:40.961222845Z" level=info 
msg="StartContainer for \"e0b05a27a37d001ddfa9050a2900d60a1e4a8490f10607c77dbea965697c313b\"" Sep 10 23:29:40.962873 containerd[1502]: time="2025-09-10T23:29:40.962839768Z" level=info msg="connecting to shim e0b05a27a37d001ddfa9050a2900d60a1e4a8490f10607c77dbea965697c313b" address="unix:///run/containerd/s/57ce03174c82636189609f96682cbc5218e11f3f16eba1a98b6f39c949fa8301" protocol=ttrpc version=3 Sep 10 23:29:40.982632 systemd[1]: Started cri-containerd-e0b05a27a37d001ddfa9050a2900d60a1e4a8490f10607c77dbea965697c313b.scope - libcontainer container e0b05a27a37d001ddfa9050a2900d60a1e4a8490f10607c77dbea965697c313b. Sep 10 23:29:40.996646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13b7f20f5762927990d79da13622cdbd4443cc962b46806ab10b75e3c4da0a67-rootfs.mount: Deactivated successfully. Sep 10 23:29:41.081614 systemd[1]: cri-containerd-e0b05a27a37d001ddfa9050a2900d60a1e4a8490f10607c77dbea965697c313b.scope: Deactivated successfully. Sep 10 23:29:41.084047 containerd[1502]: time="2025-09-10T23:29:41.082604848Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0b05a27a37d001ddfa9050a2900d60a1e4a8490f10607c77dbea965697c313b\" id:\"e0b05a27a37d001ddfa9050a2900d60a1e4a8490f10607c77dbea965697c313b\" pid:3246 exited_at:{seconds:1757546981 nanos:82139447}" Sep 10 23:29:41.110854 containerd[1502]: time="2025-09-10T23:29:41.110796543Z" level=info msg="received exit event container_id:\"e0b05a27a37d001ddfa9050a2900d60a1e4a8490f10607c77dbea965697c313b\" id:\"e0b05a27a37d001ddfa9050a2900d60a1e4a8490f10607c77dbea965697c313b\" pid:3246 exited_at:{seconds:1757546981 nanos:82139447}" Sep 10 23:29:41.118791 containerd[1502]: time="2025-09-10T23:29:41.118746719Z" level=info msg="StartContainer for \"e0b05a27a37d001ddfa9050a2900d60a1e4a8490f10607c77dbea965697c313b\" returns successfully" Sep 10 23:29:41.131396 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0b05a27a37d001ddfa9050a2900d60a1e4a8490f10607c77dbea965697c313b-rootfs.mount: 
Deactivated successfully. Sep 10 23:29:41.913249 kubelet[2625]: E0910 23:29:41.913214 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:29:41.913690 kubelet[2625]: E0910 23:29:41.913289 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:29:41.915736 containerd[1502]: time="2025-09-10T23:29:41.915703252Z" level=info msg="CreateContainer within sandbox \"bbf1c5f21703bb37c4f32ebf8180e5ae15f44c5d3c7092b7dfcdebaf196b9ec5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 10 23:29:41.929592 containerd[1502]: time="2025-09-10T23:29:41.929525439Z" level=info msg="Container 9857d79edbc795d8d39ef1841beb5f7db05e3c0a8f2d1fbdd7493362e2b7b16f: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:29:41.943763 containerd[1502]: time="2025-09-10T23:29:41.943688067Z" level=info msg="CreateContainer within sandbox \"bbf1c5f21703bb37c4f32ebf8180e5ae15f44c5d3c7092b7dfcdebaf196b9ec5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9857d79edbc795d8d39ef1841beb5f7db05e3c0a8f2d1fbdd7493362e2b7b16f\"" Sep 10 23:29:41.946421 containerd[1502]: time="2025-09-10T23:29:41.946383993Z" level=info msg="StartContainer for \"9857d79edbc795d8d39ef1841beb5f7db05e3c0a8f2d1fbdd7493362e2b7b16f\"" Sep 10 23:29:41.949168 containerd[1502]: time="2025-09-10T23:29:41.949131118Z" level=info msg="connecting to shim 9857d79edbc795d8d39ef1841beb5f7db05e3c0a8f2d1fbdd7493362e2b7b16f" address="unix:///run/containerd/s/57ce03174c82636189609f96682cbc5218e11f3f16eba1a98b6f39c949fa8301" protocol=ttrpc version=3 Sep 10 23:29:41.973834 systemd[1]: Started cri-containerd-9857d79edbc795d8d39ef1841beb5f7db05e3c0a8f2d1fbdd7493362e2b7b16f.scope - libcontainer container 
9857d79edbc795d8d39ef1841beb5f7db05e3c0a8f2d1fbdd7493362e2b7b16f. Sep 10 23:29:42.003054 systemd[1]: cri-containerd-9857d79edbc795d8d39ef1841beb5f7db05e3c0a8f2d1fbdd7493362e2b7b16f.scope: Deactivated successfully. Sep 10 23:29:42.005209 containerd[1502]: time="2025-09-10T23:29:42.005171988Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9857d79edbc795d8d39ef1841beb5f7db05e3c0a8f2d1fbdd7493362e2b7b16f\" id:\"9857d79edbc795d8d39ef1841beb5f7db05e3c0a8f2d1fbdd7493362e2b7b16f\" pid:3285 exited_at:{seconds:1757546982 nanos:4841828}" Sep 10 23:29:42.005326 containerd[1502]: time="2025-09-10T23:29:42.005293509Z" level=info msg="received exit event container_id:\"9857d79edbc795d8d39ef1841beb5f7db05e3c0a8f2d1fbdd7493362e2b7b16f\" id:\"9857d79edbc795d8d39ef1841beb5f7db05e3c0a8f2d1fbdd7493362e2b7b16f\" pid:3285 exited_at:{seconds:1757546982 nanos:4841828}" Sep 10 23:29:42.021211 containerd[1502]: time="2025-09-10T23:29:42.021176939Z" level=info msg="StartContainer for \"9857d79edbc795d8d39ef1841beb5f7db05e3c0a8f2d1fbdd7493362e2b7b16f\" returns successfully" Sep 10 23:29:42.031053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9857d79edbc795d8d39ef1841beb5f7db05e3c0a8f2d1fbdd7493362e2b7b16f-rootfs.mount: Deactivated successfully. 
Sep 10 23:29:42.923670 kubelet[2625]: E0910 23:29:42.923640 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:29:42.930081 containerd[1502]: time="2025-09-10T23:29:42.929978153Z" level=info msg="CreateContainer within sandbox \"bbf1c5f21703bb37c4f32ebf8180e5ae15f44c5d3c7092b7dfcdebaf196b9ec5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 10 23:29:42.945266 containerd[1502]: time="2025-09-10T23:29:42.944543741Z" level=info msg="Container fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:29:42.948197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3125656663.mount: Deactivated successfully. Sep 10 23:29:42.957469 containerd[1502]: time="2025-09-10T23:29:42.957066925Z" level=info msg="CreateContainer within sandbox \"bbf1c5f21703bb37c4f32ebf8180e5ae15f44c5d3c7092b7dfcdebaf196b9ec5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa\"" Sep 10 23:29:42.958684 containerd[1502]: time="2025-09-10T23:29:42.958501287Z" level=info msg="StartContainer for \"fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa\"" Sep 10 23:29:42.960723 containerd[1502]: time="2025-09-10T23:29:42.960691131Z" level=info msg="connecting to shim fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa" address="unix:///run/containerd/s/57ce03174c82636189609f96682cbc5218e11f3f16eba1a98b6f39c949fa8301" protocol=ttrpc version=3 Sep 10 23:29:42.984601 systemd[1]: Started cri-containerd-fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa.scope - libcontainer container fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa. 
Sep 10 23:29:43.016161 containerd[1502]: time="2025-09-10T23:29:43.016115476Z" level=info msg="StartContainer for \"fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa\" returns successfully" Sep 10 23:29:43.109145 containerd[1502]: time="2025-09-10T23:29:43.109070088Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa\" id:\"ba902f99352b595a4eb8b0356573fac8a039c8d2bf38f69c2432819d469fd721\" pid:3354 exited_at:{seconds:1757546983 nanos:108750847}" Sep 10 23:29:43.201515 kubelet[2625]: I0910 23:29:43.201399 2625 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 10 23:29:43.243160 systemd[1]: Created slice kubepods-burstable-pod7f45b89d_021e_4e58_89cd_3a8af9e809a8.slice - libcontainer container kubepods-burstable-pod7f45b89d_021e_4e58_89cd_3a8af9e809a8.slice. Sep 10 23:29:43.249326 systemd[1]: Created slice kubepods-burstable-pod1e649a28_5b60_4fb4_b186_460b22984042.slice - libcontainer container kubepods-burstable-pod1e649a28_5b60_4fb4_b186_460b22984042.slice. 
Sep 10 23:29:43.352252 kubelet[2625]: I0910 23:29:43.352119 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzpx9\" (UniqueName: \"kubernetes.io/projected/7f45b89d-021e-4e58-89cd-3a8af9e809a8-kube-api-access-jzpx9\") pod \"coredns-668d6bf9bc-ktd6s\" (UID: \"7f45b89d-021e-4e58-89cd-3a8af9e809a8\") " pod="kube-system/coredns-668d6bf9bc-ktd6s"
Sep 10 23:29:43.352755 kubelet[2625]: I0910 23:29:43.352683 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e649a28-5b60-4fb4-b186-460b22984042-config-volume\") pod \"coredns-668d6bf9bc-srnrg\" (UID: \"1e649a28-5b60-4fb4-b186-460b22984042\") " pod="kube-system/coredns-668d6bf9bc-srnrg"
Sep 10 23:29:43.352755 kubelet[2625]: I0910 23:29:43.352717 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hvzt\" (UniqueName: \"kubernetes.io/projected/1e649a28-5b60-4fb4-b186-460b22984042-kube-api-access-8hvzt\") pod \"coredns-668d6bf9bc-srnrg\" (UID: \"1e649a28-5b60-4fb4-b186-460b22984042\") " pod="kube-system/coredns-668d6bf9bc-srnrg"
Sep 10 23:29:43.353105 kubelet[2625]: I0910 23:29:43.353062 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f45b89d-021e-4e58-89cd-3a8af9e809a8-config-volume\") pod \"coredns-668d6bf9bc-ktd6s\" (UID: \"7f45b89d-021e-4e58-89cd-3a8af9e809a8\") " pod="kube-system/coredns-668d6bf9bc-ktd6s"
Sep 10 23:29:43.547224 kubelet[2625]: E0910 23:29:43.547101 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:43.547937 containerd[1502]: time="2025-09-10T23:29:43.547902938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ktd6s,Uid:7f45b89d-021e-4e58-89cd-3a8af9e809a8,Namespace:kube-system,Attempt:0,}"
Sep 10 23:29:43.553509 kubelet[2625]: E0910 23:29:43.553473 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:43.554317 containerd[1502]: time="2025-09-10T23:29:43.554288630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-srnrg,Uid:1e649a28-5b60-4fb4-b186-460b22984042,Namespace:kube-system,Attempt:0,}"
Sep 10 23:29:43.930870 kubelet[2625]: E0910 23:29:43.930605 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:43.948099 kubelet[2625]: I0910 23:29:43.948012 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-f8zjs" podStartSLOduration=6.413309193 podStartE2EDuration="30.947994197s" podCreationTimestamp="2025-09-10 23:29:13 +0000 UTC" firstStartedPulling="2025-09-10 23:29:14.43059548 +0000 UTC m=+8.714657828" lastFinishedPulling="2025-09-10 23:29:38.965280444 +0000 UTC m=+33.249342832" observedRunningTime="2025-09-10 23:29:43.947256715 +0000 UTC m=+38.231319103" watchObservedRunningTime="2025-09-10 23:29:43.947994197 +0000 UTC m=+38.232056585"
Sep 10 23:29:44.684999 systemd-networkd[1424]: cilium_host: Link UP
Sep 10 23:29:44.685487 systemd-networkd[1424]: cilium_net: Link UP
Sep 10 23:29:44.685624 systemd-networkd[1424]: cilium_host: Gained carrier
Sep 10 23:29:44.685729 systemd-networkd[1424]: cilium_net: Gained carrier
Sep 10 23:29:44.766786 systemd-networkd[1424]: cilium_vxlan: Link UP
Sep 10 23:29:44.766791 systemd-networkd[1424]: cilium_vxlan: Gained carrier
Sep 10 23:29:44.935732 kubelet[2625]: E0910 23:29:44.935626 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:45.041469 kernel: NET: Registered PF_ALG protocol family
Sep 10 23:29:45.172610 systemd-networkd[1424]: cilium_host: Gained IPv6LL
Sep 10 23:29:45.444619 systemd-networkd[1424]: cilium_net: Gained IPv6LL
Sep 10 23:29:45.616260 systemd-networkd[1424]: lxc_health: Link UP
Sep 10 23:29:45.617861 systemd-networkd[1424]: lxc_health: Gained carrier
Sep 10 23:29:45.750137 systemd[1]: Started sshd@9-10.0.0.56:22-10.0.0.1:54850.service - OpenSSH per-connection server daemon (10.0.0.1:54850).
Sep 10 23:29:45.804534 sshd[3811]: Accepted publickey for core from 10.0.0.1 port 54850 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:29:45.807576 sshd-session[3811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:29:45.815725 systemd-logind[1485]: New session 10 of user core.
Sep 10 23:29:45.826644 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 10 23:29:45.936768 kubelet[2625]: E0910 23:29:45.936729 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:45.958472 sshd[3814]: Connection closed by 10.0.0.1 port 54850
Sep 10 23:29:45.958928 sshd-session[3811]: pam_unix(sshd:session): session closed for user core
Sep 10 23:29:45.962330 systemd[1]: sshd@9-10.0.0.56:22-10.0.0.1:54850.service: Deactivated successfully.
Sep 10 23:29:45.964451 systemd[1]: session-10.scope: Deactivated successfully.
Sep 10 23:29:45.965590 systemd-logind[1485]: Session 10 logged out. Waiting for processes to exit.
Sep 10 23:29:45.967294 systemd-logind[1485]: Removed session 10.
Sep 10 23:29:46.084647 systemd-networkd[1424]: cilium_vxlan: Gained IPv6LL
Sep 10 23:29:46.120468 kernel: eth0: renamed from tmp22a44
Sep 10 23:29:46.121488 kernel: eth0: renamed from tmp8e3fe
Sep 10 23:29:46.122681 systemd-networkd[1424]: lxcef3b0eca4f00: Link UP
Sep 10 23:29:46.123206 systemd-networkd[1424]: lxca7b8cfc927a4: Link UP
Sep 10 23:29:46.126255 systemd-networkd[1424]: lxca7b8cfc927a4: Gained carrier
Sep 10 23:29:46.130418 systemd-networkd[1424]: lxcef3b0eca4f00: Gained carrier
Sep 10 23:29:46.938140 kubelet[2625]: E0910 23:29:46.938109 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:47.044649 systemd-networkd[1424]: lxc_health: Gained IPv6LL
Sep 10 23:29:47.556666 systemd-networkd[1424]: lxca7b8cfc927a4: Gained IPv6LL
Sep 10 23:29:47.876656 systemd-networkd[1424]: lxcef3b0eca4f00: Gained IPv6LL
Sep 10 23:29:47.940500 kubelet[2625]: E0910 23:29:47.940473 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:48.942078 kubelet[2625]: E0910 23:29:48.942033 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:49.668861 containerd[1502]: time="2025-09-10T23:29:49.668810196Z" level=info msg="connecting to shim 8e3fe063b3caecde0c083e14672a8507cc920db4d259ad5e14a8ecc6f28b0f02" address="unix:///run/containerd/s/213759d25bbce4c87e35c12ce50d24f87b6f8fbda02787951d6adf2e591a481d" namespace=k8s.io protocol=ttrpc version=3
Sep 10 23:29:49.670625 containerd[1502]: time="2025-09-10T23:29:49.670597638Z" level=info msg="connecting to shim 22a446d6221f4198e86050d3da135d7da1b189ef263b42e87cf0e5ee5971f14d" address="unix:///run/containerd/s/a16a497645fc8b8272e9187b8186171b715189f2aa417cac177dd686c5fb2a39" namespace=k8s.io protocol=ttrpc version=3
Sep 10 23:29:49.694587 systemd[1]: Started cri-containerd-22a446d6221f4198e86050d3da135d7da1b189ef263b42e87cf0e5ee5971f14d.scope - libcontainer container 22a446d6221f4198e86050d3da135d7da1b189ef263b42e87cf0e5ee5971f14d.
Sep 10 23:29:49.698170 systemd[1]: Started cri-containerd-8e3fe063b3caecde0c083e14672a8507cc920db4d259ad5e14a8ecc6f28b0f02.scope - libcontainer container 8e3fe063b3caecde0c083e14672a8507cc920db4d259ad5e14a8ecc6f28b0f02.
Sep 10 23:29:49.708524 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 10 23:29:49.712358 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 10 23:29:49.737098 containerd[1502]: time="2025-09-10T23:29:49.737058421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ktd6s,Uid:7f45b89d-021e-4e58-89cd-3a8af9e809a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"22a446d6221f4198e86050d3da135d7da1b189ef263b42e87cf0e5ee5971f14d\""
Sep 10 23:29:49.738792 kubelet[2625]: E0910 23:29:49.738612 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:49.741996 containerd[1502]: time="2025-09-10T23:29:49.741952189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-srnrg,Uid:1e649a28-5b60-4fb4-b186-460b22984042,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e3fe063b3caecde0c083e14672a8507cc920db4d259ad5e14a8ecc6f28b0f02\""
Sep 10 23:29:49.742912 kubelet[2625]: E0910 23:29:49.742888 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:49.743920 containerd[1502]: time="2025-09-10T23:29:49.743893472Z" level=info msg="CreateContainer within sandbox \"22a446d6221f4198e86050d3da135d7da1b189ef263b42e87cf0e5ee5971f14d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 10 23:29:49.746362 containerd[1502]: time="2025-09-10T23:29:49.746320476Z" level=info msg="CreateContainer within sandbox \"8e3fe063b3caecde0c083e14672a8507cc920db4d259ad5e14a8ecc6f28b0f02\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 10 23:29:49.756461 containerd[1502]: time="2025-09-10T23:29:49.756407771Z" level=info msg="Container ef057e80ff2464fde9d2ad0a7ce1ffec6ce9a11bf7cce37c1b7714b8bff3b186: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:29:49.762793 containerd[1502]: time="2025-09-10T23:29:49.762753661Z" level=info msg="CreateContainer within sandbox \"22a446d6221f4198e86050d3da135d7da1b189ef263b42e87cf0e5ee5971f14d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ef057e80ff2464fde9d2ad0a7ce1ffec6ce9a11bf7cce37c1b7714b8bff3b186\""
Sep 10 23:29:49.763825 containerd[1502]: time="2025-09-10T23:29:49.763753583Z" level=info msg="StartContainer for \"ef057e80ff2464fde9d2ad0a7ce1ffec6ce9a11bf7cce37c1b7714b8bff3b186\""
Sep 10 23:29:49.764926 containerd[1502]: time="2025-09-10T23:29:49.764903064Z" level=info msg="connecting to shim ef057e80ff2464fde9d2ad0a7ce1ffec6ce9a11bf7cce37c1b7714b8bff3b186" address="unix:///run/containerd/s/a16a497645fc8b8272e9187b8186171b715189f2aa417cac177dd686c5fb2a39" protocol=ttrpc version=3
Sep 10 23:29:49.766438 containerd[1502]: time="2025-09-10T23:29:49.766391427Z" level=info msg="Container bdeac17371043dbc6ce1a0d82349405f9bd2de82a9e9b9b589a2f6f97e98309d: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:29:49.772077 containerd[1502]: time="2025-09-10T23:29:49.772035675Z" level=info msg="CreateContainer within sandbox \"8e3fe063b3caecde0c083e14672a8507cc920db4d259ad5e14a8ecc6f28b0f02\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bdeac17371043dbc6ce1a0d82349405f9bd2de82a9e9b9b589a2f6f97e98309d\""
Sep 10 23:29:49.772720 containerd[1502]: time="2025-09-10T23:29:49.772692276Z" level=info msg="StartContainer for \"bdeac17371043dbc6ce1a0d82349405f9bd2de82a9e9b9b589a2f6f97e98309d\""
Sep 10 23:29:49.773494 containerd[1502]: time="2025-09-10T23:29:49.773461798Z" level=info msg="connecting to shim bdeac17371043dbc6ce1a0d82349405f9bd2de82a9e9b9b589a2f6f97e98309d" address="unix:///run/containerd/s/213759d25bbce4c87e35c12ce50d24f87b6f8fbda02787951d6adf2e591a481d" protocol=ttrpc version=3
Sep 10 23:29:49.786664 systemd[1]: Started cri-containerd-ef057e80ff2464fde9d2ad0a7ce1ffec6ce9a11bf7cce37c1b7714b8bff3b186.scope - libcontainer container ef057e80ff2464fde9d2ad0a7ce1ffec6ce9a11bf7cce37c1b7714b8bff3b186.
Sep 10 23:29:49.790501 systemd[1]: Started cri-containerd-bdeac17371043dbc6ce1a0d82349405f9bd2de82a9e9b9b589a2f6f97e98309d.scope - libcontainer container bdeac17371043dbc6ce1a0d82349405f9bd2de82a9e9b9b589a2f6f97e98309d.
Sep 10 23:29:49.818780 containerd[1502]: time="2025-09-10T23:29:49.818680788Z" level=info msg="StartContainer for \"ef057e80ff2464fde9d2ad0a7ce1ffec6ce9a11bf7cce37c1b7714b8bff3b186\" returns successfully"
Sep 10 23:29:49.825942 containerd[1502]: time="2025-09-10T23:29:49.825856119Z" level=info msg="StartContainer for \"bdeac17371043dbc6ce1a0d82349405f9bd2de82a9e9b9b589a2f6f97e98309d\" returns successfully"
Sep 10 23:29:49.945953 kubelet[2625]: E0910 23:29:49.945736 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:49.948245 kubelet[2625]: E0910 23:29:49.948222 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:49.960344 kubelet[2625]: I0910 23:29:49.960279 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ktd6s" podStartSLOduration=35.960264327 podStartE2EDuration="35.960264327s" podCreationTimestamp="2025-09-10 23:29:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:29:49.960060766 +0000 UTC m=+44.244123154" watchObservedRunningTime="2025-09-10 23:29:49.960264327 +0000 UTC m=+44.244326715"
Sep 10 23:29:50.023267 kubelet[2625]: I0910 23:29:50.023210 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-srnrg" podStartSLOduration=36.023192263 podStartE2EDuration="36.023192263s" podCreationTimestamp="2025-09-10 23:29:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:29:50.022442262 +0000 UTC m=+44.306504650" watchObservedRunningTime="2025-09-10 23:29:50.023192263 +0000 UTC m=+44.307254651"
Sep 10 23:29:50.649066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3091038962.mount: Deactivated successfully.
Sep 10 23:29:50.950462 kubelet[2625]: E0910 23:29:50.950335 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:50.950849 kubelet[2625]: E0910 23:29:50.950761 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:50.975644 systemd[1]: Started sshd@10-10.0.0.56:22-10.0.0.1:47786.service - OpenSSH per-connection server daemon (10.0.0.1:47786).
Sep 10 23:29:51.040519 sshd[4029]: Accepted publickey for core from 10.0.0.1 port 47786 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:29:51.044250 sshd-session[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:29:51.049684 systemd-logind[1485]: New session 11 of user core.
Sep 10 23:29:51.060774 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 10 23:29:51.197318 sshd[4032]: Connection closed by 10.0.0.1 port 47786
Sep 10 23:29:51.198223 sshd-session[4029]: pam_unix(sshd:session): session closed for user core
Sep 10 23:29:51.210066 systemd[1]: sshd@10-10.0.0.56:22-10.0.0.1:47786.service: Deactivated successfully.
Sep 10 23:29:51.211526 systemd[1]: session-11.scope: Deactivated successfully.
Sep 10 23:29:51.212560 systemd-logind[1485]: Session 11 logged out. Waiting for processes to exit.
Sep 10 23:29:51.216687 systemd[1]: Started sshd@11-10.0.0.56:22-10.0.0.1:47794.service - OpenSSH per-connection server daemon (10.0.0.1:47794).
Sep 10 23:29:51.217520 systemd-logind[1485]: Removed session 11.
Sep 10 23:29:51.267808 sshd[4049]: Accepted publickey for core from 10.0.0.1 port 47794 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:29:51.269057 sshd-session[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:29:51.273518 systemd-logind[1485]: New session 12 of user core.
Sep 10 23:29:51.283571 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 10 23:29:51.443290 sshd[4052]: Connection closed by 10.0.0.1 port 47794
Sep 10 23:29:51.443629 sshd-session[4049]: pam_unix(sshd:session): session closed for user core
Sep 10 23:29:51.455934 systemd[1]: sshd@11-10.0.0.56:22-10.0.0.1:47794.service: Deactivated successfully.
Sep 10 23:29:51.458122 systemd[1]: session-12.scope: Deactivated successfully.
Sep 10 23:29:51.460831 systemd-logind[1485]: Session 12 logged out. Waiting for processes to exit.
Sep 10 23:29:51.465701 systemd[1]: Started sshd@12-10.0.0.56:22-10.0.0.1:47808.service - OpenSSH per-connection server daemon (10.0.0.1:47808).
Sep 10 23:29:51.466649 systemd-logind[1485]: Removed session 12.
Sep 10 23:29:51.518024 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 47808 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:29:51.519249 sshd-session[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:29:51.523728 systemd-logind[1485]: New session 13 of user core.
Sep 10 23:29:51.529631 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 10 23:29:51.635844 sshd[4067]: Connection closed by 10.0.0.1 port 47808
Sep 10 23:29:51.636171 sshd-session[4064]: pam_unix(sshd:session): session closed for user core
Sep 10 23:29:51.639093 systemd[1]: sshd@12-10.0.0.56:22-10.0.0.1:47808.service: Deactivated successfully.
Sep 10 23:29:51.641049 systemd[1]: session-13.scope: Deactivated successfully.
Sep 10 23:29:51.643620 systemd-logind[1485]: Session 13 logged out. Waiting for processes to exit.
Sep 10 23:29:51.644647 systemd-logind[1485]: Removed session 13.
Sep 10 23:29:51.952296 kubelet[2625]: E0910 23:29:51.952241 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:51.952715 kubelet[2625]: E0910 23:29:51.952247 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:29:56.655189 systemd[1]: Started sshd@13-10.0.0.56:22-10.0.0.1:47822.service - OpenSSH per-connection server daemon (10.0.0.1:47822).
Sep 10 23:29:56.697591 sshd[4081]: Accepted publickey for core from 10.0.0.1 port 47822 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:29:56.699358 sshd-session[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:29:56.703220 systemd-logind[1485]: New session 14 of user core.
Sep 10 23:29:56.710586 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 10 23:29:56.821722 sshd[4084]: Connection closed by 10.0.0.1 port 47822
Sep 10 23:29:56.822220 sshd-session[4081]: pam_unix(sshd:session): session closed for user core
Sep 10 23:29:56.827173 systemd[1]: sshd@13-10.0.0.56:22-10.0.0.1:47822.service: Deactivated successfully.
Sep 10 23:29:56.828786 systemd[1]: session-14.scope: Deactivated successfully.
Sep 10 23:29:56.829726 systemd-logind[1485]: Session 14 logged out. Waiting for processes to exit.
Sep 10 23:29:56.831404 systemd-logind[1485]: Removed session 14.
Sep 10 23:30:01.833528 systemd[1]: Started sshd@14-10.0.0.56:22-10.0.0.1:36774.service - OpenSSH per-connection server daemon (10.0.0.1:36774).
Sep 10 23:30:01.890983 sshd[4098]: Accepted publickey for core from 10.0.0.1 port 36774 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:30:01.892128 sshd-session[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:30:01.896617 systemd-logind[1485]: New session 15 of user core.
Sep 10 23:30:01.910600 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 10 23:30:02.031351 sshd[4101]: Connection closed by 10.0.0.1 port 36774
Sep 10 23:30:02.031864 sshd-session[4098]: pam_unix(sshd:session): session closed for user core
Sep 10 23:30:02.042366 systemd[1]: sshd@14-10.0.0.56:22-10.0.0.1:36774.service: Deactivated successfully.
Sep 10 23:30:02.044864 systemd[1]: session-15.scope: Deactivated successfully.
Sep 10 23:30:02.046687 systemd-logind[1485]: Session 15 logged out. Waiting for processes to exit.
Sep 10 23:30:02.048333 systemd[1]: Started sshd@15-10.0.0.56:22-10.0.0.1:36778.service - OpenSSH per-connection server daemon (10.0.0.1:36778).
Sep 10 23:30:02.049876 systemd-logind[1485]: Removed session 15.
Sep 10 23:30:02.098837 sshd[4115]: Accepted publickey for core from 10.0.0.1 port 36778 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:30:02.100354 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:30:02.104495 systemd-logind[1485]: New session 16 of user core.
Sep 10 23:30:02.117569 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 10 23:30:02.287101 sshd[4118]: Connection closed by 10.0.0.1 port 36778
Sep 10 23:30:02.288730 sshd-session[4115]: pam_unix(sshd:session): session closed for user core
Sep 10 23:30:02.298490 systemd[1]: sshd@15-10.0.0.56:22-10.0.0.1:36778.service: Deactivated successfully.
Sep 10 23:30:02.303249 systemd[1]: session-16.scope: Deactivated successfully.
Sep 10 23:30:02.304718 systemd-logind[1485]: Session 16 logged out. Waiting for processes to exit.
Sep 10 23:30:02.308917 systemd[1]: Started sshd@16-10.0.0.56:22-10.0.0.1:36780.service - OpenSSH per-connection server daemon (10.0.0.1:36780).
Sep 10 23:30:02.309913 systemd-logind[1485]: Removed session 16.
Sep 10 23:30:02.370596 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 36780 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:30:02.371618 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:30:02.375826 systemd-logind[1485]: New session 17 of user core.
Sep 10 23:30:02.392588 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 10 23:30:03.032058 sshd[4133]: Connection closed by 10.0.0.1 port 36780
Sep 10 23:30:03.031651 sshd-session[4130]: pam_unix(sshd:session): session closed for user core
Sep 10 23:30:03.039543 systemd[1]: sshd@16-10.0.0.56:22-10.0.0.1:36780.service: Deactivated successfully.
Sep 10 23:30:03.041831 systemd[1]: session-17.scope: Deactivated successfully.
Sep 10 23:30:03.043907 systemd-logind[1485]: Session 17 logged out. Waiting for processes to exit.
Sep 10 23:30:03.049292 systemd[1]: Started sshd@17-10.0.0.56:22-10.0.0.1:36792.service - OpenSSH per-connection server daemon (10.0.0.1:36792).
Sep 10 23:30:03.053608 systemd-logind[1485]: Removed session 17.
Sep 10 23:30:03.106452 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 36792 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:30:03.107726 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:30:03.111585 systemd-logind[1485]: New session 18 of user core.
Sep 10 23:30:03.126612 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 10 23:30:03.346747 sshd[4155]: Connection closed by 10.0.0.1 port 36792
Sep 10 23:30:03.347317 sshd-session[4151]: pam_unix(sshd:session): session closed for user core
Sep 10 23:30:03.357736 systemd[1]: sshd@17-10.0.0.56:22-10.0.0.1:36792.service: Deactivated successfully.
Sep 10 23:30:03.360202 systemd[1]: session-18.scope: Deactivated successfully.
Sep 10 23:30:03.363192 systemd-logind[1485]: Session 18 logged out. Waiting for processes to exit.
Sep 10 23:30:03.367618 systemd[1]: Started sshd@18-10.0.0.56:22-10.0.0.1:36798.service - OpenSSH per-connection server daemon (10.0.0.1:36798).
Sep 10 23:30:03.368131 systemd-logind[1485]: Removed session 18.
Sep 10 23:30:03.433416 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 36798 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:30:03.434568 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:30:03.439422 systemd-logind[1485]: New session 19 of user core.
Sep 10 23:30:03.448608 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 10 23:30:03.558222 sshd[4170]: Connection closed by 10.0.0.1 port 36798
Sep 10 23:30:03.558538 sshd-session[4167]: pam_unix(sshd:session): session closed for user core
Sep 10 23:30:03.562166 systemd[1]: sshd@18-10.0.0.56:22-10.0.0.1:36798.service: Deactivated successfully.
Sep 10 23:30:03.564804 systemd[1]: session-19.scope: Deactivated successfully.
Sep 10 23:30:03.565523 systemd-logind[1485]: Session 19 logged out. Waiting for processes to exit.
Sep 10 23:30:03.566723 systemd-logind[1485]: Removed session 19.
Sep 10 23:30:08.574223 systemd[1]: Started sshd@19-10.0.0.56:22-10.0.0.1:36808.service - OpenSSH per-connection server daemon (10.0.0.1:36808).
Sep 10 23:30:08.623274 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 36808 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:30:08.624550 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:30:08.629044 systemd-logind[1485]: New session 20 of user core.
Sep 10 23:30:08.639632 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 10 23:30:08.765727 sshd[4190]: Connection closed by 10.0.0.1 port 36808
Sep 10 23:30:08.765617 sshd-session[4187]: pam_unix(sshd:session): session closed for user core
Sep 10 23:30:08.768930 systemd[1]: sshd@19-10.0.0.56:22-10.0.0.1:36808.service: Deactivated successfully.
Sep 10 23:30:08.772040 systemd[1]: session-20.scope: Deactivated successfully.
Sep 10 23:30:08.773316 systemd-logind[1485]: Session 20 logged out. Waiting for processes to exit.
Sep 10 23:30:08.774532 systemd-logind[1485]: Removed session 20.
Sep 10 23:30:13.780318 systemd[1]: Started sshd@20-10.0.0.56:22-10.0.0.1:43612.service - OpenSSH per-connection server daemon (10.0.0.1:43612).
Sep 10 23:30:13.832803 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 43612 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:30:13.833879 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:30:13.837501 systemd-logind[1485]: New session 21 of user core.
Sep 10 23:30:13.844570 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 10 23:30:13.954193 sshd[4206]: Connection closed by 10.0.0.1 port 43612
Sep 10 23:30:13.954722 sshd-session[4203]: pam_unix(sshd:session): session closed for user core
Sep 10 23:30:13.958208 systemd[1]: sshd@20-10.0.0.56:22-10.0.0.1:43612.service: Deactivated successfully.
Sep 10 23:30:13.959763 systemd[1]: session-21.scope: Deactivated successfully.
Sep 10 23:30:13.960351 systemd-logind[1485]: Session 21 logged out. Waiting for processes to exit.
Sep 10 23:30:13.961407 systemd-logind[1485]: Removed session 21.
Sep 10 23:30:18.970621 systemd[1]: Started sshd@21-10.0.0.56:22-10.0.0.1:43622.service - OpenSSH per-connection server daemon (10.0.0.1:43622).
Sep 10 23:30:19.024446 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 43622 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:30:19.025270 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:30:19.030537 systemd-logind[1485]: New session 22 of user core.
Sep 10 23:30:19.038601 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 10 23:30:19.146782 sshd[4224]: Connection closed by 10.0.0.1 port 43622
Sep 10 23:30:19.147107 sshd-session[4221]: pam_unix(sshd:session): session closed for user core
Sep 10 23:30:19.164787 systemd[1]: sshd@21-10.0.0.56:22-10.0.0.1:43622.service: Deactivated successfully.
Sep 10 23:30:19.166497 systemd[1]: session-22.scope: Deactivated successfully.
Sep 10 23:30:19.167217 systemd-logind[1485]: Session 22 logged out. Waiting for processes to exit.
Sep 10 23:30:19.169706 systemd[1]: Started sshd@22-10.0.0.56:22-10.0.0.1:43628.service - OpenSSH per-connection server daemon (10.0.0.1:43628).
Sep 10 23:30:19.170531 systemd-logind[1485]: Removed session 22.
Sep 10 23:30:19.225090 sshd[4237]: Accepted publickey for core from 10.0.0.1 port 43628 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:30:19.226543 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:30:19.230392 systemd-logind[1485]: New session 23 of user core.
Sep 10 23:30:19.243578 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 10 23:30:20.791508 kubelet[2625]: E0910 23:30:20.791472 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:30:21.437902 containerd[1502]: time="2025-09-10T23:30:21.437829119Z" level=info msg="StopContainer for \"f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a\" with timeout 30 (s)"
Sep 10 23:30:21.442381 containerd[1502]: time="2025-09-10T23:30:21.442340541Z" level=info msg="Stop container \"f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a\" with signal terminated"
Sep 10 23:30:21.455288 systemd[1]: cri-containerd-f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a.scope: Deactivated successfully.
Sep 10 23:30:21.459045 containerd[1502]: time="2025-09-10T23:30:21.458956526Z" level=info msg="received exit event container_id:\"f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a\" id:\"f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a\" pid:3193 exited_at:{seconds:1757547021 nanos:458104355}"
Sep 10 23:30:21.459751 containerd[1502]: time="2025-09-10T23:30:21.459723177Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a\" id:\"f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a\" pid:3193 exited_at:{seconds:1757547021 nanos:458104355}"
Sep 10 23:30:21.476733 containerd[1502]: time="2025-09-10T23:30:21.476693647Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa\" id:\"a6d9764c28fb38ba9e45daa484309a3fa23fa3879893513a8d7d0cdea71d0fbf\" pid:4268 exited_at:{seconds:1757547021 nanos:476463604}"
Sep 10 23:30:21.478609 containerd[1502]: time="2025-09-10T23:30:21.478555592Z" level=info msg="StopContainer for \"fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa\" with timeout 2 (s)"
Sep 10 23:30:21.479245 containerd[1502]: time="2025-09-10T23:30:21.479183121Z" level=info msg="Stop container \"fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa\" with signal terminated"
Sep 10 23:30:21.482132 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a-rootfs.mount: Deactivated successfully.
Sep 10 23:30:21.485606 systemd-networkd[1424]: lxc_health: Link DOWN
Sep 10 23:30:21.485612 systemd-networkd[1424]: lxc_health: Lost carrier
Sep 10 23:30:21.489745 containerd[1502]: time="2025-09-10T23:30:21.489697303Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 10 23:30:21.495234 containerd[1502]: time="2025-09-10T23:30:21.495198938Z" level=info msg="StopContainer for \"f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a\" returns successfully"
Sep 10 23:30:21.497872 containerd[1502]: time="2025-09-10T23:30:21.497832014Z" level=info msg="StopPodSandbox for \"72aa53131f21048ad02ccab6fae70f942312bb345fca17b62b2f7e89b7963d24\""
Sep 10 23:30:21.502880 systemd[1]: cri-containerd-fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa.scope: Deactivated successfully.
Sep 10 23:30:21.503174 systemd[1]: cri-containerd-fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa.scope: Consumed 6.189s CPU time, 122.6M memory peak, 200K read from disk, 12.9M written to disk.
Sep 10 23:30:21.504284 containerd[1502]: time="2025-09-10T23:30:21.504251821Z" level=info msg="Container to stop \"f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 23:30:21.504411 containerd[1502]: time="2025-09-10T23:30:21.504383823Z" level=info msg="received exit event container_id:\"fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa\" id:\"fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa\" pid:3323 exited_at:{seconds:1757547021 nanos:504094659}"
Sep 10 23:30:21.506689 containerd[1502]: time="2025-09-10T23:30:21.504475064Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa\" id:\"fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa\" pid:3323 exited_at:{seconds:1757547021 nanos:504094659}"
Sep 10 23:30:21.513455 systemd[1]: cri-containerd-72aa53131f21048ad02ccab6fae70f942312bb345fca17b62b2f7e89b7963d24.scope: Deactivated successfully.
Sep 10 23:30:21.514274 containerd[1502]: time="2025-09-10T23:30:21.514113195Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72aa53131f21048ad02ccab6fae70f942312bb345fca17b62b2f7e89b7963d24\" id:\"72aa53131f21048ad02ccab6fae70f942312bb345fca17b62b2f7e89b7963d24\" pid:2833 exit_status:137 exited_at:{seconds:1757547021 nanos:513770430}"
Sep 10 23:30:21.526126 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa-rootfs.mount: Deactivated successfully.
Sep 10 23:30:21.536934 containerd[1502]: time="2025-09-10T23:30:21.536812383Z" level=info msg="StopContainer for \"fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa\" returns successfully"
Sep 10 23:30:21.537538 containerd[1502]: time="2025-09-10T23:30:21.537456032Z" level=info msg="StopPodSandbox for \"bbf1c5f21703bb37c4f32ebf8180e5ae15f44c5d3c7092b7dfcdebaf196b9ec5\""
Sep 10 23:30:21.537538 containerd[1502]: time="2025-09-10T23:30:21.537532593Z" level=info msg="Container to stop \"13b7f20f5762927990d79da13622cdbd4443cc962b46806ab10b75e3c4da0a67\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 23:30:21.537624 containerd[1502]: time="2025-09-10T23:30:21.537544993Z" level=info msg="Container to stop \"e0b05a27a37d001ddfa9050a2900d60a1e4a8490f10607c77dbea965697c313b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 23:30:21.537624 containerd[1502]: time="2025-09-10T23:30:21.537553393Z" level=info msg="Container to stop \"14c93f842664d437ac4bf51fd37c584f976365e1ce0618addb02d42cda37a4c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 23:30:21.537624 containerd[1502]: time="2025-09-10T23:30:21.537564113Z" level=info msg="Container to stop \"9857d79edbc795d8d39ef1841beb5f7db05e3c0a8f2d1fbdd7493362e2b7b16f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 23:30:21.537624 containerd[1502]: time="2025-09-10T23:30:21.537571953Z" level=info msg="Container to stop \"fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 23:30:21.540092 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72aa53131f21048ad02ccab6fae70f942312bb345fca17b62b2f7e89b7963d24-rootfs.mount: Deactivated successfully.
Sep 10 23:30:21.544490 systemd[1]: cri-containerd-bbf1c5f21703bb37c4f32ebf8180e5ae15f44c5d3c7092b7dfcdebaf196b9ec5.scope: Deactivated successfully. Sep 10 23:30:21.549478 containerd[1502]: time="2025-09-10T23:30:21.548822306Z" level=info msg="shim disconnected" id=72aa53131f21048ad02ccab6fae70f942312bb345fca17b62b2f7e89b7963d24 namespace=k8s.io Sep 10 23:30:21.549478 containerd[1502]: time="2025-09-10T23:30:21.548853386Z" level=warning msg="cleaning up after shim disconnected" id=72aa53131f21048ad02ccab6fae70f942312bb345fca17b62b2f7e89b7963d24 namespace=k8s.io Sep 10 23:30:21.549478 containerd[1502]: time="2025-09-10T23:30:21.548953348Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 23:30:21.573230 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbf1c5f21703bb37c4f32ebf8180e5ae15f44c5d3c7092b7dfcdebaf196b9ec5-rootfs.mount: Deactivated successfully. Sep 10 23:30:21.577365 containerd[1502]: time="2025-09-10T23:30:21.577208011Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bbf1c5f21703bb37c4f32ebf8180e5ae15f44c5d3c7092b7dfcdebaf196b9ec5\" id:\"bbf1c5f21703bb37c4f32ebf8180e5ae15f44c5d3c7092b7dfcdebaf196b9ec5\" pid:2793 exit_status:137 exited_at:{seconds:1757547021 nanos:546618076}" Sep 10 23:30:21.577530 containerd[1502]: time="2025-09-10T23:30:21.577422414Z" level=info msg="received exit event sandbox_id:\"72aa53131f21048ad02ccab6fae70f942312bb345fca17b62b2f7e89b7963d24\" exit_status:137 exited_at:{seconds:1757547021 nanos:513770430}" Sep 10 23:30:21.577885 containerd[1502]: time="2025-09-10T23:30:21.577850220Z" level=info msg="TearDown network for sandbox \"72aa53131f21048ad02ccab6fae70f942312bb345fca17b62b2f7e89b7963d24\" successfully" Sep 10 23:30:21.577885 containerd[1502]: time="2025-09-10T23:30:21.577875500Z" level=info msg="StopPodSandbox for \"72aa53131f21048ad02ccab6fae70f942312bb345fca17b62b2f7e89b7963d24\" returns successfully" Sep 10 23:30:21.579256 containerd[1502]: 
time="2025-09-10T23:30:21.579222238Z" level=info msg="received exit event sandbox_id:\"bbf1c5f21703bb37c4f32ebf8180e5ae15f44c5d3c7092b7dfcdebaf196b9ec5\" exit_status:137 exited_at:{seconds:1757547021 nanos:546618076}" Sep 10 23:30:21.579379 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-72aa53131f21048ad02ccab6fae70f942312bb345fca17b62b2f7e89b7963d24-shm.mount: Deactivated successfully. Sep 10 23:30:21.579624 containerd[1502]: time="2025-09-10T23:30:21.579474682Z" level=info msg="TearDown network for sandbox \"bbf1c5f21703bb37c4f32ebf8180e5ae15f44c5d3c7092b7dfcdebaf196b9ec5\" successfully" Sep 10 23:30:21.579624 containerd[1502]: time="2025-09-10T23:30:21.579494522Z" level=info msg="StopPodSandbox for \"bbf1c5f21703bb37c4f32ebf8180e5ae15f44c5d3c7092b7dfcdebaf196b9ec5\" returns successfully" Sep 10 23:30:21.580151 containerd[1502]: time="2025-09-10T23:30:21.580127131Z" level=info msg="shim disconnected" id=bbf1c5f21703bb37c4f32ebf8180e5ae15f44c5d3c7092b7dfcdebaf196b9ec5 namespace=k8s.io Sep 10 23:30:21.580291 containerd[1502]: time="2025-09-10T23:30:21.580151731Z" level=warning msg="cleaning up after shim disconnected" id=bbf1c5f21703bb37c4f32ebf8180e5ae15f44c5d3c7092b7dfcdebaf196b9ec5 namespace=k8s.io Sep 10 23:30:21.580291 containerd[1502]: time="2025-09-10T23:30:21.580178051Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 23:30:21.687453 kubelet[2625]: I0910 23:30:21.687162 2625 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-xtables-lock\") pod \"01cab590-19c9-419a-af1c-564072054707\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " Sep 10 23:30:21.687453 kubelet[2625]: I0910 23:30:21.687205 2625 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-host-proc-sys-net\") pod 
\"01cab590-19c9-419a-af1c-564072054707\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " Sep 10 23:30:21.687453 kubelet[2625]: I0910 23:30:21.687223 2625 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-host-proc-sys-kernel\") pod \"01cab590-19c9-419a-af1c-564072054707\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " Sep 10 23:30:21.687453 kubelet[2625]: I0910 23:30:21.687238 2625 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-etc-cni-netd\") pod \"01cab590-19c9-419a-af1c-564072054707\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " Sep 10 23:30:21.687453 kubelet[2625]: I0910 23:30:21.687262 2625 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/349b8366-ee22-4ec2-9ae9-12cc4ab43318-cilium-config-path\") pod \"349b8366-ee22-4ec2-9ae9-12cc4ab43318\" (UID: \"349b8366-ee22-4ec2-9ae9-12cc4ab43318\") " Sep 10 23:30:21.687453 kubelet[2625]: I0910 23:30:21.687280 2625 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01cab590-19c9-419a-af1c-564072054707-hubble-tls\") pod \"01cab590-19c9-419a-af1c-564072054707\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " Sep 10 23:30:21.687726 kubelet[2625]: I0910 23:30:21.687296 2625 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfz4v\" (UniqueName: \"kubernetes.io/projected/349b8366-ee22-4ec2-9ae9-12cc4ab43318-kube-api-access-tfz4v\") pod \"349b8366-ee22-4ec2-9ae9-12cc4ab43318\" (UID: \"349b8366-ee22-4ec2-9ae9-12cc4ab43318\") " Sep 10 23:30:21.687726 kubelet[2625]: I0910 23:30:21.687313 2625 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01cab590-19c9-419a-af1c-564072054707-cilium-config-path\") pod \"01cab590-19c9-419a-af1c-564072054707\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " Sep 10 23:30:21.687726 kubelet[2625]: I0910 23:30:21.687328 2625 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs469\" (UniqueName: \"kubernetes.io/projected/01cab590-19c9-419a-af1c-564072054707-kube-api-access-bs469\") pod \"01cab590-19c9-419a-af1c-564072054707\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " Sep 10 23:30:21.687726 kubelet[2625]: I0910 23:30:21.687345 2625 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-bpf-maps\") pod \"01cab590-19c9-419a-af1c-564072054707\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " Sep 10 23:30:21.687726 kubelet[2625]: I0910 23:30:21.687384 2625 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01cab590-19c9-419a-af1c-564072054707-clustermesh-secrets\") pod \"01cab590-19c9-419a-af1c-564072054707\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " Sep 10 23:30:21.687726 kubelet[2625]: I0910 23:30:21.687399 2625 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-hostproc\") pod \"01cab590-19c9-419a-af1c-564072054707\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " Sep 10 23:30:21.687849 kubelet[2625]: I0910 23:30:21.687416 2625 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-lib-modules\") pod \"01cab590-19c9-419a-af1c-564072054707\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " Sep 10 23:30:21.688040 
kubelet[2625]: I0910 23:30:21.687892 2625 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-cilium-run\") pod \"01cab590-19c9-419a-af1c-564072054707\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " Sep 10 23:30:21.688040 kubelet[2625]: I0910 23:30:21.687919 2625 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-cni-path\") pod \"01cab590-19c9-419a-af1c-564072054707\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " Sep 10 23:30:21.688040 kubelet[2625]: I0910 23:30:21.687933 2625 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-cilium-cgroup\") pod \"01cab590-19c9-419a-af1c-564072054707\" (UID: \"01cab590-19c9-419a-af1c-564072054707\") " Sep 10 23:30:21.690840 kubelet[2625]: I0910 23:30:21.690804 2625 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "01cab590-19c9-419a-af1c-564072054707" (UID: "01cab590-19c9-419a-af1c-564072054707"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:30:21.690890 kubelet[2625]: I0910 23:30:21.690864 2625 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "01cab590-19c9-419a-af1c-564072054707" (UID: "01cab590-19c9-419a-af1c-564072054707"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:30:21.690890 kubelet[2625]: I0910 23:30:21.690884 2625 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "01cab590-19c9-419a-af1c-564072054707" (UID: "01cab590-19c9-419a-af1c-564072054707"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:30:21.690950 kubelet[2625]: I0910 23:30:21.690900 2625 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-hostproc" (OuterVolumeSpecName: "hostproc") pod "01cab590-19c9-419a-af1c-564072054707" (UID: "01cab590-19c9-419a-af1c-564072054707"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:30:21.690950 kubelet[2625]: I0910 23:30:21.690913 2625 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "01cab590-19c9-419a-af1c-564072054707" (UID: "01cab590-19c9-419a-af1c-564072054707"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:30:21.691400 kubelet[2625]: I0910 23:30:21.691269 2625 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "01cab590-19c9-419a-af1c-564072054707" (UID: "01cab590-19c9-419a-af1c-564072054707"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:30:21.691400 kubelet[2625]: I0910 23:30:21.691307 2625 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "01cab590-19c9-419a-af1c-564072054707" (UID: "01cab590-19c9-419a-af1c-564072054707"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:30:21.691400 kubelet[2625]: I0910 23:30:21.691322 2625 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "01cab590-19c9-419a-af1c-564072054707" (UID: "01cab590-19c9-419a-af1c-564072054707"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:30:21.691400 kubelet[2625]: I0910 23:30:21.691336 2625 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-cni-path" (OuterVolumeSpecName: "cni-path") pod "01cab590-19c9-419a-af1c-564072054707" (UID: "01cab590-19c9-419a-af1c-564072054707"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:30:21.691400 kubelet[2625]: I0910 23:30:21.691352 2625 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "01cab590-19c9-419a-af1c-564072054707" (UID: "01cab590-19c9-419a-af1c-564072054707"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:30:21.692965 kubelet[2625]: I0910 23:30:21.692919 2625 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/349b8366-ee22-4ec2-9ae9-12cc4ab43318-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "349b8366-ee22-4ec2-9ae9-12cc4ab43318" (UID: "349b8366-ee22-4ec2-9ae9-12cc4ab43318"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 10 23:30:21.695062 kubelet[2625]: I0910 23:30:21.695028 2625 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01cab590-19c9-419a-af1c-564072054707-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "01cab590-19c9-419a-af1c-564072054707" (UID: "01cab590-19c9-419a-af1c-564072054707"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 23:30:21.695411 kubelet[2625]: I0910 23:30:21.695367 2625 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01cab590-19c9-419a-af1c-564072054707-kube-api-access-bs469" (OuterVolumeSpecName: "kube-api-access-bs469") pod "01cab590-19c9-419a-af1c-564072054707" (UID: "01cab590-19c9-419a-af1c-564072054707"). InnerVolumeSpecName "kube-api-access-bs469". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 23:30:21.696577 kubelet[2625]: I0910 23:30:21.696514 2625 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/349b8366-ee22-4ec2-9ae9-12cc4ab43318-kube-api-access-tfz4v" (OuterVolumeSpecName: "kube-api-access-tfz4v") pod "349b8366-ee22-4ec2-9ae9-12cc4ab43318" (UID: "349b8366-ee22-4ec2-9ae9-12cc4ab43318"). InnerVolumeSpecName "kube-api-access-tfz4v". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 23:30:21.697038 kubelet[2625]: I0910 23:30:21.697010 2625 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01cab590-19c9-419a-af1c-564072054707-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "01cab590-19c9-419a-af1c-564072054707" (UID: "01cab590-19c9-419a-af1c-564072054707"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 10 23:30:21.698701 kubelet[2625]: I0910 23:30:21.698669 2625 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01cab590-19c9-419a-af1c-564072054707-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "01cab590-19c9-419a-af1c-564072054707" (UID: "01cab590-19c9-419a-af1c-564072054707"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 10 23:30:21.788399 kubelet[2625]: I0910 23:30:21.788339 2625 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 10 23:30:21.788399 kubelet[2625]: I0910 23:30:21.788378 2625 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 10 23:30:21.788399 kubelet[2625]: I0910 23:30:21.788387 2625 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 10 23:30:21.788399 kubelet[2625]: I0910 23:30:21.788396 2625 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/349b8366-ee22-4ec2-9ae9-12cc4ab43318-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 10 23:30:21.788399 kubelet[2625]: I0910 23:30:21.788405 2625 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01cab590-19c9-419a-af1c-564072054707-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 10 23:30:21.788399 kubelet[2625]: I0910 23:30:21.788412 2625 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tfz4v\" (UniqueName: \"kubernetes.io/projected/349b8366-ee22-4ec2-9ae9-12cc4ab43318-kube-api-access-tfz4v\") on node \"localhost\" DevicePath \"\"" Sep 10 23:30:21.788399 kubelet[2625]: I0910 23:30:21.788420 2625 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01cab590-19c9-419a-af1c-564072054707-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 10 23:30:21.788720 kubelet[2625]: I0910 23:30:21.788453 2625 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bs469\" (UniqueName: \"kubernetes.io/projected/01cab590-19c9-419a-af1c-564072054707-kube-api-access-bs469\") on node \"localhost\" DevicePath \"\"" Sep 10 23:30:21.788720 kubelet[2625]: I0910 23:30:21.788464 2625 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 10 23:30:21.788720 kubelet[2625]: I0910 23:30:21.788472 2625 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01cab590-19c9-419a-af1c-564072054707-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 10 23:30:21.788720 kubelet[2625]: I0910 23:30:21.788480 2625 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-hostproc\") on 
node \"localhost\" DevicePath \"\"" Sep 10 23:30:21.788720 kubelet[2625]: I0910 23:30:21.788488 2625 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 10 23:30:21.788720 kubelet[2625]: I0910 23:30:21.788495 2625 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 10 23:30:21.788720 kubelet[2625]: I0910 23:30:21.788502 2625 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 10 23:30:21.788720 kubelet[2625]: I0910 23:30:21.788509 2625 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 10 23:30:21.788869 kubelet[2625]: I0910 23:30:21.788517 2625 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01cab590-19c9-419a-af1c-564072054707-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 10 23:30:21.799286 systemd[1]: Removed slice kubepods-besteffort-pod349b8366_ee22_4ec2_9ae9_12cc4ab43318.slice - libcontainer container kubepods-besteffort-pod349b8366_ee22_4ec2_9ae9_12cc4ab43318.slice. Sep 10 23:30:21.801075 systemd[1]: Removed slice kubepods-burstable-pod01cab590_19c9_419a_af1c_564072054707.slice - libcontainer container kubepods-burstable-pod01cab590_19c9_419a_af1c_564072054707.slice. Sep 10 23:30:21.801191 systemd[1]: kubepods-burstable-pod01cab590_19c9_419a_af1c_564072054707.slice: Consumed 6.284s CPU time, 122.9M memory peak, 1M read from disk, 12.9M written to disk. 
Sep 10 23:30:22.012366 kubelet[2625]: I0910 23:30:22.010576 2625 scope.go:117] "RemoveContainer" containerID="f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a" Sep 10 23:30:22.015454 containerd[1502]: time="2025-09-10T23:30:22.015407753Z" level=info msg="RemoveContainer for \"f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a\"" Sep 10 23:30:22.022620 containerd[1502]: time="2025-09-10T23:30:22.022583208Z" level=info msg="RemoveContainer for \"f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a\" returns successfully" Sep 10 23:30:22.023444 kubelet[2625]: I0910 23:30:22.022804 2625 scope.go:117] "RemoveContainer" containerID="f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a" Sep 10 23:30:22.023527 containerd[1502]: time="2025-09-10T23:30:22.023002334Z" level=error msg="ContainerStatus for \"f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a\": not found" Sep 10 23:30:22.024200 kubelet[2625]: E0910 23:30:22.024036 2625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a\": not found" containerID="f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a" Sep 10 23:30:22.035957 kubelet[2625]: I0910 23:30:22.035831 2625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a"} err="failed to get container status \"f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a\": rpc error: code = NotFound desc = an error occurred when try to find container \"f3039cee9b6266298d16ae0e651784b4625e86d30a2182c428e1b0e3f822240a\": not found" Sep 10 23:30:22.035957 
kubelet[2625]: I0910 23:30:22.035956 2625 scope.go:117] "RemoveContainer" containerID="fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa" Sep 10 23:30:22.039350 containerd[1502]: time="2025-09-10T23:30:22.039309389Z" level=info msg="RemoveContainer for \"fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa\"" Sep 10 23:30:22.047551 containerd[1502]: time="2025-09-10T23:30:22.047495418Z" level=info msg="RemoveContainer for \"fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa\" returns successfully" Sep 10 23:30:22.047848 kubelet[2625]: I0910 23:30:22.047812 2625 scope.go:117] "RemoveContainer" containerID="9857d79edbc795d8d39ef1841beb5f7db05e3c0a8f2d1fbdd7493362e2b7b16f" Sep 10 23:30:22.050559 containerd[1502]: time="2025-09-10T23:30:22.050528498Z" level=info msg="RemoveContainer for \"9857d79edbc795d8d39ef1841beb5f7db05e3c0a8f2d1fbdd7493362e2b7b16f\"" Sep 10 23:30:22.057909 containerd[1502]: time="2025-09-10T23:30:22.057859075Z" level=info msg="RemoveContainer for \"9857d79edbc795d8d39ef1841beb5f7db05e3c0a8f2d1fbdd7493362e2b7b16f\" returns successfully" Sep 10 23:30:22.058669 kubelet[2625]: I0910 23:30:22.058581 2625 scope.go:117] "RemoveContainer" containerID="e0b05a27a37d001ddfa9050a2900d60a1e4a8490f10607c77dbea965697c313b" Sep 10 23:30:22.060703 containerd[1502]: time="2025-09-10T23:30:22.060679312Z" level=info msg="RemoveContainer for \"e0b05a27a37d001ddfa9050a2900d60a1e4a8490f10607c77dbea965697c313b\"" Sep 10 23:30:22.064044 containerd[1502]: time="2025-09-10T23:30:22.064004156Z" level=info msg="RemoveContainer for \"e0b05a27a37d001ddfa9050a2900d60a1e4a8490f10607c77dbea965697c313b\" returns successfully" Sep 10 23:30:22.064196 kubelet[2625]: I0910 23:30:22.064164 2625 scope.go:117] "RemoveContainer" containerID="13b7f20f5762927990d79da13622cdbd4443cc962b46806ab10b75e3c4da0a67" Sep 10 23:30:22.065658 containerd[1502]: time="2025-09-10T23:30:22.065634377Z" level=info msg="RemoveContainer for 
\"13b7f20f5762927990d79da13622cdbd4443cc962b46806ab10b75e3c4da0a67\"" Sep 10 23:30:22.071162 containerd[1502]: time="2025-09-10T23:30:22.071118010Z" level=info msg="RemoveContainer for \"13b7f20f5762927990d79da13622cdbd4443cc962b46806ab10b75e3c4da0a67\" returns successfully" Sep 10 23:30:22.071372 kubelet[2625]: I0910 23:30:22.071331 2625 scope.go:117] "RemoveContainer" containerID="14c93f842664d437ac4bf51fd37c584f976365e1ce0618addb02d42cda37a4c5" Sep 10 23:30:22.073033 containerd[1502]: time="2025-09-10T23:30:22.072969115Z" level=info msg="RemoveContainer for \"14c93f842664d437ac4bf51fd37c584f976365e1ce0618addb02d42cda37a4c5\"" Sep 10 23:30:22.075373 containerd[1502]: time="2025-09-10T23:30:22.075349786Z" level=info msg="RemoveContainer for \"14c93f842664d437ac4bf51fd37c584f976365e1ce0618addb02d42cda37a4c5\" returns successfully" Sep 10 23:30:22.075530 kubelet[2625]: I0910 23:30:22.075502 2625 scope.go:117] "RemoveContainer" containerID="fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa" Sep 10 23:30:22.075741 containerd[1502]: time="2025-09-10T23:30:22.075708671Z" level=error msg="ContainerStatus for \"fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa\": not found" Sep 10 23:30:22.075944 kubelet[2625]: E0910 23:30:22.075919 2625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa\": not found" containerID="fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa" Sep 10 23:30:22.075988 kubelet[2625]: I0910 23:30:22.075951 2625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa"} err="failed to get 
container status \"fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa\": rpc error: code = NotFound desc = an error occurred when try to find container \"fbf0cdeea9d81c642be3b6a6e1b92bc442a906f357b8e300133865d275c094fa\": not found" Sep 10 23:30:22.075988 kubelet[2625]: I0910 23:30:22.075971 2625 scope.go:117] "RemoveContainer" containerID="9857d79edbc795d8d39ef1841beb5f7db05e3c0a8f2d1fbdd7493362e2b7b16f" Sep 10 23:30:22.076157 containerd[1502]: time="2025-09-10T23:30:22.076125076Z" level=error msg="ContainerStatus for \"9857d79edbc795d8d39ef1841beb5f7db05e3c0a8f2d1fbdd7493362e2b7b16f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9857d79edbc795d8d39ef1841beb5f7db05e3c0a8f2d1fbdd7493362e2b7b16f\": not found" Sep 10 23:30:22.076254 kubelet[2625]: E0910 23:30:22.076234 2625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9857d79edbc795d8d39ef1841beb5f7db05e3c0a8f2d1fbdd7493362e2b7b16f\": not found" containerID="9857d79edbc795d8d39ef1841beb5f7db05e3c0a8f2d1fbdd7493362e2b7b16f" Sep 10 23:30:22.076293 kubelet[2625]: I0910 23:30:22.076260 2625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9857d79edbc795d8d39ef1841beb5f7db05e3c0a8f2d1fbdd7493362e2b7b16f"} err="failed to get container status \"9857d79edbc795d8d39ef1841beb5f7db05e3c0a8f2d1fbdd7493362e2b7b16f\": rpc error: code = NotFound desc = an error occurred when try to find container \"9857d79edbc795d8d39ef1841beb5f7db05e3c0a8f2d1fbdd7493362e2b7b16f\": not found" Sep 10 23:30:22.076293 kubelet[2625]: I0910 23:30:22.076277 2625 scope.go:117] "RemoveContainer" containerID="e0b05a27a37d001ddfa9050a2900d60a1e4a8490f10607c77dbea965697c313b" Sep 10 23:30:22.076540 containerd[1502]: time="2025-09-10T23:30:22.076485041Z" level=error msg="ContainerStatus for 
\"e0b05a27a37d001ddfa9050a2900d60a1e4a8490f10607c77dbea965697c313b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e0b05a27a37d001ddfa9050a2900d60a1e4a8490f10607c77dbea965697c313b\": not found"
Sep 10 23:30:22.076656 kubelet[2625]: E0910 23:30:22.076632 2625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e0b05a27a37d001ddfa9050a2900d60a1e4a8490f10607c77dbea965697c313b\": not found" containerID="e0b05a27a37d001ddfa9050a2900d60a1e4a8490f10607c77dbea965697c313b"
Sep 10 23:30:22.076687 kubelet[2625]: I0910 23:30:22.076668 2625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e0b05a27a37d001ddfa9050a2900d60a1e4a8490f10607c77dbea965697c313b"} err="failed to get container status \"e0b05a27a37d001ddfa9050a2900d60a1e4a8490f10607c77dbea965697c313b\": rpc error: code = NotFound desc = an error occurred when try to find container \"e0b05a27a37d001ddfa9050a2900d60a1e4a8490f10607c77dbea965697c313b\": not found"
Sep 10 23:30:22.076719 kubelet[2625]: I0910 23:30:22.076689 2625 scope.go:117] "RemoveContainer" containerID="13b7f20f5762927990d79da13622cdbd4443cc962b46806ab10b75e3c4da0a67"
Sep 10 23:30:22.082437 containerd[1502]: time="2025-09-10T23:30:22.082364479Z" level=error msg="ContainerStatus for \"13b7f20f5762927990d79da13622cdbd4443cc962b46806ab10b75e3c4da0a67\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"13b7f20f5762927990d79da13622cdbd4443cc962b46806ab10b75e3c4da0a67\": not found"
Sep 10 23:30:22.082631 kubelet[2625]: E0910 23:30:22.082577 2625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"13b7f20f5762927990d79da13622cdbd4443cc962b46806ab10b75e3c4da0a67\": not found" containerID="13b7f20f5762927990d79da13622cdbd4443cc962b46806ab10b75e3c4da0a67"
Sep 10 23:30:22.082631 kubelet[2625]: I0910 23:30:22.082607 2625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"13b7f20f5762927990d79da13622cdbd4443cc962b46806ab10b75e3c4da0a67"} err="failed to get container status \"13b7f20f5762927990d79da13622cdbd4443cc962b46806ab10b75e3c4da0a67\": rpc error: code = NotFound desc = an error occurred when try to find container \"13b7f20f5762927990d79da13622cdbd4443cc962b46806ab10b75e3c4da0a67\": not found"
Sep 10 23:30:22.082631 kubelet[2625]: I0910 23:30:22.082625 2625 scope.go:117] "RemoveContainer" containerID="14c93f842664d437ac4bf51fd37c584f976365e1ce0618addb02d42cda37a4c5"
Sep 10 23:30:22.082973 containerd[1502]: time="2025-09-10T23:30:22.082940886Z" level=error msg="ContainerStatus for \"14c93f842664d437ac4bf51fd37c584f976365e1ce0618addb02d42cda37a4c5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"14c93f842664d437ac4bf51fd37c584f976365e1ce0618addb02d42cda37a4c5\": not found"
Sep 10 23:30:22.083220 kubelet[2625]: E0910 23:30:22.083196 2625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"14c93f842664d437ac4bf51fd37c584f976365e1ce0618addb02d42cda37a4c5\": not found" containerID="14c93f842664d437ac4bf51fd37c584f976365e1ce0618addb02d42cda37a4c5"
Sep 10 23:30:22.083274 kubelet[2625]: I0910 23:30:22.083225 2625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"14c93f842664d437ac4bf51fd37c584f976365e1ce0618addb02d42cda37a4c5"} err="failed to get container status \"14c93f842664d437ac4bf51fd37c584f976365e1ce0618addb02d42cda37a4c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"14c93f842664d437ac4bf51fd37c584f976365e1ce0618addb02d42cda37a4c5\": not found"
Sep 10 23:30:22.481226 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bbf1c5f21703bb37c4f32ebf8180e5ae15f44c5d3c7092b7dfcdebaf196b9ec5-shm.mount: Deactivated successfully.
Sep 10 23:30:22.481327 systemd[1]: var-lib-kubelet-pods-349b8366\x2dee22\x2d4ec2\x2d9ae9\x2d12cc4ab43318-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtfz4v.mount: Deactivated successfully.
Sep 10 23:30:22.481387 systemd[1]: var-lib-kubelet-pods-01cab590\x2d19c9\x2d419a\x2daf1c\x2d564072054707-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbs469.mount: Deactivated successfully.
Sep 10 23:30:22.481456 systemd[1]: var-lib-kubelet-pods-01cab590\x2d19c9\x2d419a\x2daf1c\x2d564072054707-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 10 23:30:22.481512 systemd[1]: var-lib-kubelet-pods-01cab590\x2d19c9\x2d419a\x2daf1c\x2d564072054707-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 10 23:30:22.791673 kubelet[2625]: E0910 23:30:22.791639 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:30:23.405844 sshd[4240]: Connection closed by 10.0.0.1 port 43628
Sep 10 23:30:23.406483 sshd-session[4237]: pam_unix(sshd:session): session closed for user core
Sep 10 23:30:23.421974 systemd[1]: sshd@22-10.0.0.56:22-10.0.0.1:43628.service: Deactivated successfully.
Sep 10 23:30:23.423976 systemd[1]: session-23.scope: Deactivated successfully.
Sep 10 23:30:23.424221 systemd[1]: session-23.scope: Consumed 1.536s CPU time, 24.1M memory peak.
Sep 10 23:30:23.424806 systemd-logind[1485]: Session 23 logged out. Waiting for processes to exit.
Sep 10 23:30:23.428071 systemd[1]: Started sshd@23-10.0.0.56:22-10.0.0.1:45564.service - OpenSSH per-connection server daemon (10.0.0.1:45564).
Sep 10 23:30:23.428900 systemd-logind[1485]: Removed session 23.
Sep 10 23:30:23.482393 sshd[4391]: Accepted publickey for core from 10.0.0.1 port 45564 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:30:23.483632 sshd-session[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:30:23.487793 systemd-logind[1485]: New session 24 of user core.
Sep 10 23:30:23.498611 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 10 23:30:23.793771 kubelet[2625]: I0910 23:30:23.793733 2625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01cab590-19c9-419a-af1c-564072054707" path="/var/lib/kubelet/pods/01cab590-19c9-419a-af1c-564072054707/volumes"
Sep 10 23:30:23.794287 kubelet[2625]: I0910 23:30:23.794259 2625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="349b8366-ee22-4ec2-9ae9-12cc4ab43318" path="/var/lib/kubelet/pods/349b8366-ee22-4ec2-9ae9-12cc4ab43318/volumes"
Sep 10 23:30:24.657810 sshd[4394]: Connection closed by 10.0.0.1 port 45564
Sep 10 23:30:24.658493 sshd-session[4391]: pam_unix(sshd:session): session closed for user core
Sep 10 23:30:24.671424 systemd[1]: sshd@23-10.0.0.56:22-10.0.0.1:45564.service: Deactivated successfully.
Sep 10 23:30:24.674408 systemd[1]: session-24.scope: Deactivated successfully.
Sep 10 23:30:24.674853 systemd[1]: session-24.scope: Consumed 1.061s CPU time, 23.8M memory peak.
Sep 10 23:30:24.678303 systemd-logind[1485]: Session 24 logged out. Waiting for processes to exit.
Sep 10 23:30:24.680698 kubelet[2625]: I0910 23:30:24.680660 2625 memory_manager.go:355] "RemoveStaleState removing state" podUID="01cab590-19c9-419a-af1c-564072054707" containerName="cilium-agent"
Sep 10 23:30:24.680698 kubelet[2625]: I0910 23:30:24.680687 2625 memory_manager.go:355] "RemoveStaleState removing state" podUID="349b8366-ee22-4ec2-9ae9-12cc4ab43318" containerName="cilium-operator"
Sep 10 23:30:24.693405 systemd[1]: Started sshd@24-10.0.0.56:22-10.0.0.1:45570.service - OpenSSH per-connection server daemon (10.0.0.1:45570).
Sep 10 23:30:24.694904 systemd-logind[1485]: Removed session 24.
Sep 10 23:30:24.713289 systemd[1]: Created slice kubepods-burstable-pod078ba6be_43ea_4b80_9496_ccc47ed0f3d6.slice - libcontainer container kubepods-burstable-pod078ba6be_43ea_4b80_9496_ccc47ed0f3d6.slice.
Sep 10 23:30:24.749297 sshd[4406]: Accepted publickey for core from 10.0.0.1 port 45570 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:30:24.750479 sshd-session[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:30:24.754481 systemd-logind[1485]: New session 25 of user core.
Sep 10 23:30:24.765626 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 10 23:30:24.805524 kubelet[2625]: I0910 23:30:24.805417 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/078ba6be-43ea-4b80-9496-ccc47ed0f3d6-bpf-maps\") pod \"cilium-pc644\" (UID: \"078ba6be-43ea-4b80-9496-ccc47ed0f3d6\") " pod="kube-system/cilium-pc644"
Sep 10 23:30:24.805524 kubelet[2625]: I0910 23:30:24.805499 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d969x\" (UniqueName: \"kubernetes.io/projected/078ba6be-43ea-4b80-9496-ccc47ed0f3d6-kube-api-access-d969x\") pod \"cilium-pc644\" (UID: \"078ba6be-43ea-4b80-9496-ccc47ed0f3d6\") " pod="kube-system/cilium-pc644"
Sep 10 23:30:24.805524 kubelet[2625]: I0910 23:30:24.805521 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/078ba6be-43ea-4b80-9496-ccc47ed0f3d6-cilium-run\") pod \"cilium-pc644\" (UID: \"078ba6be-43ea-4b80-9496-ccc47ed0f3d6\") " pod="kube-system/cilium-pc644"
Sep 10 23:30:24.805871 kubelet[2625]: I0910 23:30:24.805549 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/078ba6be-43ea-4b80-9496-ccc47ed0f3d6-clustermesh-secrets\") pod \"cilium-pc644\" (UID: \"078ba6be-43ea-4b80-9496-ccc47ed0f3d6\") " pod="kube-system/cilium-pc644"
Sep 10 23:30:24.805871 kubelet[2625]: I0910 23:30:24.805568 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/078ba6be-43ea-4b80-9496-ccc47ed0f3d6-hubble-tls\") pod \"cilium-pc644\" (UID: \"078ba6be-43ea-4b80-9496-ccc47ed0f3d6\") " pod="kube-system/cilium-pc644"
Sep 10 23:30:24.805871 kubelet[2625]: I0910 23:30:24.805607 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/078ba6be-43ea-4b80-9496-ccc47ed0f3d6-hostproc\") pod \"cilium-pc644\" (UID: \"078ba6be-43ea-4b80-9496-ccc47ed0f3d6\") " pod="kube-system/cilium-pc644"
Sep 10 23:30:24.805871 kubelet[2625]: I0910 23:30:24.805628 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/078ba6be-43ea-4b80-9496-ccc47ed0f3d6-lib-modules\") pod \"cilium-pc644\" (UID: \"078ba6be-43ea-4b80-9496-ccc47ed0f3d6\") " pod="kube-system/cilium-pc644"
Sep 10 23:30:24.805871 kubelet[2625]: I0910 23:30:24.805643 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/078ba6be-43ea-4b80-9496-ccc47ed0f3d6-xtables-lock\") pod \"cilium-pc644\" (UID: \"078ba6be-43ea-4b80-9496-ccc47ed0f3d6\") " pod="kube-system/cilium-pc644"
Sep 10 23:30:24.805871 kubelet[2625]: I0910 23:30:24.805685 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/078ba6be-43ea-4b80-9496-ccc47ed0f3d6-host-proc-sys-kernel\") pod \"cilium-pc644\" (UID: \"078ba6be-43ea-4b80-9496-ccc47ed0f3d6\") " pod="kube-system/cilium-pc644"
Sep 10 23:30:24.805989 kubelet[2625]: I0910 23:30:24.805706 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/078ba6be-43ea-4b80-9496-ccc47ed0f3d6-cilium-cgroup\") pod \"cilium-pc644\" (UID: \"078ba6be-43ea-4b80-9496-ccc47ed0f3d6\") " pod="kube-system/cilium-pc644"
Sep 10 23:30:24.805989 kubelet[2625]: I0910 23:30:24.805720 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/078ba6be-43ea-4b80-9496-ccc47ed0f3d6-etc-cni-netd\") pod \"cilium-pc644\" (UID: \"078ba6be-43ea-4b80-9496-ccc47ed0f3d6\") " pod="kube-system/cilium-pc644"
Sep 10 23:30:24.805989 kubelet[2625]: I0910 23:30:24.805761 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/078ba6be-43ea-4b80-9496-ccc47ed0f3d6-host-proc-sys-net\") pod \"cilium-pc644\" (UID: \"078ba6be-43ea-4b80-9496-ccc47ed0f3d6\") " pod="kube-system/cilium-pc644"
Sep 10 23:30:24.805989 kubelet[2625]: I0910 23:30:24.805781 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/078ba6be-43ea-4b80-9496-ccc47ed0f3d6-cni-path\") pod \"cilium-pc644\" (UID: \"078ba6be-43ea-4b80-9496-ccc47ed0f3d6\") " pod="kube-system/cilium-pc644"
Sep 10 23:30:24.805989 kubelet[2625]: I0910 23:30:24.805797 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/078ba6be-43ea-4b80-9496-ccc47ed0f3d6-cilium-config-path\") pod \"cilium-pc644\" (UID: \"078ba6be-43ea-4b80-9496-ccc47ed0f3d6\") " pod="kube-system/cilium-pc644"
Sep 10 23:30:24.805989 kubelet[2625]: I0910 23:30:24.805837 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/078ba6be-43ea-4b80-9496-ccc47ed0f3d6-cilium-ipsec-secrets\") pod \"cilium-pc644\" (UID: \"078ba6be-43ea-4b80-9496-ccc47ed0f3d6\") " pod="kube-system/cilium-pc644"
Sep 10 23:30:24.814013 sshd[4409]: Connection closed by 10.0.0.1 port 45570
Sep 10 23:30:24.814497 sshd-session[4406]: pam_unix(sshd:session): session closed for user core
Sep 10 23:30:24.833261 systemd[1]: sshd@24-10.0.0.56:22-10.0.0.1:45570.service: Deactivated successfully.
Sep 10 23:30:24.835049 systemd[1]: session-25.scope: Deactivated successfully.
Sep 10 23:30:24.836042 systemd-logind[1485]: Session 25 logged out. Waiting for processes to exit.
Sep 10 23:30:24.838161 systemd-logind[1485]: Removed session 25.
Sep 10 23:30:24.839799 systemd[1]: Started sshd@25-10.0.0.56:22-10.0.0.1:45580.service - OpenSSH per-connection server daemon (10.0.0.1:45580).
Sep 10 23:30:24.893414 sshd[4416]: Accepted publickey for core from 10.0.0.1 port 45580 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:30:24.894649 sshd-session[4416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:30:24.899178 systemd-logind[1485]: New session 26 of user core.
Sep 10 23:30:24.912496 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 10 23:30:25.018408 kubelet[2625]: E0910 23:30:25.018360 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:30:25.019088 containerd[1502]: time="2025-09-10T23:30:25.018877637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pc644,Uid:078ba6be-43ea-4b80-9496-ccc47ed0f3d6,Namespace:kube-system,Attempt:0,}"
Sep 10 23:30:25.041222 containerd[1502]: time="2025-09-10T23:30:25.041128110Z" level=info msg="connecting to shim db437868703545e06afed21f74a7976cd512ed9b062253238234518ef453edd9" address="unix:///run/containerd/s/fa4c5831870b8ce35b1c74aff0574f48fe50b73e00614ff7169d935d4df3c92a" namespace=k8s.io protocol=ttrpc version=3
Sep 10 23:30:25.065577 systemd[1]: Started cri-containerd-db437868703545e06afed21f74a7976cd512ed9b062253238234518ef453edd9.scope - libcontainer container db437868703545e06afed21f74a7976cd512ed9b062253238234518ef453edd9.
Sep 10 23:30:25.084125 containerd[1502]: time="2025-09-10T23:30:25.084089477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pc644,Uid:078ba6be-43ea-4b80-9496-ccc47ed0f3d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"db437868703545e06afed21f74a7976cd512ed9b062253238234518ef453edd9\""
Sep 10 23:30:25.084712 kubelet[2625]: E0910 23:30:25.084687 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:30:25.087575 containerd[1502]: time="2025-09-10T23:30:25.087419558Z" level=info msg="CreateContainer within sandbox \"db437868703545e06afed21f74a7976cd512ed9b062253238234518ef453edd9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 10 23:30:25.095477 containerd[1502]: time="2025-09-10T23:30:25.095442856Z" level=info msg="Container 32ab2246fa3fe840d6f22aecb9139a23f18d44ea5a736448e1a640be5abd2de7: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:30:25.100640 containerd[1502]: time="2025-09-10T23:30:25.100606559Z" level=info msg="CreateContainer within sandbox \"db437868703545e06afed21f74a7976cd512ed9b062253238234518ef453edd9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"32ab2246fa3fe840d6f22aecb9139a23f18d44ea5a736448e1a640be5abd2de7\""
Sep 10 23:30:25.101099 containerd[1502]: time="2025-09-10T23:30:25.101074925Z" level=info msg="StartContainer for \"32ab2246fa3fe840d6f22aecb9139a23f18d44ea5a736448e1a640be5abd2de7\""
Sep 10 23:30:25.101847 containerd[1502]: time="2025-09-10T23:30:25.101823494Z" level=info msg="connecting to shim 32ab2246fa3fe840d6f22aecb9139a23f18d44ea5a736448e1a640be5abd2de7" address="unix:///run/containerd/s/fa4c5831870b8ce35b1c74aff0574f48fe50b73e00614ff7169d935d4df3c92a" protocol=ttrpc version=3
Sep 10 23:30:25.119701 systemd[1]: Started cri-containerd-32ab2246fa3fe840d6f22aecb9139a23f18d44ea5a736448e1a640be5abd2de7.scope - libcontainer container 32ab2246fa3fe840d6f22aecb9139a23f18d44ea5a736448e1a640be5abd2de7.
Sep 10 23:30:25.145444 containerd[1502]: time="2025-09-10T23:30:25.145329548Z" level=info msg="StartContainer for \"32ab2246fa3fe840d6f22aecb9139a23f18d44ea5a736448e1a640be5abd2de7\" returns successfully"
Sep 10 23:30:25.153234 systemd[1]: cri-containerd-32ab2246fa3fe840d6f22aecb9139a23f18d44ea5a736448e1a640be5abd2de7.scope: Deactivated successfully.
Sep 10 23:30:25.155322 containerd[1502]: time="2025-09-10T23:30:25.155287350Z" level=info msg="TaskExit event in podsandbox handler container_id:\"32ab2246fa3fe840d6f22aecb9139a23f18d44ea5a736448e1a640be5abd2de7\" id:\"32ab2246fa3fe840d6f22aecb9139a23f18d44ea5a736448e1a640be5abd2de7\" pid:4487 exited_at:{seconds:1757547025 nanos:154965026}"
Sep 10 23:30:25.155536 containerd[1502]: time="2025-09-10T23:30:25.155337710Z" level=info msg="received exit event container_id:\"32ab2246fa3fe840d6f22aecb9139a23f18d44ea5a736448e1a640be5abd2de7\" id:\"32ab2246fa3fe840d6f22aecb9139a23f18d44ea5a736448e1a640be5abd2de7\" pid:4487 exited_at:{seconds:1757547025 nanos:154965026}"
Sep 10 23:30:25.863850 kubelet[2625]: E0910 23:30:25.863799 2625 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 10 23:30:26.031808 kubelet[2625]: E0910 23:30:26.031778 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:30:26.035102 containerd[1502]: time="2025-09-10T23:30:26.035065566Z" level=info msg="CreateContainer within sandbox \"db437868703545e06afed21f74a7976cd512ed9b062253238234518ef453edd9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 10 23:30:26.043144 containerd[1502]: time="2025-09-10T23:30:26.043061381Z" level=info msg="Container 48d38ec3b80c6b73ff2d18b1d7fcbbfdf259ee27e253938f410ea9348ea5287a: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:30:26.043782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2069917372.mount: Deactivated successfully.
Sep 10 23:30:26.049659 containerd[1502]: time="2025-09-10T23:30:26.049624340Z" level=info msg="CreateContainer within sandbox \"db437868703545e06afed21f74a7976cd512ed9b062253238234518ef453edd9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"48d38ec3b80c6b73ff2d18b1d7fcbbfdf259ee27e253938f410ea9348ea5287a\""
Sep 10 23:30:26.050320 containerd[1502]: time="2025-09-10T23:30:26.050298188Z" level=info msg="StartContainer for \"48d38ec3b80c6b73ff2d18b1d7fcbbfdf259ee27e253938f410ea9348ea5287a\""
Sep 10 23:30:26.051935 containerd[1502]: time="2025-09-10T23:30:26.051906687Z" level=info msg="connecting to shim 48d38ec3b80c6b73ff2d18b1d7fcbbfdf259ee27e253938f410ea9348ea5287a" address="unix:///run/containerd/s/fa4c5831870b8ce35b1c74aff0574f48fe50b73e00614ff7169d935d4df3c92a" protocol=ttrpc version=3
Sep 10 23:30:26.073602 systemd[1]: Started cri-containerd-48d38ec3b80c6b73ff2d18b1d7fcbbfdf259ee27e253938f410ea9348ea5287a.scope - libcontainer container 48d38ec3b80c6b73ff2d18b1d7fcbbfdf259ee27e253938f410ea9348ea5287a.
Sep 10 23:30:26.102783 containerd[1502]: time="2025-09-10T23:30:26.102745655Z" level=info msg="StartContainer for \"48d38ec3b80c6b73ff2d18b1d7fcbbfdf259ee27e253938f410ea9348ea5287a\" returns successfully"
Sep 10 23:30:26.106925 systemd[1]: cri-containerd-48d38ec3b80c6b73ff2d18b1d7fcbbfdf259ee27e253938f410ea9348ea5287a.scope: Deactivated successfully.
Sep 10 23:30:26.107379 containerd[1502]: time="2025-09-10T23:30:26.107345910Z" level=info msg="received exit event container_id:\"48d38ec3b80c6b73ff2d18b1d7fcbbfdf259ee27e253938f410ea9348ea5287a\" id:\"48d38ec3b80c6b73ff2d18b1d7fcbbfdf259ee27e253938f410ea9348ea5287a\" pid:4532 exited_at:{seconds:1757547026 nanos:107104587}"
Sep 10 23:30:26.107598 containerd[1502]: time="2025-09-10T23:30:26.107572833Z" level=info msg="TaskExit event in podsandbox handler container_id:\"48d38ec3b80c6b73ff2d18b1d7fcbbfdf259ee27e253938f410ea9348ea5287a\" id:\"48d38ec3b80c6b73ff2d18b1d7fcbbfdf259ee27e253938f410ea9348ea5287a\" pid:4532 exited_at:{seconds:1757547026 nanos:107104587}"
Sep 10 23:30:26.914237 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48d38ec3b80c6b73ff2d18b1d7fcbbfdf259ee27e253938f410ea9348ea5287a-rootfs.mount: Deactivated successfully.
Sep 10 23:30:27.037391 kubelet[2625]: E0910 23:30:27.037342 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:30:27.047315 containerd[1502]: time="2025-09-10T23:30:27.047268577Z" level=info msg="CreateContainer within sandbox \"db437868703545e06afed21f74a7976cd512ed9b062253238234518ef453edd9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 10 23:30:27.066452 containerd[1502]: time="2025-09-10T23:30:27.066150677Z" level=info msg="Container 4dbb80201bdd1f208bcf3d6cffd3538deb919643fe9281ff4d2525671f1fa562: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:30:27.072748 containerd[1502]: time="2025-09-10T23:30:27.072701073Z" level=info msg="CreateContainer within sandbox \"db437868703545e06afed21f74a7976cd512ed9b062253238234518ef453edd9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4dbb80201bdd1f208bcf3d6cffd3538deb919643fe9281ff4d2525671f1fa562\""
Sep 10 23:30:27.073516 containerd[1502]: time="2025-09-10T23:30:27.073220160Z" level=info msg="StartContainer for \"4dbb80201bdd1f208bcf3d6cffd3538deb919643fe9281ff4d2525671f1fa562\""
Sep 10 23:30:27.074512 containerd[1502]: time="2025-09-10T23:30:27.074487694Z" level=info msg="connecting to shim 4dbb80201bdd1f208bcf3d6cffd3538deb919643fe9281ff4d2525671f1fa562" address="unix:///run/containerd/s/fa4c5831870b8ce35b1c74aff0574f48fe50b73e00614ff7169d935d4df3c92a" protocol=ttrpc version=3
Sep 10 23:30:27.111655 systemd[1]: Started cri-containerd-4dbb80201bdd1f208bcf3d6cffd3538deb919643fe9281ff4d2525671f1fa562.scope - libcontainer container 4dbb80201bdd1f208bcf3d6cffd3538deb919643fe9281ff4d2525671f1fa562.
Sep 10 23:30:27.151046 systemd[1]: cri-containerd-4dbb80201bdd1f208bcf3d6cffd3538deb919643fe9281ff4d2525671f1fa562.scope: Deactivated successfully.
Sep 10 23:30:27.151752 containerd[1502]: time="2025-09-10T23:30:27.151714275Z" level=info msg="received exit event container_id:\"4dbb80201bdd1f208bcf3d6cffd3538deb919643fe9281ff4d2525671f1fa562\" id:\"4dbb80201bdd1f208bcf3d6cffd3538deb919643fe9281ff4d2525671f1fa562\" pid:4576 exited_at:{seconds:1757547027 nanos:151023627}"
Sep 10 23:30:27.151968 containerd[1502]: time="2025-09-10T23:30:27.151894957Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4dbb80201bdd1f208bcf3d6cffd3538deb919643fe9281ff4d2525671f1fa562\" id:\"4dbb80201bdd1f208bcf3d6cffd3538deb919643fe9281ff4d2525671f1fa562\" pid:4576 exited_at:{seconds:1757547027 nanos:151023627}"
Sep 10 23:30:27.161763 containerd[1502]: time="2025-09-10T23:30:27.161731272Z" level=info msg="StartContainer for \"4dbb80201bdd1f208bcf3d6cffd3538deb919643fe9281ff4d2525671f1fa562\" returns successfully"
Sep 10 23:30:27.295865 kubelet[2625]: I0910 23:30:27.295824 2625 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-10T23:30:27Z","lastTransitionTime":"2025-09-10T23:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 10 23:30:27.914254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4dbb80201bdd1f208bcf3d6cffd3538deb919643fe9281ff4d2525671f1fa562-rootfs.mount: Deactivated successfully.
Sep 10 23:30:28.043242 kubelet[2625]: E0910 23:30:28.042801 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:30:28.047740 containerd[1502]: time="2025-09-10T23:30:28.047691273Z" level=info msg="CreateContainer within sandbox \"db437868703545e06afed21f74a7976cd512ed9b062253238234518ef453edd9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 10 23:30:28.060713 containerd[1502]: time="2025-09-10T23:30:28.059378606Z" level=info msg="Container 036cd3987dc6985918d12adef4f1036f63af43f8165a9b8a55584b78e7f24cca: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:30:28.066242 containerd[1502]: time="2025-09-10T23:30:28.066202804Z" level=info msg="CreateContainer within sandbox \"db437868703545e06afed21f74a7976cd512ed9b062253238234518ef453edd9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"036cd3987dc6985918d12adef4f1036f63af43f8165a9b8a55584b78e7f24cca\""
Sep 10 23:30:28.066939 containerd[1502]: time="2025-09-10T23:30:28.066890252Z" level=info msg="StartContainer for \"036cd3987dc6985918d12adef4f1036f63af43f8165a9b8a55584b78e7f24cca\""
Sep 10 23:30:28.068073 containerd[1502]: time="2025-09-10T23:30:28.068006785Z" level=info msg="connecting to shim 036cd3987dc6985918d12adef4f1036f63af43f8165a9b8a55584b78e7f24cca" address="unix:///run/containerd/s/fa4c5831870b8ce35b1c74aff0574f48fe50b73e00614ff7169d935d4df3c92a" protocol=ttrpc version=3
Sep 10 23:30:28.091593 systemd[1]: Started cri-containerd-036cd3987dc6985918d12adef4f1036f63af43f8165a9b8a55584b78e7f24cca.scope - libcontainer container 036cd3987dc6985918d12adef4f1036f63af43f8165a9b8a55584b78e7f24cca.
Sep 10 23:30:28.111338 systemd[1]: cri-containerd-036cd3987dc6985918d12adef4f1036f63af43f8165a9b8a55584b78e7f24cca.scope: Deactivated successfully.
Sep 10 23:30:28.114522 containerd[1502]: time="2025-09-10T23:30:28.114475313Z" level=info msg="TaskExit event in podsandbox handler container_id:\"036cd3987dc6985918d12adef4f1036f63af43f8165a9b8a55584b78e7f24cca\" id:\"036cd3987dc6985918d12adef4f1036f63af43f8165a9b8a55584b78e7f24cca\" pid:4615 exited_at:{seconds:1757547028 nanos:113606303}"
Sep 10 23:30:28.115960 containerd[1502]: time="2025-09-10T23:30:28.115630087Z" level=info msg="received exit event container_id:\"036cd3987dc6985918d12adef4f1036f63af43f8165a9b8a55584b78e7f24cca\" id:\"036cd3987dc6985918d12adef4f1036f63af43f8165a9b8a55584b78e7f24cca\" pid:4615 exited_at:{seconds:1757547028 nanos:113606303}"
Sep 10 23:30:28.122794 containerd[1502]: time="2025-09-10T23:30:28.122633046Z" level=info msg="StartContainer for \"036cd3987dc6985918d12adef4f1036f63af43f8165a9b8a55584b78e7f24cca\" returns successfully"
Sep 10 23:30:28.124790 containerd[1502]: time="2025-09-10T23:30:28.120702504Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod078ba6be_43ea_4b80_9496_ccc47ed0f3d6.slice/cri-containerd-036cd3987dc6985918d12adef4f1036f63af43f8165a9b8a55584b78e7f24cca.scope/memory.events\": no such file or directory"
Sep 10 23:30:28.135141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-036cd3987dc6985918d12adef4f1036f63af43f8165a9b8a55584b78e7f24cca-rootfs.mount: Deactivated successfully.
Sep 10 23:30:29.050485 kubelet[2625]: E0910 23:30:29.050451 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:30:29.056463 containerd[1502]: time="2025-09-10T23:30:29.055300406Z" level=info msg="CreateContainer within sandbox \"db437868703545e06afed21f74a7976cd512ed9b062253238234518ef453edd9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 10 23:30:29.066918 containerd[1502]: time="2025-09-10T23:30:29.066167526Z" level=info msg="Container fe46bc368f3d8221927f6719275aef29c8ab8d81dca4af8453f4fdb44c7e0061: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:30:29.072501 containerd[1502]: time="2025-09-10T23:30:29.072468596Z" level=info msg="CreateContainer within sandbox \"db437868703545e06afed21f74a7976cd512ed9b062253238234518ef453edd9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fe46bc368f3d8221927f6719275aef29c8ab8d81dca4af8453f4fdb44c7e0061\""
Sep 10 23:30:29.073124 containerd[1502]: time="2025-09-10T23:30:29.073043483Z" level=info msg="StartContainer for \"fe46bc368f3d8221927f6719275aef29c8ab8d81dca4af8453f4fdb44c7e0061\""
Sep 10 23:30:29.073857 containerd[1502]: time="2025-09-10T23:30:29.073832212Z" level=info msg="connecting to shim fe46bc368f3d8221927f6719275aef29c8ab8d81dca4af8453f4fdb44c7e0061" address="unix:///run/containerd/s/fa4c5831870b8ce35b1c74aff0574f48fe50b73e00614ff7169d935d4df3c92a" protocol=ttrpc version=3
Sep 10 23:30:29.092592 systemd[1]: Started cri-containerd-fe46bc368f3d8221927f6719275aef29c8ab8d81dca4af8453f4fdb44c7e0061.scope - libcontainer container fe46bc368f3d8221927f6719275aef29c8ab8d81dca4af8453f4fdb44c7e0061.
Sep 10 23:30:29.120624 containerd[1502]: time="2025-09-10T23:30:29.120589531Z" level=info msg="StartContainer for \"fe46bc368f3d8221927f6719275aef29c8ab8d81dca4af8453f4fdb44c7e0061\" returns successfully"
Sep 10 23:30:29.185100 containerd[1502]: time="2025-09-10T23:30:29.185029967Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe46bc368f3d8221927f6719275aef29c8ab8d81dca4af8453f4fdb44c7e0061\" id:\"9f299fbc2e4f79a78c9d4c095c5dfbd417deb375e3cb6c744882645c46c715e4\" pid:4685 exited_at:{seconds:1757547029 nanos:184731323}"
Sep 10 23:30:29.382507 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 10 23:30:30.057252 kubelet[2625]: E0910 23:30:30.057129 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:30:30.072684 kubelet[2625]: I0910 23:30:30.072607 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pc644" podStartSLOduration=6.072590646 podStartE2EDuration="6.072590646s" podCreationTimestamp="2025-09-10 23:30:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:30:30.072013519 +0000 UTC m=+84.356075907" watchObservedRunningTime="2025-09-10 23:30:30.072590646 +0000 UTC m=+84.356653034"
Sep 10 23:30:31.059173 kubelet[2625]: E0910 23:30:31.059128 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:30:31.396735 containerd[1502]: time="2025-09-10T23:30:31.396623135Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe46bc368f3d8221927f6719275aef29c8ab8d81dca4af8453f4fdb44c7e0061\" id:\"8cb535a1ce4c13108a5bc2abd7a1703fa6542df4829d46d78060902e870e4bab\" pid:4960 exit_status:1 exited_at:{seconds:1757547031 nanos:396187050}"
Sep 10 23:30:31.410207 kubelet[2625]: E0910 23:30:31.410153 2625 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:52844->127.0.0.1:45281: write tcp 127.0.0.1:52844->127.0.0.1:45281: write: broken pipe
Sep 10 23:30:32.231418 systemd-networkd[1424]: lxc_health: Link UP
Sep 10 23:30:32.232025 systemd-networkd[1424]: lxc_health: Gained carrier
Sep 10 23:30:33.021518 kubelet[2625]: E0910 23:30:33.021459 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:30:33.062769 kubelet[2625]: E0910 23:30:33.062718 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:30:33.509510 containerd[1502]: time="2025-09-10T23:30:33.509255989Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe46bc368f3d8221927f6719275aef29c8ab8d81dca4af8453f4fdb44c7e0061\" id:\"cb43bbf5dd1fde28d3f8fed8139562b672b56ad12bf8bebe5be31fe33e5a9233\" pid:5223 exited_at:{seconds:1757547033 nanos:508966226}"
Sep 10 23:30:34.064540 kubelet[2625]: E0910 23:30:34.064384 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:30:34.084645 systemd-networkd[1424]: lxc_health: Gained IPv6LL
Sep 10 23:30:35.651951 containerd[1502]: time="2025-09-10T23:30:35.651872390Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe46bc368f3d8221927f6719275aef29c8ab8d81dca4af8453f4fdb44c7e0061\" id:\"bfdab501dd6baaa6f44fb48d50ef8fbbd5124fac1729d9e7745adbbe19a00da9\" pid:5249 exited_at:{seconds:1757547035 nanos:651211623}"
Sep 10 23:30:35.654698 kubelet[2625]: E0910 23:30:35.654511 2625 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:52852->127.0.0.1:45281: read tcp 127.0.0.1:52852->127.0.0.1:45281: read: connection reset by peer
Sep 10 23:30:35.654698 kubelet[2625]: E0910 23:30:35.654639 2625 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:52852->127.0.0.1:45281: write tcp 127.0.0.1:52852->127.0.0.1:45281: write: broken pipe
Sep 10 23:30:35.793450 kubelet[2625]: E0910 23:30:35.793404 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:30:37.752709 containerd[1502]: time="2025-09-10T23:30:37.752669509Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe46bc368f3d8221927f6719275aef29c8ab8d81dca4af8453f4fdb44c7e0061\" id:\"90653057e5b5e2b74ba98f688382ce7d8b563957cdd87fff9efa9ed0b035fa3e\" pid:5280 exited_at:{seconds:1757547037 nanos:751839781}"
Sep 10 23:30:37.783887 sshd[4423]: Connection closed by 10.0.0.1 port 45580
Sep 10 23:30:37.784660 sshd-session[4416]: pam_unix(sshd:session): session closed for user core
Sep 10 23:30:37.788326 systemd[1]: sshd@25-10.0.0.56:22-10.0.0.1:45580.service: Deactivated successfully.
Sep 10 23:30:37.789998 systemd[1]: session-26.scope: Deactivated successfully.
Sep 10 23:30:37.790771 systemd-logind[1485]: Session 26 logged out. Waiting for processes to exit.
Sep 10 23:30:37.791895 systemd-logind[1485]: Removed session 26.