Sep 8 23:45:58.794034 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 8 23:45:58.794054 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Mon Sep 8 22:10:01 -00 2025
Sep 8 23:45:58.794063 kernel: KASLR enabled
Sep 8 23:45:58.794068 kernel: efi: EFI v2.7 by EDK II
Sep 8 23:45:58.794074 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 8 23:45:58.794079 kernel: random: crng init done
Sep 8 23:45:58.794086 kernel: secureboot: Secure boot disabled
Sep 8 23:45:58.794091 kernel: ACPI: Early table checksum verification disabled
Sep 8 23:45:58.794097 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 8 23:45:58.794104 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 8 23:45:58.794110 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:45:58.794116 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:45:58.794121 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:45:58.794127 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:45:58.794134 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:45:58.794142 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:45:58.794148 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:45:58.794154 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:45:58.794160 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:45:58.794166 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 8 23:45:58.794172 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 8 23:45:58.794178 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 8 23:45:58.794184 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 8 23:45:58.794189 kernel: Zone ranges:
Sep 8 23:45:58.794195 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 8 23:45:58.794202 kernel: DMA32 empty
Sep 8 23:45:58.794208 kernel: Normal empty
Sep 8 23:45:58.794214 kernel: Device empty
Sep 8 23:45:58.794220 kernel: Movable zone start for each node
Sep 8 23:45:58.794226 kernel: Early memory node ranges
Sep 8 23:45:58.794232 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 8 23:45:58.794238 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 8 23:45:58.794244 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 8 23:45:58.794250 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 8 23:45:58.794256 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 8 23:45:58.794262 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 8 23:45:58.794268 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 8 23:45:58.794276 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 8 23:45:58.794282 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 8 23:45:58.794288 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 8 23:45:58.794296 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 8 23:45:58.794303 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 8 23:45:58.794309 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 8 23:45:58.794317 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 8 23:45:58.794323 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 8 23:45:58.794330 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 8 23:45:58.794336 kernel: psci: probing for conduit method from ACPI.
Sep 8 23:45:58.794342 kernel: psci: PSCIv1.1 detected in firmware.
Sep 8 23:45:58.794348 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 8 23:45:58.794355 kernel: psci: Trusted OS migration not required
Sep 8 23:45:58.794372 kernel: psci: SMC Calling Convention v1.1
Sep 8 23:45:58.794379 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 8 23:45:58.794385 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 8 23:45:58.794393 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 8 23:45:58.794400 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 8 23:45:58.794406 kernel: Detected PIPT I-cache on CPU0
Sep 8 23:45:58.794413 kernel: CPU features: detected: GIC system register CPU interface
Sep 8 23:45:58.794419 kernel: CPU features: detected: Spectre-v4
Sep 8 23:45:58.794425 kernel: CPU features: detected: Spectre-BHB
Sep 8 23:45:58.794432 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 8 23:45:58.794438 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 8 23:45:58.794445 kernel: CPU features: detected: ARM erratum 1418040
Sep 8 23:45:58.794452 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 8 23:45:58.794458 kernel: alternatives: applying boot alternatives
Sep 8 23:45:58.794466 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=56d35272d6799b20efe64172ddb761aa9d752bf4ee92cd36e6693ce5e7a3b83d
Sep 8 23:45:58.794473 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 8 23:45:58.794480 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 8 23:45:58.794487 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 8 23:45:58.794493 kernel: Fallback order for Node 0: 0
Sep 8 23:45:58.794500 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 8 23:45:58.794506 kernel: Policy zone: DMA
Sep 8 23:45:58.794512 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 8 23:45:58.794519 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 8 23:45:58.794526 kernel: software IO TLB: area num 4.
Sep 8 23:45:58.794532 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 8 23:45:58.794539 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 8 23:45:58.794546 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 8 23:45:58.794553 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 8 23:45:58.794560 kernel: rcu: RCU event tracing is enabled.
Sep 8 23:45:58.794566 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 8 23:45:58.794573 kernel: Trampoline variant of Tasks RCU enabled.
Sep 8 23:45:58.794579 kernel: Tracing variant of Tasks RCU enabled.
Sep 8 23:45:58.794591 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 8 23:45:58.794599 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 8 23:45:58.794605 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 8 23:45:58.794612 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 8 23:45:58.794618 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 8 23:45:58.794631 kernel: GICv3: 256 SPIs implemented
Sep 8 23:45:58.794638 kernel: GICv3: 0 Extended SPIs implemented
Sep 8 23:45:58.794644 kernel: Root IRQ handler: gic_handle_irq
Sep 8 23:45:58.794651 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 8 23:45:58.794658 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 8 23:45:58.794666 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 8 23:45:58.794672 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 8 23:45:58.794679 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 8 23:45:58.794688 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 8 23:45:58.794696 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 8 23:45:58.794702 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 8 23:45:58.794709 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 8 23:45:58.794717 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 8 23:45:58.794724 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 8 23:45:58.794731 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 8 23:45:58.794738 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 8 23:45:58.794746 kernel: arm-pv: using stolen time PV
Sep 8 23:45:58.794754 kernel: Console: colour dummy device 80x25
Sep 8 23:45:58.794761 kernel: ACPI: Core revision 20240827
Sep 8 23:45:58.794768 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 8 23:45:58.794774 kernel: pid_max: default: 32768 minimum: 301
Sep 8 23:45:58.794781 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 8 23:45:58.794789 kernel: landlock: Up and running.
Sep 8 23:45:58.794795 kernel: SELinux: Initializing.
Sep 8 23:45:58.794802 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 8 23:45:58.794808 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 8 23:45:58.794815 kernel: rcu: Hierarchical SRCU implementation.
Sep 8 23:45:58.794822 kernel: rcu: Max phase no-delay instances is 400.
Sep 8 23:45:58.794828 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 8 23:45:58.794835 kernel: Remapping and enabling EFI services.
Sep 8 23:45:58.794842 kernel: smp: Bringing up secondary CPUs ...
Sep 8 23:45:58.794854 kernel: Detected PIPT I-cache on CPU1
Sep 8 23:45:58.794861 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 8 23:45:58.794868 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 8 23:45:58.794876 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 8 23:45:58.794884 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 8 23:45:58.794891 kernel: Detected PIPT I-cache on CPU2
Sep 8 23:45:58.794898 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 8 23:45:58.794905 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 8 23:45:58.794914 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 8 23:45:58.794921 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 8 23:45:58.794928 kernel: Detected PIPT I-cache on CPU3
Sep 8 23:45:58.794939 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 8 23:45:58.794946 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 8 23:45:58.794953 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 8 23:45:58.794960 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 8 23:45:58.794967 kernel: smp: Brought up 1 node, 4 CPUs
Sep 8 23:45:58.794974 kernel: SMP: Total of 4 processors activated.
Sep 8 23:45:58.794982 kernel: CPU: All CPU(s) started at EL1
Sep 8 23:45:58.794989 kernel: CPU features: detected: 32-bit EL0 Support
Sep 8 23:45:58.794996 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 8 23:45:58.795014 kernel: CPU features: detected: Common not Private translations
Sep 8 23:45:58.795021 kernel: CPU features: detected: CRC32 instructions
Sep 8 23:45:58.795028 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 8 23:45:58.795035 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 8 23:45:58.795042 kernel: CPU features: detected: LSE atomic instructions
Sep 8 23:45:58.795049 kernel: CPU features: detected: Privileged Access Never
Sep 8 23:45:58.795057 kernel: CPU features: detected: RAS Extension Support
Sep 8 23:45:58.795063 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 8 23:45:58.795075 kernel: alternatives: applying system-wide alternatives
Sep 8 23:45:58.795082 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 8 23:45:58.795090 kernel: Memory: 2424544K/2572288K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38912K init, 1038K bss, 125408K reserved, 16384K cma-reserved)
Sep 8 23:45:58.795097 kernel: devtmpfs: initialized
Sep 8 23:45:58.795104 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 8 23:45:58.795112 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 8 23:45:58.795119 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 8 23:45:58.795128 kernel: 0 pages in range for non-PLT usage
Sep 8 23:45:58.795135 kernel: 508576 pages in range for PLT usage
Sep 8 23:45:58.795142 kernel: pinctrl core: initialized pinctrl subsystem
Sep 8 23:45:58.795149 kernel: SMBIOS 3.0.0 present.
Sep 8 23:45:58.795155 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 8 23:45:58.795163 kernel: DMI: Memory slots populated: 1/1
Sep 8 23:45:58.795170 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 8 23:45:58.795177 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 8 23:45:58.795184 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 8 23:45:58.795193 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 8 23:45:58.795200 kernel: audit: initializing netlink subsys (disabled)
Sep 8 23:45:58.795207 kernel: audit: type=2000 audit(0.021:1): state=initialized audit_enabled=0 res=1
Sep 8 23:45:58.795214 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 8 23:45:58.795222 kernel: cpuidle: using governor menu
Sep 8 23:45:58.795229 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 8 23:45:58.795236 kernel: ASID allocator initialised with 32768 entries
Sep 8 23:45:58.795243 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 8 23:45:58.795250 kernel: Serial: AMBA PL011 UART driver
Sep 8 23:45:58.795258 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 8 23:45:58.795265 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 8 23:45:58.795272 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 8 23:45:58.795279 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 8 23:45:58.795286 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 8 23:45:58.795293 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 8 23:45:58.795300 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 8 23:45:58.795307 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 8 23:45:58.795313 kernel: ACPI: Added _OSI(Module Device)
Sep 8 23:45:58.795321 kernel: ACPI: Added _OSI(Processor Device)
Sep 8 23:45:58.795329 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 8 23:45:58.795335 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 8 23:45:58.795342 kernel: ACPI: Interpreter enabled
Sep 8 23:45:58.795349 kernel: ACPI: Using GIC for interrupt routing
Sep 8 23:45:58.795356 kernel: ACPI: MCFG table detected, 1 entries
Sep 8 23:45:58.795441 kernel: ACPI: CPU0 has been hot-added
Sep 8 23:45:58.795448 kernel: ACPI: CPU1 has been hot-added
Sep 8 23:45:58.795455 kernel: ACPI: CPU2 has been hot-added
Sep 8 23:45:58.795461 kernel: ACPI: CPU3 has been hot-added
Sep 8 23:45:58.795471 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 8 23:45:58.795478 kernel: printk: legacy console [ttyAMA0] enabled
Sep 8 23:45:58.795485 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 8 23:45:58.795639 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 8 23:45:58.795706 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 8 23:45:58.795764 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 8 23:45:58.795819 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 8 23:45:58.795877 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 8 23:45:58.795886 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 8 23:45:58.795894 kernel: PCI host bridge to bus 0000:00
Sep 8 23:45:58.795957 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 8 23:45:58.796014 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 8 23:45:58.796066 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 8 23:45:58.796116 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 8 23:45:58.796198 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 8 23:45:58.796269 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 8 23:45:58.796329 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 8 23:45:58.796401 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 8 23:45:58.796460 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 8 23:45:58.796527 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 8 23:45:58.796599 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 8 23:45:58.796666 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 8 23:45:58.796719 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 8 23:45:58.796782 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 8 23:45:58.796833 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 8 23:45:58.796842 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 8 23:45:58.796850 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 8 23:45:58.796857 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 8 23:45:58.796866 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 8 23:45:58.796873 kernel: iommu: Default domain type: Translated
Sep 8 23:45:58.796880 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 8 23:45:58.796887 kernel: efivars: Registered efivars operations
Sep 8 23:45:58.796894 kernel: vgaarb: loaded
Sep 8 23:45:58.796901 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 8 23:45:58.796907 kernel: VFS: Disk quotas dquot_6.6.0
Sep 8 23:45:58.796914 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 8 23:45:58.796921 kernel: pnp: PnP ACPI init
Sep 8 23:45:58.796989 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 8 23:45:58.796999 kernel: pnp: PnP ACPI: found 1 devices
Sep 8 23:45:58.797006 kernel: NET: Registered PF_INET protocol family
Sep 8 23:45:58.797013 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 8 23:45:58.797020 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 8 23:45:58.797028 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 8 23:45:58.797035 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 8 23:45:58.797042 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 8 23:45:58.797051 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 8 23:45:58.797058 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 8 23:45:58.797065 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 8 23:45:58.797072 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 8 23:45:58.797079 kernel: PCI: CLS 0 bytes, default 64
Sep 8 23:45:58.797085 kernel: kvm [1]: HYP mode not available
Sep 8 23:45:58.797092 kernel: Initialise system trusted keyrings
Sep 8 23:45:58.797099 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 8 23:45:58.797106 kernel: Key type asymmetric registered
Sep 8 23:45:58.797115 kernel: Asymmetric key parser 'x509' registered
Sep 8 23:45:58.797122 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 8 23:45:58.797129 kernel: io scheduler mq-deadline registered
Sep 8 23:45:58.797136 kernel: io scheduler kyber registered
Sep 8 23:45:58.797143 kernel: io scheduler bfq registered
Sep 8 23:45:58.797150 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 8 23:45:58.797158 kernel: ACPI: button: Power Button [PWRB]
Sep 8 23:45:58.797165 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 8 23:45:58.797223 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 8 23:45:58.797233 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 8 23:45:58.797240 kernel: thunder_xcv, ver 1.0
Sep 8 23:45:58.797247 kernel: thunder_bgx, ver 1.0
Sep 8 23:45:58.797254 kernel: nicpf, ver 1.0
Sep 8 23:45:58.797261 kernel: nicvf, ver 1.0
Sep 8 23:45:58.797325 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 8 23:45:58.797405 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-08T23:45:58 UTC (1757375158)
Sep 8 23:45:58.797416 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 8 23:45:58.797423 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 8 23:45:58.797432 kernel: watchdog: NMI not fully supported
Sep 8 23:45:58.797439 kernel: watchdog: Hard watchdog permanently disabled
Sep 8 23:45:58.797446 kernel: NET: Registered PF_INET6 protocol family
Sep 8 23:45:58.797453 kernel: Segment Routing with IPv6
Sep 8 23:45:58.797463 kernel: In-situ OAM (IOAM) with IPv6
Sep 8 23:45:58.797473 kernel: NET: Registered PF_PACKET protocol family
Sep 8 23:45:58.797482 kernel: Key type dns_resolver registered
Sep 8 23:45:58.797491 kernel: registered taskstats version 1
Sep 8 23:45:58.797498 kernel: Loading compiled-in X.509 certificates
Sep 8 23:45:58.797507 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: a394eaa34ffd7f1371a823c439a0662c32ae9397'
Sep 8 23:45:58.797514 kernel: Demotion targets for Node 0: null
Sep 8 23:45:58.797521 kernel: Key type .fscrypt registered
Sep 8 23:45:58.797528 kernel: Key type fscrypt-provisioning registered
Sep 8 23:45:58.797536 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 8 23:45:58.797543 kernel: ima: Allocated hash algorithm: sha1
Sep 8 23:45:58.797551 kernel: ima: No architecture policies found
Sep 8 23:45:58.797558 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 8 23:45:58.797566 kernel: clk: Disabling unused clocks
Sep 8 23:45:58.797574 kernel: PM: genpd: Disabling unused power domains
Sep 8 23:45:58.797581 kernel: Warning: unable to open an initial console.
Sep 8 23:45:58.797594 kernel: Freeing unused kernel memory: 38912K
Sep 8 23:45:58.797601 kernel: Run /init as init process
Sep 8 23:45:58.797608 kernel: with arguments:
Sep 8 23:45:58.797615 kernel: /init
Sep 8 23:45:58.797622 kernel: with environment:
Sep 8 23:45:58.797629 kernel: HOME=/
Sep 8 23:45:58.797636 kernel: TERM=linux
Sep 8 23:45:58.797644 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 8 23:45:58.797652 systemd[1]: Successfully made /usr/ read-only.
Sep 8 23:45:58.797662 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 8 23:45:58.797670 systemd[1]: Detected virtualization kvm.
Sep 8 23:45:58.797678 systemd[1]: Detected architecture arm64.
Sep 8 23:45:58.797685 systemd[1]: Running in initrd.
Sep 8 23:45:58.797693 systemd[1]: No hostname configured, using default hostname.
Sep 8 23:45:58.797702 systemd[1]: Hostname set to .
Sep 8 23:45:58.797709 systemd[1]: Initializing machine ID from VM UUID.
Sep 8 23:45:58.797716 systemd[1]: Queued start job for default target initrd.target.
Sep 8 23:45:58.797724 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 8 23:45:58.797732 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 8 23:45:58.797740 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 8 23:45:58.797748 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 8 23:45:58.797756 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 8 23:45:58.797765 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 8 23:45:58.797774 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 8 23:45:58.797782 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 8 23:45:58.797789 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 8 23:45:58.797797 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 8 23:45:58.797805 systemd[1]: Reached target paths.target - Path Units.
Sep 8 23:45:58.797813 systemd[1]: Reached target slices.target - Slice Units.
Sep 8 23:45:58.797822 systemd[1]: Reached target swap.target - Swaps.
Sep 8 23:45:58.797830 systemd[1]: Reached target timers.target - Timer Units.
Sep 8 23:45:58.797838 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 8 23:45:58.797846 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 8 23:45:58.797853 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 8 23:45:58.797861 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 8 23:45:58.797868 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 8 23:45:58.797876 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 8 23:45:58.797885 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 8 23:45:58.797893 systemd[1]: Reached target sockets.target - Socket Units.
Sep 8 23:45:58.797900 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 8 23:45:58.797908 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 8 23:45:58.797915 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 8 23:45:58.797928 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 8 23:45:58.797936 systemd[1]: Starting systemd-fsck-usr.service...
Sep 8 23:45:58.797944 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 8 23:45:58.797952 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 8 23:45:58.797961 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 8 23:45:58.797969 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 8 23:45:58.797977 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 8 23:45:58.797984 systemd[1]: Finished systemd-fsck-usr.service.
Sep 8 23:45:58.797994 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 8 23:45:58.798019 systemd-journald[244]: Collecting audit messages is disabled.
Sep 8 23:45:58.798038 systemd-journald[244]: Journal started
Sep 8 23:45:58.798058 systemd-journald[244]: Runtime Journal (/run/log/journal/8625f194b40a4fa98ff60b0d707badaa) is 6M, max 48.5M, 42.4M free.
Sep 8 23:45:58.806448 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 8 23:45:58.806495 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 8 23:45:58.794981 systemd-modules-load[245]: Inserted module 'overlay'
Sep 8 23:45:58.810849 kernel: Bridge firewalling registered
Sep 8 23:45:58.809074 systemd-modules-load[245]: Inserted module 'br_netfilter'
Sep 8 23:45:58.814703 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 8 23:45:58.814777 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 8 23:45:58.817155 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 8 23:45:58.821210 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 8 23:45:58.823211 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 8 23:45:58.824983 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 8 23:45:58.835493 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 8 23:45:58.840532 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 8 23:45:58.844409 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 8 23:45:58.844515 systemd-tmpfiles[272]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 8 23:45:58.847645 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 8 23:45:58.850624 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 8 23:45:58.854085 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 8 23:45:58.857347 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 8 23:45:58.883869 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=56d35272d6799b20efe64172ddb761aa9d752bf4ee92cd36e6693ce5e7a3b83d
Sep 8 23:45:58.898192 systemd-resolved[288]: Positive Trust Anchors:
Sep 8 23:45:58.898213 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 8 23:45:58.898250 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 8 23:45:58.903225 systemd-resolved[288]: Defaulting to hostname 'linux'.
Sep 8 23:45:58.904193 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 8 23:45:58.908636 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 8 23:45:58.957399 kernel: SCSI subsystem initialized Sep 8 23:45:58.962381 kernel: Loading iSCSI transport class v2.0-870. Sep 8 23:45:58.970387 kernel: iscsi: registered transport (tcp) Sep 8 23:45:58.983415 kernel: iscsi: registered transport (qla4xxx) Sep 8 23:45:58.983472 kernel: QLogic iSCSI HBA Driver Sep 8 23:45:59.000153 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 8 23:45:59.018953 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 8 23:45:59.020598 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 8 23:45:59.067929 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 8 23:45:59.070461 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 8 23:45:59.135396 kernel: raid6: neonx8 gen() 15512 MB/s Sep 8 23:45:59.152384 kernel: raid6: neonx4 gen() 15654 MB/s Sep 8 23:45:59.169380 kernel: raid6: neonx2 gen() 13217 MB/s Sep 8 23:45:59.186389 kernel: raid6: neonx1 gen() 10434 MB/s Sep 8 23:45:59.203395 kernel: raid6: int64x8 gen() 6836 MB/s Sep 8 23:45:59.220380 kernel: raid6: int64x4 gen() 7335 MB/s Sep 8 23:45:59.237379 kernel: raid6: int64x2 gen() 6139 MB/s Sep 8 23:45:59.254378 kernel: raid6: int64x1 gen() 5052 MB/s Sep 8 23:45:59.254393 kernel: raid6: using algorithm neonx4 gen() 15654 MB/s
Sep 8 23:45:59.271386 kernel: raid6: .... xor() 12324 MB/s, rmw enabled Sep 8 23:45:59.271409 kernel: raid6: using neon recovery algorithm Sep 8 23:45:59.276577 kernel: xor: measuring software checksum speed Sep 8 23:45:59.276601 kernel: 8regs : 21624 MB/sec Sep 8 23:45:59.277662 kernel: 32regs : 21693 MB/sec Sep 8 23:45:59.277675 kernel: arm64_neon : 28128 MB/sec Sep 8 23:45:59.277684 kernel: xor: using function: arm64_neon (28128 MB/sec) Sep 8 23:45:59.329398 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 8 23:45:59.336723 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 8 23:45:59.339281 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 8 23:45:59.375726 systemd-udevd[498]: Using default interface naming scheme 'v255'. Sep 8 23:45:59.381957 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:45:59.384610 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 8 23:45:59.407445 dracut-pre-trigger[505]: rd.md=0: removing MD RAID activation Sep 8 23:45:59.430790 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 8 23:45:59.432990 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 8 23:45:59.487404 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:45:59.490335 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 8 23:45:59.547127 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Sep 8 23:45:59.547395 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 8 23:45:59.549049 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 8 23:45:59.549181 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:45:59.556077 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 8 23:45:59.556098 kernel: GPT:9289727 != 19775487 Sep 8 23:45:59.556121 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 8 23:45:59.556130 kernel: GPT:9289727 != 19775487 Sep 8 23:45:59.556139 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 8 23:45:59.556147 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:45:59.554939 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:45:59.559295 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:45:59.594344 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 8 23:45:59.595883 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 8 23:45:59.598103 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:45:59.607382 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 8 23:45:59.614984 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 8 23:45:59.621232 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 8 23:45:59.622555 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 8 23:45:59.624896 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 8 23:45:59.627911 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:45:59.630078 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 8 23:45:59.632986 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 8 23:45:59.634918 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 8 23:45:59.651059 disk-uuid[591]: Primary Header is updated. 
Sep 8 23:45:59.651059 disk-uuid[591]: Secondary Entries is updated. Sep 8 23:45:59.651059 disk-uuid[591]: Secondary Header is updated. Sep 8 23:45:59.655892 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:45:59.653746 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 8 23:46:00.663449 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:46:00.664179 disk-uuid[597]: The operation has completed successfully. Sep 8 23:46:00.701852 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 8 23:46:00.702943 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 8 23:46:00.721258 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 8 23:46:00.740344 sh[611]: Success Sep 8 23:46:00.754102 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 8 23:46:00.754149 kernel: device-mapper: uevent: version 1.0.3 Sep 8 23:46:00.754168 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 8 23:46:00.762416 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 8 23:46:00.786389 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 8 23:46:00.789221 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 8 23:46:00.809273 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 8 23:46:00.816376 kernel: BTRFS: device fsid b6aa4556-53d3-40d0-8c29-11204db15da4 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (623) Sep 8 23:46:00.819700 kernel: BTRFS info (device dm-0): first mount of filesystem b6aa4556-53d3-40d0-8c29-11204db15da4 Sep 8 23:46:00.819722 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 8 23:46:00.823567 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 8 23:46:00.823630 kernel: BTRFS info (device dm-0): enabling free space tree Sep 8 23:46:00.825610 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 8 23:46:00.826914 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 8 23:46:00.828404 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 8 23:46:00.829138 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 8 23:46:00.834303 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 8 23:46:00.857407 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (653) Sep 8 23:46:00.859881 kernel: BTRFS info (device vda6): first mount of filesystem 0ac87192-1b33-43df-818c-9161f04c3e9c Sep 8 23:46:00.859914 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 8 23:46:00.863470 kernel: BTRFS info (device vda6): turning on async discard Sep 8 23:46:00.863508 kernel: BTRFS info (device vda6): enabling free space tree Sep 8 23:46:00.869386 kernel: BTRFS info (device vda6): last unmount of filesystem 0ac87192-1b33-43df-818c-9161f04c3e9c Sep 8 23:46:00.872525 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 8 23:46:00.875000 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 8 23:46:00.943229 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 8 23:46:00.946767 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 8 23:46:00.987468 systemd-networkd[800]: lo: Link UP Sep 8 23:46:00.987479 systemd-networkd[800]: lo: Gained carrier Sep 8 23:46:00.988206 systemd-networkd[800]: Enumeration completed Sep 8 23:46:00.989203 ignition[705]: Ignition 2.21.0 Sep 8 23:46:00.988477 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 8 23:46:00.989210 ignition[705]: Stage: fetch-offline Sep 8 23:46:00.989019 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:46:00.989244 ignition[705]: no configs at "/usr/lib/ignition/base.d" Sep 8 23:46:00.989023 systemd-networkd[800]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 8 23:46:00.989251 ignition[705]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:46:00.990002 systemd-networkd[800]: eth0: Link UP Sep 8 23:46:00.989449 ignition[705]: parsed url from cmdline: "" Sep 8 23:46:00.990099 systemd-networkd[800]: eth0: Gained carrier Sep 8 23:46:00.989452 ignition[705]: no config URL provided Sep 8 23:46:00.990108 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:46:00.989457 ignition[705]: reading system config file "/usr/lib/ignition/user.ign" Sep 8 23:46:00.990902 systemd[1]: Reached target network.target - Network. 
Sep 8 23:46:00.989467 ignition[705]: no config at "/usr/lib/ignition/user.ign" Sep 8 23:46:01.007454 systemd-networkd[800]: eth0: DHCPv4 address 10.0.0.103/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 8 23:46:00.989487 ignition[705]: op(1): [started] loading QEMU firmware config module Sep 8 23:46:00.989492 ignition[705]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 8 23:46:01.003577 ignition[705]: op(1): [finished] loading QEMU firmware config module Sep 8 23:46:01.050697 ignition[705]: parsing config with SHA512: 660742f018eb6ccfcc69cacf9cb73dbd9a68f160ab9d87ca0ef41586de7f897049eda1fc75f28cbe8eccfd98c02c4f3367bcc6ca4d91b10a10d02997160d78a7 Sep 8 23:46:01.055481 unknown[705]: fetched base config from "system" Sep 8 23:46:01.055492 unknown[705]: fetched user config from "qemu" Sep 8 23:46:01.055931 ignition[705]: fetch-offline: fetch-offline passed Sep 8 23:46:01.055988 ignition[705]: Ignition finished successfully Sep 8 23:46:01.057898 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 8 23:46:01.059748 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 8 23:46:01.060468 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 8 23:46:01.090466 ignition[812]: Ignition 2.21.0 Sep 8 23:46:01.090481 ignition[812]: Stage: kargs Sep 8 23:46:01.091392 ignition[812]: no configs at "/usr/lib/ignition/base.d" Sep 8 23:46:01.091405 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:46:01.092869 ignition[812]: kargs: kargs passed Sep 8 23:46:01.092920 ignition[812]: Ignition finished successfully Sep 8 23:46:01.098427 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 8 23:46:01.101342 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 8 23:46:01.126952 ignition[821]: Ignition 2.21.0 Sep 8 23:46:01.126972 ignition[821]: Stage: disks Sep 8 23:46:01.127117 ignition[821]: no configs at "/usr/lib/ignition/base.d" Sep 8 23:46:01.127126 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:46:01.128984 ignition[821]: disks: disks passed Sep 8 23:46:01.130644 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 8 23:46:01.129060 ignition[821]: Ignition finished successfully Sep 8 23:46:01.132287 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 8 23:46:01.133650 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 8 23:46:01.135316 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 8 23:46:01.137244 systemd[1]: Reached target sysinit.target - System Initialization. Sep 8 23:46:01.139186 systemd[1]: Reached target basic.target - Basic System. Sep 8 23:46:01.141783 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 8 23:46:01.165117 systemd-fsck[830]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 8 23:46:01.170243 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 8 23:46:01.172524 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 8 23:46:01.249387 kernel: EXT4-fs (vda9): mounted filesystem 12f0e8f7-98bc-449e-b11f-df07384be687 r/w with ordered data mode. Quota mode: none. Sep 8 23:46:01.249466 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 8 23:46:01.250823 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 8 23:46:01.254491 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 8 23:46:01.257241 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 8 23:46:01.258392 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 8 23:46:01.258434 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 8 23:46:01.258457 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 8 23:46:01.267944 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 8 23:46:01.270346 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 8 23:46:01.275393 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (838) Sep 8 23:46:01.277373 kernel: BTRFS info (device vda6): first mount of filesystem 0ac87192-1b33-43df-818c-9161f04c3e9c Sep 8 23:46:01.277404 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 8 23:46:01.279697 kernel: BTRFS info (device vda6): turning on async discard Sep 8 23:46:01.279749 kernel: BTRFS info (device vda6): enabling free space tree Sep 8 23:46:01.281400 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 8 23:46:01.304444 initrd-setup-root[862]: cut: /sysroot/etc/passwd: No such file or directory Sep 8 23:46:01.309609 initrd-setup-root[869]: cut: /sysroot/etc/group: No such file or directory Sep 8 23:46:01.314434 initrd-setup-root[876]: cut: /sysroot/etc/shadow: No such file or directory Sep 8 23:46:01.317859 initrd-setup-root[883]: cut: /sysroot/etc/gshadow: No such file or directory Sep 8 23:46:01.385618 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 8 23:46:01.388094 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 8 23:46:01.390526 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 8 23:46:01.406384 kernel: BTRFS info (device vda6): last unmount of filesystem 0ac87192-1b33-43df-818c-9161f04c3e9c Sep 8 23:46:01.424535 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 8 23:46:01.436694 ignition[952]: INFO : Ignition 2.21.0 Sep 8 23:46:01.436694 ignition[952]: INFO : Stage: mount Sep 8 23:46:01.439300 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 8 23:46:01.439300 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:46:01.439300 ignition[952]: INFO : mount: mount passed Sep 8 23:46:01.439300 ignition[952]: INFO : Ignition finished successfully Sep 8 23:46:01.440731 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 8 23:46:01.444454 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 8 23:46:01.817623 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 8 23:46:01.819067 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 8 23:46:01.853375 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (963) Sep 8 23:46:01.855425 kernel: BTRFS info (device vda6): first mount of filesystem 0ac87192-1b33-43df-818c-9161f04c3e9c Sep 8 23:46:01.855457 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 8 23:46:01.861389 kernel: BTRFS info (device vda6): turning on async discard Sep 8 23:46:01.861428 kernel: BTRFS info (device vda6): enabling free space tree Sep 8 23:46:01.862677 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 8 23:46:01.889496 ignition[980]: INFO : Ignition 2.21.0 Sep 8 23:46:01.889496 ignition[980]: INFO : Stage: files Sep 8 23:46:01.891230 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 8 23:46:01.891230 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:46:01.893708 ignition[980]: DEBUG : files: compiled without relabeling support, skipping Sep 8 23:46:01.895781 ignition[980]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 8 23:46:01.895781 ignition[980]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 8 23:46:01.899661 ignition[980]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 8 23:46:01.901151 ignition[980]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 8 23:46:01.901151 ignition[980]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 8 23:46:01.900324 unknown[980]: wrote ssh authorized keys file for user: core Sep 8 23:46:01.905440 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 8 23:46:01.907692 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Sep 8 23:46:02.460529 systemd-networkd[800]: eth0: Gained IPv6LL Sep 8 23:46:02.788675 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 8 23:46:03.236912 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 8 23:46:03.236912 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 8 23:46:03.241068 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 8 23:46:03.356474 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 8 23:46:03.490461 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 8 23:46:03.490461 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 8 23:46:03.494940 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 8 23:46:03.494940 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 8 23:46:03.494940 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 8 23:46:03.494940 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 8 23:46:03.494940 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 8 23:46:03.494940 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 8 23:46:03.494940 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 8 23:46:03.494940 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 8 23:46:03.494940 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 8 23:46:03.494940 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 8 23:46:03.513834 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 8 23:46:03.513834 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 8 23:46:03.513834 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Sep 8 23:46:03.760159 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 8 23:46:04.530614 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 8 23:46:04.530614 ignition[980]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 8 23:46:04.535140 ignition[980]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 8 23:46:04.535140 ignition[980]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 8 23:46:04.535140 ignition[980]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 8 23:46:04.535140 ignition[980]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 8 23:46:04.535140 ignition[980]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 8 23:46:04.535140 ignition[980]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 8 23:46:04.535140 ignition[980]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 8 23:46:04.535140 ignition[980]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 8 23:46:04.551560 ignition[980]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 8 23:46:04.554968 ignition[980]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 8 23:46:04.557500 ignition[980]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 8 23:46:04.557500 ignition[980]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 8 23:46:04.557500 ignition[980]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 8 23:46:04.557500 ignition[980]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 8 23:46:04.557500 ignition[980]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 8 23:46:04.557500 ignition[980]: INFO : files: files passed Sep 8 23:46:04.557500 ignition[980]: INFO : Ignition finished successfully Sep 8 23:46:04.561421 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 8 23:46:04.565134 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 8 23:46:04.567508 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 8 23:46:04.581723 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 8 23:46:04.581829 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 8 23:46:04.585289 initrd-setup-root-after-ignition[1009]: grep: /sysroot/oem/oem-release: No such file or directory Sep 8 23:46:04.586681 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 8 23:46:04.586681 initrd-setup-root-after-ignition[1011]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 8 23:46:04.592016 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 8 23:46:04.587680 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 8 23:46:04.590706 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 8 23:46:04.593965 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 8 23:46:04.637385 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 8 23:46:04.638484 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 8 23:46:04.639967 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 8 23:46:04.641891 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 8 23:46:04.643812 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 8 23:46:04.644601 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 8 23:46:04.670410 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 8 23:46:04.672849 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 8 23:46:04.693331 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 8 23:46:04.694612 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:46:04.696640 systemd[1]: Stopped target timers.target - Timer Units. 
Sep 8 23:46:04.698428 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 8 23:46:04.698553 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 8 23:46:04.701279 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 8 23:46:04.702471 systemd[1]: Stopped target basic.target - Basic System. Sep 8 23:46:04.704337 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 8 23:46:04.706285 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 8 23:46:04.708135 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 8 23:46:04.710171 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 8 23:46:04.712260 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 8 23:46:04.714203 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 8 23:46:04.716343 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 8 23:46:04.718196 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 8 23:46:04.720264 systemd[1]: Stopped target swap.target - Swaps. Sep 8 23:46:04.721916 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 8 23:46:04.722044 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 8 23:46:04.724420 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 8 23:46:04.726334 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 8 23:46:04.728355 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 8 23:46:04.728473 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 8 23:46:04.730578 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 8 23:46:04.730693 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Sep 8 23:46:04.733303 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 8 23:46:04.733435 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 8 23:46:04.735916 systemd[1]: Stopped target paths.target - Path Units. Sep 8 23:46:04.737497 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 8 23:46:04.737632 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 8 23:46:04.739636 systemd[1]: Stopped target slices.target - Slice Units. Sep 8 23:46:04.741320 systemd[1]: Stopped target sockets.target - Socket Units. Sep 8 23:46:04.743284 systemd[1]: iscsid.socket: Deactivated successfully. Sep 8 23:46:04.743384 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 8 23:46:04.745041 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 8 23:46:04.745115 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 8 23:46:04.746981 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 8 23:46:04.747091 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 8 23:46:04.749552 systemd[1]: ignition-files.service: Deactivated successfully. Sep 8 23:46:04.749666 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 8 23:46:04.752060 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 8 23:46:04.753709 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 8 23:46:04.753832 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 8 23:46:04.756424 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 8 23:46:04.757353 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 8 23:46:04.757510 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Sep 8 23:46:04.759547 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 8 23:46:04.759663 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 8 23:46:04.765354 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 8 23:46:04.769597 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 8 23:46:04.777793 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 8 23:46:04.790489 ignition[1036]: INFO : Ignition 2.21.0 Sep 8 23:46:04.790489 ignition[1036]: INFO : Stage: umount Sep 8 23:46:04.794151 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 8 23:46:04.794151 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:46:04.794151 ignition[1036]: INFO : umount: umount passed Sep 8 23:46:04.794151 ignition[1036]: INFO : Ignition finished successfully Sep 8 23:46:04.794591 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 8 23:46:04.794690 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 8 23:46:04.796458 systemd[1]: Stopped target network.target - Network. Sep 8 23:46:04.798230 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 8 23:46:04.798287 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 8 23:46:04.799936 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 8 23:46:04.799977 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 8 23:46:04.801591 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 8 23:46:04.801643 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 8 23:46:04.803477 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 8 23:46:04.803520 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 8 23:46:04.805528 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Sep 8 23:46:04.807215 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 8 23:46:04.814115 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 8 23:46:04.814243 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 8 23:46:04.817315 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 8 23:46:04.817595 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 8 23:46:04.817633 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 8 23:46:04.821182 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 8 23:46:04.826435 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 8 23:46:04.826558 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 8 23:46:04.830420 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 8 23:46:04.830529 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 8 23:46:04.831877 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 8 23:46:04.831908 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 8 23:46:04.835833 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 8 23:46:04.837672 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 8 23:46:04.837736 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 8 23:46:04.839863 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 8 23:46:04.839908 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 8 23:46:04.842994 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 8 23:46:04.843035 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 8 23:46:04.845371 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 8 23:46:04.850007 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 8 23:46:04.864153 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 8 23:46:04.864559 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 8 23:46:04.866727 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 8 23:46:04.866828 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 8 23:46:04.868758 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 8 23:46:04.868833 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 8 23:46:04.871062 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 8 23:46:04.871119 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 8 23:46:04.872259 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 8 23:46:04.872289 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 8 23:46:04.874123 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 8 23:46:04.874175 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 8 23:46:04.877083 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 8 23:46:04.877137 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 8 23:46:04.879243 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 8 23:46:04.879299 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 8 23:46:04.882338 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 8 23:46:04.882411 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 8 23:46:04.885188 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 8 23:46:04.886320 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 8 23:46:04.886386 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 8 23:46:04.891635 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 8 23:46:04.891676 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 8 23:46:04.897439 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 8 23:46:04.897480 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 8 23:46:04.907685 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 8 23:46:04.907800 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 8 23:46:04.910081 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 8 23:46:04.913345 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 8 23:46:04.938209 systemd[1]: Switching root.
Sep 8 23:46:04.973769 systemd-journald[244]: Journal stopped
Sep 8 23:46:05.777938 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Sep 8 23:46:05.777987 kernel: SELinux: policy capability network_peer_controls=1
Sep 8 23:46:05.778001 kernel: SELinux: policy capability open_perms=1
Sep 8 23:46:05.778012 kernel: SELinux: policy capability extended_socket_class=1
Sep 8 23:46:05.778026 kernel: SELinux: policy capability always_check_network=0
Sep 8 23:46:05.778035 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 8 23:46:05.778043 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 8 23:46:05.778052 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 8 23:46:05.778061 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 8 23:46:05.778075 kernel: SELinux: policy capability userspace_initial_context=0
Sep 8 23:46:05.778085 kernel: audit: type=1403 audit(1757375165.184:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 8 23:46:05.778101 systemd[1]: Successfully loaded SELinux policy in 65.239ms.
Sep 8 23:46:05.778117 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.430ms.
Sep 8 23:46:05.778128 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 8 23:46:05.778139 systemd[1]: Detected virtualization kvm.
Sep 8 23:46:05.778149 systemd[1]: Detected architecture arm64.
Sep 8 23:46:05.778158 systemd[1]: Detected first boot.
Sep 8 23:46:05.778168 systemd[1]: Initializing machine ID from VM UUID.
Sep 8 23:46:05.778178 zram_generator::config[1083]: No configuration found.
Sep 8 23:46:05.778188 kernel: NET: Registered PF_VSOCK protocol family
Sep 8 23:46:05.778199 systemd[1]: Populated /etc with preset unit settings.
Sep 8 23:46:05.778210 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 8 23:46:05.778220 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 8 23:46:05.778231 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 8 23:46:05.778294 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 8 23:46:05.778309 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 8 23:46:05.778319 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 8 23:46:05.778329 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 8 23:46:05.778340 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 8 23:46:05.778353 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 8 23:46:05.778379 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 8 23:46:05.778391 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 8 23:46:05.778401 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 8 23:46:05.778411 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 8 23:46:05.778421 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 8 23:46:05.778431 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 8 23:46:05.778441 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 8 23:46:05.778453 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 8 23:46:05.778464 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 8 23:46:05.778474 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 8 23:46:05.778484 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 8 23:46:05.778494 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 8 23:46:05.778504 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 8 23:46:05.778513 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 8 23:46:05.778523 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 8 23:46:05.778534 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 8 23:46:05.778543 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 8 23:46:05.778553 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 8 23:46:05.778569 systemd[1]: Reached target slices.target - Slice Units.
Sep 8 23:46:05.778582 systemd[1]: Reached target swap.target - Swaps.
Sep 8 23:46:05.778593 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 8 23:46:05.778602 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 8 23:46:05.778613 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 8 23:46:05.778623 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 8 23:46:05.778634 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 8 23:46:05.778645 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 8 23:46:05.778654 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 8 23:46:05.778664 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 8 23:46:05.778674 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 8 23:46:05.778683 systemd[1]: Mounting media.mount - External Media Directory...
Sep 8 23:46:05.778693 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 8 23:46:05.778703 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 8 23:46:05.778713 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 8 23:46:05.778724 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 8 23:46:05.778734 systemd[1]: Reached target machines.target - Containers.
Sep 8 23:46:05.778745 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 8 23:46:05.778755 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 8 23:46:05.778764 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 8 23:46:05.778774 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 8 23:46:05.778784 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 8 23:46:05.778794 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 8 23:46:05.778804 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 8 23:46:05.778815 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 8 23:46:05.778825 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 8 23:46:05.778835 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 8 23:46:05.778845 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 8 23:46:05.778855 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 8 23:46:05.778864 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 8 23:46:05.778874 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 8 23:46:05.778884 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 8 23:46:05.778895 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 8 23:46:05.778905 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 8 23:46:05.778915 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 8 23:46:05.778925 kernel: loop: module loaded
Sep 8 23:46:05.778934 kernel: fuse: init (API version 7.41)
Sep 8 23:46:05.778944 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 8 23:46:05.778954 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 8 23:46:05.778963 kernel: ACPI: bus type drm_connector registered
Sep 8 23:46:05.778972 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 8 23:46:05.778983 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 8 23:46:05.778993 systemd[1]: Stopped verity-setup.service.
Sep 8 23:46:05.779003 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 8 23:46:05.779013 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 8 23:46:05.779022 systemd[1]: Mounted media.mount - External Media Directory.
Sep 8 23:46:05.779061 systemd-journald[1157]: Collecting audit messages is disabled.
Sep 8 23:46:05.779084 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 8 23:46:05.779095 systemd-journald[1157]: Journal started
Sep 8 23:46:05.779115 systemd-journald[1157]: Runtime Journal (/run/log/journal/8625f194b40a4fa98ff60b0d707badaa) is 6M, max 48.5M, 42.4M free.
Sep 8 23:46:05.557687 systemd[1]: Queued start job for default target multi-user.target.
Sep 8 23:46:05.580326 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 8 23:46:05.580724 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 8 23:46:05.782587 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 8 23:46:05.783226 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 8 23:46:05.784625 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 8 23:46:05.787445 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 8 23:46:05.789456 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 8 23:46:05.791703 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 8 23:46:05.792453 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 8 23:46:05.793958 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 8 23:46:05.794136 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 8 23:46:05.795671 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 8 23:46:05.795831 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 8 23:46:05.797221 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 8 23:46:05.797402 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 8 23:46:05.799072 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 8 23:46:05.800409 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 8 23:46:05.801984 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 8 23:46:05.802142 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 8 23:46:05.805399 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 8 23:46:05.806898 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 8 23:46:05.808607 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 8 23:46:05.810336 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 8 23:46:05.819471 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 8 23:46:05.824992 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 8 23:46:05.827299 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 8 23:46:05.829315 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 8 23:46:05.830651 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 8 23:46:05.830685 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 8 23:46:05.832584 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 8 23:46:05.839685 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 8 23:46:05.840946 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 8 23:46:05.842403 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 8 23:46:05.844671 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 8 23:46:05.846349 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 8 23:46:05.847454 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 8 23:46:05.848656 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 8 23:46:05.850499 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 8 23:46:05.853602 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 8 23:46:05.857630 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 8 23:46:05.860250 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 8 23:46:05.862680 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 8 23:46:05.864962 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 8 23:46:05.867844 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 8 23:46:05.869308 systemd-journald[1157]: Time spent on flushing to /var/log/journal/8625f194b40a4fa98ff60b0d707badaa is 15.522ms for 894 entries.
Sep 8 23:46:05.869308 systemd-journald[1157]: System Journal (/var/log/journal/8625f194b40a4fa98ff60b0d707badaa) is 8M, max 195.6M, 187.6M free.
Sep 8 23:46:05.893702 systemd-journald[1157]: Received client request to flush runtime journal.
Sep 8 23:46:05.893756 kernel: loop0: detected capacity change from 0 to 100608
Sep 8 23:46:05.893779 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 8 23:46:05.870692 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 8 23:46:05.877447 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 8 23:46:05.899640 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 8 23:46:05.903187 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 8 23:46:05.905165 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 8 23:46:05.909654 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 8 23:46:05.915490 kernel: loop1: detected capacity change from 0 to 119320
Sep 8 23:46:05.931797 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Sep 8 23:46:05.931814 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Sep 8 23:46:05.935238 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 8 23:46:05.946473 kernel: loop2: detected capacity change from 0 to 207008
Sep 8 23:46:05.978402 kernel: loop3: detected capacity change from 0 to 100608
Sep 8 23:46:05.983381 kernel: loop4: detected capacity change from 0 to 119320
Sep 8 23:46:05.989381 kernel: loop5: detected capacity change from 0 to 207008
Sep 8 23:46:05.993706 (sd-merge)[1225]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 8 23:46:05.994077 (sd-merge)[1225]: Merged extensions into '/usr'.
Sep 8 23:46:05.997776 systemd[1]: Reload requested from client PID 1202 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 8 23:46:05.997935 systemd[1]: Reloading...
Sep 8 23:46:06.064598 zram_generator::config[1251]: No configuration found.
Sep 8 23:46:06.117925 ldconfig[1197]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 8 23:46:06.208097 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 8 23:46:06.208429 systemd[1]: Reloading finished in 210 ms.
Sep 8 23:46:06.238202 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 8 23:46:06.239801 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 8 23:46:06.254638 systemd[1]: Starting ensure-sysext.service...
Sep 8 23:46:06.256466 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 8 23:46:06.271637 systemd[1]: Reload requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)...
Sep 8 23:46:06.271654 systemd[1]: Reloading...
Sep 8 23:46:06.274253 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 8 23:46:06.274282 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 8 23:46:06.274652 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 8 23:46:06.274839 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 8 23:46:06.275495 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 8 23:46:06.275721 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Sep 8 23:46:06.275775 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Sep 8 23:46:06.278007 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Sep 8 23:46:06.278022 systemd-tmpfiles[1286]: Skipping /boot
Sep 8 23:46:06.284196 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Sep 8 23:46:06.284213 systemd-tmpfiles[1286]: Skipping /boot
Sep 8 23:46:06.317385 zram_generator::config[1313]: No configuration found.
Sep 8 23:46:06.447201 systemd[1]: Reloading finished in 175 ms.
Sep 8 23:46:06.472392 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 8 23:46:06.477958 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 8 23:46:06.488352 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 8 23:46:06.490645 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 8 23:46:06.493189 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 8 23:46:06.497558 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 8 23:46:06.500620 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 8 23:46:06.503537 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 8 23:46:06.512025 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 8 23:46:06.513766 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 8 23:46:06.517156 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 8 23:46:06.519190 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 8 23:46:06.520318 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 8 23:46:06.520450 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 8 23:46:06.522477 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 8 23:46:06.526373 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 8 23:46:06.528329 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 8 23:46:06.528501 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 8 23:46:06.532013 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 8 23:46:06.532164 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 8 23:46:06.534273 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 8 23:46:06.534469 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 8 23:46:06.542501 systemd-udevd[1359]: Using default interface naming scheme 'v255'.
Sep 8 23:46:06.543357 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 8 23:46:06.545096 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 8 23:46:06.548273 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 8 23:46:06.551631 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 8 23:46:06.552934 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 8 23:46:06.553055 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 8 23:46:06.554201 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 8 23:46:06.558302 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 8 23:46:06.561221 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 8 23:46:06.567735 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 8 23:46:06.567855 augenrules[1389]: No rules
Sep 8 23:46:06.569308 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 8 23:46:06.570685 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 8 23:46:06.570796 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 8 23:46:06.570904 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 8 23:46:06.571540 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 8 23:46:06.573294 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 8 23:46:06.575981 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 8 23:46:06.576200 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 8 23:46:06.577703 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 8 23:46:06.579416 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 8 23:46:06.581074 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 8 23:46:06.581217 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 8 23:46:06.584208 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 8 23:46:06.597251 systemd[1]: Finished ensure-sysext.service.
Sep 8 23:46:06.606201 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 8 23:46:06.606413 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 8 23:46:06.608138 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 8 23:46:06.608410 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 8 23:46:06.616574 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 8 23:46:06.617691 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 8 23:46:06.617758 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 8 23:46:06.619593 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 8 23:46:06.644923 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 8 23:46:06.696579 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 8 23:46:06.700114 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 8 23:46:06.734211 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 8 23:46:06.737618 systemd-networkd[1434]: lo: Link UP
Sep 8 23:46:06.737625 systemd-networkd[1434]: lo: Gained carrier
Sep 8 23:46:06.738380 systemd-networkd[1434]: Enumeration completed
Sep 8 23:46:06.738485 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 8 23:46:06.738798 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 8 23:46:06.738807 systemd-networkd[1434]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 8 23:46:06.739316 systemd-networkd[1434]: eth0: Link UP
Sep 8 23:46:06.739465 systemd-networkd[1434]: eth0: Gained carrier
Sep 8 23:46:06.739482 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 8 23:46:06.740266 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 8 23:46:06.741874 systemd[1]: Reached target time-set.target - System Time Set.
Sep 8 23:46:06.744344 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 8 23:46:06.745092 systemd-resolved[1353]: Positive Trust Anchors:
Sep 8 23:46:06.745327 systemd-resolved[1353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 8 23:46:06.745382 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 8 23:46:06.746866 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 8 23:46:06.752434 systemd-networkd[1434]: eth0: DHCPv4 address 10.0.0.103/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 8 23:46:06.753302 systemd-resolved[1353]: Defaulting to hostname 'linux'.
Sep 8 23:46:06.753454 systemd-timesyncd[1435]: Network configuration changed, trying to establish connection.
Sep 8 23:46:06.755003 systemd-timesyncd[1435]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 8 23:46:06.755054 systemd-timesyncd[1435]: Initial clock synchronization to Mon 2025-09-08 23:46:06.862795 UTC.
Sep 8 23:46:06.755740 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 8 23:46:06.757669 systemd[1]: Reached target network.target - Network.
Sep 8 23:46:06.758894 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 8 23:46:06.761460 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 8 23:46:06.762717 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 8 23:46:06.764215 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 8 23:46:06.766451 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 8 23:46:06.768987 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 8 23:46:06.770289 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 8 23:46:06.771602 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 8 23:46:06.771634 systemd[1]: Reached target paths.target - Path Units.
Sep 8 23:46:06.772616 systemd[1]: Reached target timers.target - Timer Units.
Sep 8 23:46:06.774389 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 8 23:46:06.776591 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 8 23:46:06.779446 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 8 23:46:06.780587 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 8 23:46:06.782247 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 8 23:46:06.785840 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 8 23:46:06.787663 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 8 23:46:06.792173 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 8 23:46:06.793823 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 8 23:46:06.795653 systemd[1]: Reached target sockets.target - Socket Units.
Sep 8 23:46:06.796653 systemd[1]: Reached target basic.target - Basic System.
Sep 8 23:46:06.797673 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 8 23:46:06.797706 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 8 23:46:06.800474 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 8 23:46:06.804678 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 8 23:46:06.821650 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 8 23:46:06.823871 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 8 23:46:06.826691 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 8 23:46:06.827722 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 8 23:46:06.834571 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 8 23:46:06.837499 jq[1471]: false
Sep 8 23:46:06.838374 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 8 23:46:06.840685 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 8 23:46:06.843491 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 8 23:46:06.846222 extend-filesystems[1472]: Found /dev/vda6
Sep 8 23:46:06.846550 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 8 23:46:06.848273 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 8 23:46:06.849773 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 8 23:46:06.850290 systemd[1]: Starting update-engine.service - Update Engine...
Sep 8 23:46:06.852346 extend-filesystems[1472]: Found /dev/vda9
Sep 8 23:46:06.854572 extend-filesystems[1472]: Checking size of /dev/vda9
Sep 8 23:46:06.855656 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 8 23:46:06.860425 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 8 23:46:06.862805 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 8 23:46:06.864411 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 8 23:46:06.864712 systemd[1]: motdgen.service: Deactivated successfully.
Sep 8 23:46:06.864896 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 8 23:46:06.867329 extend-filesystems[1472]: Resized partition /dev/vda9
Sep 8 23:46:06.869857 extend-filesystems[1499]: resize2fs 1.47.2 (1-Jan-2025)
Sep 8 23:46:06.871664 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 8 23:46:06.871853 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 8 23:46:06.872570 update_engine[1487]: I20250908 23:46:06.872340 1487 main.cc:92] Flatcar Update Engine starting
Sep 8 23:46:06.876611 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 8 23:46:06.886425 jq[1489]: true
Sep 8 23:46:06.891514 (ntainerd)[1500]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 8 23:46:06.903109 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 8 23:46:06.905386 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 8 23:46:06.915581 extend-filesystems[1499]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 8 23:46:06.915581 extend-filesystems[1499]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 8 23:46:06.915581 extend-filesystems[1499]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 8 23:46:06.919107 extend-filesystems[1472]: Resized filesystem in /dev/vda9
Sep 8 23:46:06.918379 dbus-daemon[1469]: [system] SELinux support is enabled
Sep 8 23:46:06.919884 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 8 23:46:06.924978 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 8 23:46:06.925177 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 8 23:46:06.925901 update_engine[1487]: I20250908 23:46:06.925488 1487 update_check_scheduler.cc:74] Next update check in 11m3s
Sep 8 23:46:06.942830 jq[1509]: true
Sep 8 23:46:06.943055 tar[1498]: linux-arm64/LICENSE
Sep 8 23:46:06.943055 tar[1498]: linux-arm64/helm
Sep 8 23:46:06.944104 systemd[1]: Started update-engine.service - Update Engine.
Sep 8 23:46:06.945817 systemd-logind[1482]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 8 23:46:06.946235 systemd-logind[1482]: New seat seat0.
Sep 8 23:46:06.946595 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 8 23:46:06.946630 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 8 23:46:06.948098 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 8 23:46:06.948121 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 8 23:46:06.950710 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 8 23:46:06.953154 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 8 23:46:06.992799 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 8 23:46:07.013325 bash[1537]: Updated "/home/core/.ssh/authorized_keys"
Sep 8 23:46:07.016150 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 8 23:46:07.018262 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 8 23:46:07.026749 locksmithd[1517]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 8 23:46:07.091555 containerd[1500]: time="2025-09-08T23:46:07Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 8 23:46:07.092721 containerd[1500]: time="2025-09-08T23:46:07.092653773Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 8 23:46:07.102131 containerd[1500]: time="2025-09-08T23:46:07.102082487Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.73µs"
Sep 8 23:46:07.102131 containerd[1500]: time="2025-09-08T23:46:07.102126108Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 8 23:46:07.102232 containerd[1500]: time="2025-09-08T23:46:07.102144066Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 8 23:46:07.102312 containerd[1500]: time="2025-09-08T23:46:07.102292603Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 8 23:46:07.102344 containerd[1500]: time="2025-09-08T23:46:07.102317129Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 8 23:46:07.102344 containerd[1500]: time="2025-09-08T23:46:07.102341290Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 8 23:46:07.102425 containerd[1500]: time="2025-09-08T23:46:07.102405950Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 8 23:46:07.102445 containerd[1500]: time="2025-09-08T23:46:07.102423180Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 8 23:46:07.102678 containerd[1500]: time="2025-09-08T23:46:07.102646673Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 8 23:46:07.102678 containerd[1500]: time="2025-09-08T23:46:07.102668605Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 8 23:46:07.102732 containerd[1500]: time="2025-09-08T23:46:07.102682023Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 8 23:46:07.102732 containerd[1500]: time="2025-09-08T23:46:07.102690496Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 8 23:46:07.102786 containerd[1500]: time="2025-09-08T23:46:07.102765169Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 8 23:46:07.102986 containerd[1500]: time="2025-09-08T23:46:07.102966366Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 8 23:46:07.103015 containerd[1500]: time="2025-09-08T23:46:07.103002122Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 8 23:46:07.103044 containerd[1500]: time="2025-09-08T23:46:07.103015256Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 8 23:46:07.103192 containerd[1500]: time="2025-09-08T23:46:07.103171495Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 8 23:46:07.103983 containerd[1500]: time="2025-09-08T23:46:07.103736815Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 8 23:46:07.103983 containerd[1500]: time="2025-09-08T23:46:07.103829326Z" level=info msg="metadata content store policy set" policy=shared
Sep 8 23:46:07.107141 containerd[1500]: time="2025-09-08T23:46:07.107097849Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 8 23:46:07.107208 containerd[1500]: time="2025-09-08T23:46:07.107170090Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 8 23:46:07.107208 containerd[1500]: time="2025-09-08T23:46:07.107185981Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 8 23:46:07.107208 containerd[1500]: time="2025-09-08T23:46:07.107199116Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 8 23:46:07.107279 containerd[1500]: time="2025-09-08T23:46:07.107210872Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 8 23:46:07.107279 containerd[1500]: time="2025-09-08T23:46:07.107222021Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 8 23:46:07.107279 containerd[1500]: time="2025-09-08T23:46:07.107233007Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 8 23:46:07.107279 containerd[1500]: time="2025-09-08T23:46:07.107244398Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 8 23:46:07.107279 containerd[1500]: time="2025-09-08T23:46:07.107255141Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 8 23:46:07.107279 containerd[1500]: time="2025-09-08T23:46:07.107264952Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 8 23:46:07.107279 containerd[1500]: time="2025-09-08T23:46:07.107273911Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 8 23:46:07.107464 containerd[1500]: time="2025-09-08T23:46:07.107286438Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 8 23:46:07.107464 containerd[1500]: time="2025-09-08T23:46:07.107439514Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 8 23:46:07.107464 containerd[1500]: time="2025-09-08T23:46:07.107462257Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 8 23:46:07.107516 containerd[1500]: time="2025-09-08T23:46:07.107476567Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 8 23:46:07.107516 containerd[1500]: time="2025-09-08T23:46:07.107488242Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 8 23:46:07.107516 containerd[1500]: time="2025-09-08T23:46:07.107498296Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 8 23:46:07.107516 containerd[1500]: time="2025-09-08T23:46:07.107513052Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 8 23:46:07.107588 containerd[1500]: time="2025-09-08T23:46:07.107524484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 8 23:46:07.107588 containerd[1500]: time="2025-09-08T23:46:07.107535025Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 8 23:46:07.107588 containerd[1500]: time="2025-09-08T23:46:07.107548362Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 8 23:46:07.107588 containerd[1500]: time="2025-09-08T23:46:07.107559470Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 8 23:46:07.107588 containerd[1500]: time="2025-09-08T23:46:07.107570415Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 8 23:46:07.107773 containerd[1500]: time="2025-09-08T23:46:07.107755883Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 8 23:46:07.107797 containerd[1500]: time="2025-09-08T23:46:07.107776963Z" level=info msg="Start snapshots syncer"
Sep 8 23:46:07.107797 containerd[1500]: time="2025-09-08T23:46:07.107802381Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 8 23:46:07.108051 containerd[1500]: time="2025-09-08T23:46:07.108016267Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 8 23:46:07.108152 containerd[1500]: time="2025-09-08T23:46:07.108064387Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 8 23:46:07.108152 containerd[1500]: time="2025-09-08T23:46:07.108137074Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 8 23:46:07.108300 containerd[1500]: time="2025-09-08T23:46:07.108243206Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 8 23:46:07.108300 containerd[1500]: time="2025-09-08T23:46:07.108283826Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 8 23:46:07.108300 containerd[1500]: time="2025-09-08T23:46:07.108298947Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 8 23:46:07.108381 containerd[1500]: time="2025-09-08T23:46:07.108309650Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 8 23:46:07.108381 containerd[1500]: time="2025-09-08T23:46:07.108322095Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 8 23:46:07.108381 containerd[1500]: time="2025-09-08T23:46:07.108333325Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 8 23:46:07.108381 containerd[1500]: time="2025-09-08T23:46:07.108343784Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 8 23:46:07.108525 containerd[1500]: time="2025-09-08T23:46:07.108367702Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 8 23:46:07.108525 containerd[1500]: time="2025-09-08T23:46:07.108405241Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 8 23:46:07.108525 containerd[1500]: time="2025-09-08T23:46:07.108418295Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 8 23:46:07.108525 containerd[1500]: time="2025-09-08T23:46:07.108453240Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 8 23:46:07.108525 containerd[1500]: time="2025-09-08T23:46:07.108466780Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 8 23:46:07.108525 containerd[1500]: time="2025-09-08T23:46:07.108474645Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 8 23:46:07.108525 containerd[1500]: time="2025-09-08T23:46:07.108484334Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 8 23:46:07.108525 containerd[1500]: time="2025-09-08T23:46:07.108493414Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 8 23:46:07.108525 containerd[1500]: time="2025-09-08T23:46:07.108507036Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 8 23:46:07.108525 containerd[1500]: time="2025-09-08T23:46:07.108518346Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 8 23:46:07.108693 containerd[1500]: time="2025-09-08T23:46:07.108594803Z" level=info msg="runtime interface created"
Sep 8 23:46:07.108693 containerd[1500]: time="2025-09-08T23:46:07.108601857Z" level=info msg="created NRI interface"
Sep 8 23:46:07.108693 containerd[1500]: time="2025-09-08T23:46:07.108609843Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 8 23:46:07.108693 containerd[1500]: time="2025-09-08T23:46:07.108620181Z" level=info msg="Connect containerd service"
Sep 8 23:46:07.108693 containerd[1500]: time="2025-09-08T23:46:07.108647464Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 8 23:46:07.109334 containerd[1500]: time="2025-09-08T23:46:07.109306633Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 8 23:46:07.181161 containerd[1500]: time="2025-09-08T23:46:07.181087767Z" level=info msg="Start subscribing containerd event"
Sep 8 23:46:07.181376 containerd[1500]: time="2025-09-08T23:46:07.181302382Z" level=info msg="Start recovering state"
Sep 8 23:46:07.181376 containerd[1500]: time="2025-09-08T23:46:07.181354191Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 8 23:46:07.181424 containerd[1500]: time="2025-09-08T23:46:07.181416865Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 8 23:46:07.181580 containerd[1500]: time="2025-09-08T23:46:07.181556037Z" level=info msg="Start event monitor"
Sep 8 23:46:07.181638 containerd[1500]: time="2025-09-08T23:46:07.181626737Z" level=info msg="Start cni network conf syncer for default"
Sep 8 23:46:07.181730 containerd[1500]: time="2025-09-08T23:46:07.181717667Z" level=info msg="Start streaming server"
Sep 8 23:46:07.181841 containerd[1500]: time="2025-09-08T23:46:07.181788003Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 8 23:46:07.181841 containerd[1500]: time="2025-09-08T23:46:07.181799962Z" level=info msg="runtime interface starting up..."
Sep 8 23:46:07.181841 containerd[1500]: time="2025-09-08T23:46:07.181806205Z" level=info msg="starting plugins..."
Sep 8 23:46:07.181841 containerd[1500]: time="2025-09-08T23:46:07.181822258Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 8 23:46:07.182189 containerd[1500]: time="2025-09-08T23:46:07.182162099Z" level=info msg="containerd successfully booted in 0.090958s"
Sep 8 23:46:07.182264 systemd[1]: Started containerd.service - containerd container runtime.
Sep 8 23:46:07.247682 tar[1498]: linux-arm64/README.md
Sep 8 23:46:07.266438 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 8 23:46:07.586986 sshd_keygen[1495]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 8 23:46:07.606400 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 8 23:46:07.609124 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 8 23:46:07.632964 systemd[1]: issuegen.service: Deactivated successfully.
Sep 8 23:46:07.633227 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 8 23:46:07.635975 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 8 23:46:07.670691 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 8 23:46:07.674640 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 8 23:46:07.677106 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 8 23:46:07.678632 systemd[1]: Reached target getty.target - Login Prompts.
Sep 8 23:46:08.476751 systemd-networkd[1434]: eth0: Gained IPv6LL
Sep 8 23:46:08.479248 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 8 23:46:08.481378 systemd[1]: Reached target network-online.target - Network is Online.
Sep 8 23:46:08.483958 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 8 23:46:08.486668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 8 23:46:08.509989 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 8 23:46:08.525621 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 8 23:46:08.526630 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 8 23:46:08.528813 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 8 23:46:08.531201 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 8 23:46:09.084418 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 8 23:46:09.086699 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 8 23:46:09.088723 (kubelet)[1607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 8 23:46:09.088954 systemd[1]: Startup finished in 2.014s (kernel) + 6.561s (initrd) + 3.970s (userspace) = 12.547s.
Sep 8 23:46:09.468748 kubelet[1607]: E0908 23:46:09.468627 1607 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 8 23:46:09.470874 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 8 23:46:09.471009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 8 23:46:09.471415 systemd[1]: kubelet.service: Consumed 751ms CPU time, 257.5M memory peak.
Sep 8 23:46:12.028306 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 8 23:46:12.029547 systemd[1]: Started sshd@0-10.0.0.103:22-10.0.0.1:44142.service - OpenSSH per-connection server daemon (10.0.0.1:44142).
Sep 8 23:46:12.110646 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 44142 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic
Sep 8 23:46:12.112988 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:46:12.119221 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 8 23:46:12.120136 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 8 23:46:12.125948 systemd-logind[1482]: New session 1 of user core.
Sep 8 23:46:12.145525 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 8 23:46:12.147770 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 8 23:46:12.170534 (systemd)[1625]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 8 23:46:12.172705 systemd-logind[1482]: New session c1 of user core.
Sep 8 23:46:12.282955 systemd[1625]: Queued start job for default target default.target.
Sep 8 23:46:12.298461 systemd[1625]: Created slice app.slice - User Application Slice.
Sep 8 23:46:12.298488 systemd[1625]: Reached target paths.target - Paths.
Sep 8 23:46:12.298532 systemd[1625]: Reached target timers.target - Timers.
Sep 8 23:46:12.299797 systemd[1625]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 8 23:46:12.309960 systemd[1625]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 8 23:46:12.310026 systemd[1625]: Reached target sockets.target - Sockets.
Sep 8 23:46:12.310064 systemd[1625]: Reached target basic.target - Basic System.
Sep 8 23:46:12.310090 systemd[1625]: Reached target default.target - Main User Target.
Sep 8 23:46:12.310116 systemd[1625]: Startup finished in 131ms.
Sep 8 23:46:12.310260 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 8 23:46:12.311827 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 8 23:46:12.377730 systemd[1]: Started sshd@1-10.0.0.103:22-10.0.0.1:44156.service - OpenSSH per-connection server daemon (10.0.0.1:44156).
Sep 8 23:46:12.457198 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 44156 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic
Sep 8 23:46:12.458602 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:46:12.463236 systemd-logind[1482]: New session 2 of user core.
Sep 8 23:46:12.472584 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 8 23:46:12.526774 sshd[1639]: Connection closed by 10.0.0.1 port 44156 Sep 8 23:46:12.527125 sshd-session[1636]: pam_unix(sshd:session): session closed for user core Sep 8 23:46:12.551092 systemd[1]: sshd@1-10.0.0.103:22-10.0.0.1:44156.service: Deactivated successfully. Sep 8 23:46:12.554602 systemd[1]: session-2.scope: Deactivated successfully. Sep 8 23:46:12.557293 systemd-logind[1482]: Session 2 logged out. Waiting for processes to exit. Sep 8 23:46:12.560048 systemd[1]: Started sshd@2-10.0.0.103:22-10.0.0.1:44158.service - OpenSSH per-connection server daemon (10.0.0.1:44158). Sep 8 23:46:12.561730 systemd-logind[1482]: Removed session 2. Sep 8 23:46:12.618580 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 44158 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:46:12.622396 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:46:12.627734 systemd-logind[1482]: New session 3 of user core. Sep 8 23:46:12.648557 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 8 23:46:12.697252 sshd[1648]: Connection closed by 10.0.0.1 port 44158 Sep 8 23:46:12.697174 sshd-session[1645]: pam_unix(sshd:session): session closed for user core Sep 8 23:46:12.708614 systemd[1]: sshd@2-10.0.0.103:22-10.0.0.1:44158.service: Deactivated successfully. Sep 8 23:46:12.710521 systemd[1]: session-3.scope: Deactivated successfully. Sep 8 23:46:12.713116 systemd-logind[1482]: Session 3 logged out. Waiting for processes to exit. Sep 8 23:46:12.715761 systemd[1]: Started sshd@3-10.0.0.103:22-10.0.0.1:44168.service - OpenSSH per-connection server daemon (10.0.0.1:44168). Sep 8 23:46:12.716897 systemd-logind[1482]: Removed session 3. 
Sep 8 23:46:12.786999 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 44168 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:46:12.790772 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:46:12.794767 systemd-logind[1482]: New session 4 of user core. Sep 8 23:46:12.807553 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 8 23:46:12.861808 sshd[1658]: Connection closed by 10.0.0.1 port 44168 Sep 8 23:46:12.862306 sshd-session[1654]: pam_unix(sshd:session): session closed for user core Sep 8 23:46:12.874481 systemd[1]: sshd@3-10.0.0.103:22-10.0.0.1:44168.service: Deactivated successfully. Sep 8 23:46:12.877457 systemd[1]: session-4.scope: Deactivated successfully. Sep 8 23:46:12.879422 systemd-logind[1482]: Session 4 logged out. Waiting for processes to exit. Sep 8 23:46:12.881651 systemd[1]: Started sshd@4-10.0.0.103:22-10.0.0.1:44184.service - OpenSSH per-connection server daemon (10.0.0.1:44184). Sep 8 23:46:12.882446 systemd-logind[1482]: Removed session 4. Sep 8 23:46:12.938813 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 44184 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:46:12.940090 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:46:12.943742 systemd-logind[1482]: New session 5 of user core. Sep 8 23:46:12.953530 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 8 23:46:13.011452 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 8 23:46:13.011731 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:46:13.032275 sudo[1668]: pam_unix(sudo:session): session closed for user root Sep 8 23:46:13.033892 sshd[1667]: Connection closed by 10.0.0.1 port 44184 Sep 8 23:46:13.034449 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Sep 8 23:46:13.058332 systemd[1]: sshd@4-10.0.0.103:22-10.0.0.1:44184.service: Deactivated successfully. Sep 8 23:46:13.060637 systemd[1]: session-5.scope: Deactivated successfully. Sep 8 23:46:13.065284 systemd-logind[1482]: Session 5 logged out. Waiting for processes to exit. Sep 8 23:46:13.068519 systemd[1]: Started sshd@5-10.0.0.103:22-10.0.0.1:44194.service - OpenSSH per-connection server daemon (10.0.0.1:44194). Sep 8 23:46:13.070099 systemd-logind[1482]: Removed session 5. Sep 8 23:46:13.138430 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 44194 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:46:13.139867 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:46:13.146641 systemd-logind[1482]: New session 6 of user core. Sep 8 23:46:13.158899 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 8 23:46:13.211852 sudo[1679]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 8 23:46:13.212131 sudo[1679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:46:13.256974 sudo[1679]: pam_unix(sudo:session): session closed for user root Sep 8 23:46:13.262080 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 8 23:46:13.262418 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:46:13.278034 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 8 23:46:13.311884 augenrules[1701]: No rules Sep 8 23:46:13.313610 systemd[1]: audit-rules.service: Deactivated successfully. Sep 8 23:46:13.313844 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 8 23:46:13.316032 sudo[1678]: pam_unix(sudo:session): session closed for user root Sep 8 23:46:13.318390 sshd[1677]: Connection closed by 10.0.0.1 port 44194 Sep 8 23:46:13.317842 sshd-session[1674]: pam_unix(sshd:session): session closed for user core Sep 8 23:46:13.335170 systemd[1]: sshd@5-10.0.0.103:22-10.0.0.1:44194.service: Deactivated successfully. Sep 8 23:46:13.337891 systemd[1]: session-6.scope: Deactivated successfully. Sep 8 23:46:13.339712 systemd-logind[1482]: Session 6 logged out. Waiting for processes to exit. Sep 8 23:46:13.341589 systemd[1]: Started sshd@6-10.0.0.103:22-10.0.0.1:44208.service - OpenSSH per-connection server daemon (10.0.0.1:44208). Sep 8 23:46:13.346278 systemd-logind[1482]: Removed session 6. Sep 8 23:46:13.401395 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 44208 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:46:13.403333 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:46:13.408025 systemd-logind[1482]: New session 7 of user core. 
Sep 8 23:46:13.418551 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 8 23:46:13.471603 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 8 23:46:13.471867 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:46:13.768162 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 8 23:46:13.786763 (dockerd)[1736]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 8 23:46:13.993160 dockerd[1736]: time="2025-09-08T23:46:13.993091845Z" level=info msg="Starting up" Sep 8 23:46:13.994100 dockerd[1736]: time="2025-09-08T23:46:13.994077453Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 8 23:46:14.004491 dockerd[1736]: time="2025-09-08T23:46:14.004448713Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 8 23:46:14.105855 dockerd[1736]: time="2025-09-08T23:46:14.105733393Z" level=info msg="Loading containers: start." Sep 8 23:46:14.114395 kernel: Initializing XFRM netlink socket Sep 8 23:46:14.328021 systemd-networkd[1434]: docker0: Link UP Sep 8 23:46:14.331674 dockerd[1736]: time="2025-09-08T23:46:14.331558035Z" level=info msg="Loading containers: done." 
Sep 8 23:46:14.346683 dockerd[1736]: time="2025-09-08T23:46:14.346625913Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 8 23:46:14.346838 dockerd[1736]: time="2025-09-08T23:46:14.346722180Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 8 23:46:14.346838 dockerd[1736]: time="2025-09-08T23:46:14.346808636Z" level=info msg="Initializing buildkit" Sep 8 23:46:14.368477 dockerd[1736]: time="2025-09-08T23:46:14.368356282Z" level=info msg="Completed buildkit initialization" Sep 8 23:46:14.377885 dockerd[1736]: time="2025-09-08T23:46:14.377816225Z" level=info msg="Daemon has completed initialization" Sep 8 23:46:14.378098 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 8 23:46:14.378463 dockerd[1736]: time="2025-09-08T23:46:14.377879196Z" level=info msg="API listen on /run/docker.sock" Sep 8 23:46:14.880607 containerd[1500]: time="2025-09-08T23:46:14.880571509Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 8 23:46:15.470035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3184646058.mount: Deactivated successfully. 
Sep 8 23:46:16.324885 containerd[1500]: time="2025-09-08T23:46:16.324804177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:16.325703 containerd[1500]: time="2025-09-08T23:46:16.325643285Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328359"
Sep 8 23:46:16.327116 containerd[1500]: time="2025-09-08T23:46:16.327077354Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:16.330212 containerd[1500]: time="2025-09-08T23:46:16.330174298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:16.331791 containerd[1500]: time="2025-09-08T23:46:16.331644072Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 1.451031638s"
Sep 8 23:46:16.331791 containerd[1500]: time="2025-09-08T23:46:16.331679937Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\""
Sep 8 23:46:16.332763 containerd[1500]: time="2025-09-08T23:46:16.332729173Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\""
Sep 8 23:46:17.399068 containerd[1500]: time="2025-09-08T23:46:17.399000909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:17.400281 containerd[1500]: time="2025-09-08T23:46:17.400095137Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528554"
Sep 8 23:46:17.400901 containerd[1500]: time="2025-09-08T23:46:17.400848319Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:17.403729 containerd[1500]: time="2025-09-08T23:46:17.403689200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:17.404682 containerd[1500]: time="2025-09-08T23:46:17.404646785Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 1.071879382s"
Sep 8 23:46:17.404726 containerd[1500]: time="2025-09-08T23:46:17.404680865Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\""
Sep 8 23:46:17.405079 containerd[1500]: time="2025-09-08T23:46:17.405049287Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\""
Sep 8 23:46:18.378797 containerd[1500]: time="2025-09-08T23:46:18.378743282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:18.379499 containerd[1500]: time="2025-09-08T23:46:18.379470331Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483529"
Sep 8 23:46:18.379937 containerd[1500]: time="2025-09-08T23:46:18.379913543Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:18.382623 containerd[1500]: time="2025-09-08T23:46:18.382594197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:18.383962 containerd[1500]: time="2025-09-08T23:46:18.383937513Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 978.852383ms"
Sep 8 23:46:18.384002 containerd[1500]: time="2025-09-08T23:46:18.383981168Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\""
Sep 8 23:46:18.384433 containerd[1500]: time="2025-09-08T23:46:18.384396493Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\""
Sep 8 23:46:19.307346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount434323984.mount: Deactivated successfully.
Sep 8 23:46:19.663220 containerd[1500]: time="2025-09-08T23:46:19.663087588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:19.664151 containerd[1500]: time="2025-09-08T23:46:19.664115813Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376726" Sep 8 23:46:19.664984 containerd[1500]: time="2025-09-08T23:46:19.664951917Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:19.666677 containerd[1500]: time="2025-09-08T23:46:19.666640210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:19.667459 containerd[1500]: time="2025-09-08T23:46:19.667427622Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 1.282991128s" Sep 8 23:46:19.667501 containerd[1500]: time="2025-09-08T23:46:19.667462958Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\"" Sep 8 23:46:19.668124 containerd[1500]: time="2025-09-08T23:46:19.668041966Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 8 23:46:19.721485 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 8 23:46:19.722944 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
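The "Scheduled restart job, restart counter is at 1" line above reflects the unit's Restart= policy: the kubelet is meant to crash-loop until its config file appears. The unit file itself is not captured in this log, so the drop-in below is an illustrative assumption of what a typical kubeadm-style kubelet unit carries, not this host's actual configuration:

```ini
# Illustrative kubelet.service drop-in (contents assumed, not read from this host).
[Service]
Restart=always
RestartSec=10
# The "-" prefix tolerates a missing file, consistent with the
# "Referenced but unset environment variable" warnings seen in this log.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
```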
Sep 8 23:46:19.876493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:46:19.881879 (kubelet)[2034]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:46:19.922026 kubelet[2034]: E0908 23:46:19.921900 2034 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:46:19.925605 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:46:19.925753 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:46:19.927317 systemd[1]: kubelet.service: Consumed 154ms CPU time, 108M memory peak. Sep 8 23:46:20.232951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1011040170.mount: Deactivated successfully. 
Sep 8 23:46:21.010591 containerd[1500]: time="2025-09-08T23:46:21.009728471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:21.010591 containerd[1500]: time="2025-09-08T23:46:21.010121045Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Sep 8 23:46:21.011583 containerd[1500]: time="2025-09-08T23:46:21.011552015Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:21.014866 containerd[1500]: time="2025-09-08T23:46:21.014828292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:21.016216 containerd[1500]: time="2025-09-08T23:46:21.016182903Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.348109096s" Sep 8 23:46:21.016315 containerd[1500]: time="2025-09-08T23:46:21.016299305Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 8 23:46:21.016988 containerd[1500]: time="2025-09-08T23:46:21.016938711Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 8 23:46:21.495325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4060252439.mount: Deactivated successfully. 
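Each completed pull above reports both "bytes read" and a wall-clock duration, so a rough per-image throughput can be read straight off the log. A small sketch of that arithmetic — the numbers are copied from the entries above, and treating "bytes read" as the transferred size is an approximation, since containerd reports compressed layer reads:

```python
# Rough pull-throughput estimate from the containerd log entries above:
# (bytes read, duration in seconds) per image.
pulls = {
    "kube-apiserver:v1.32.8": (26328359, 1.451031638),
    "kube-proxy:v1.32.8": (27376726, 1.282991128),
    "coredns:v1.11.3": (16951624, 1.348109096),
}

for image, (nbytes, secs) in pulls.items():
    mib_per_s = nbytes / secs / (1024 * 1024)  # bytes/s -> MiB/s
    print(f"{image}: {mib_per_s:.1f} MiB/s")
```

This puts the registry link in the 12-20 MiB/s range for these pulls; the sub-second pause-image pull later in the log is too small to measure meaningfully this way.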
Sep 8 23:46:21.503218 containerd[1500]: time="2025-09-08T23:46:21.503155666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 8 23:46:21.504409 containerd[1500]: time="2025-09-08T23:46:21.504375558Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 8 23:46:21.505762 containerd[1500]: time="2025-09-08T23:46:21.505395995Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 8 23:46:21.509021 containerd[1500]: time="2025-09-08T23:46:21.508981033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 8 23:46:21.509816 containerd[1500]: time="2025-09-08T23:46:21.509595508Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 492.194678ms"
Sep 8 23:46:21.509816 containerd[1500]: time="2025-09-08T23:46:21.509632305Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 8 23:46:21.510101 containerd[1500]: time="2025-09-08T23:46:21.510080314Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 8 23:46:22.014177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4084636558.mount: Deactivated successfully.
Sep 8 23:46:23.362828 containerd[1500]: time="2025-09-08T23:46:23.362756336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:23.363432 containerd[1500]: time="2025-09-08T23:46:23.363383052Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167"
Sep 8 23:46:23.364408 containerd[1500]: time="2025-09-08T23:46:23.364343058Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:23.367009 containerd[1500]: time="2025-09-08T23:46:23.366955530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:23.368245 containerd[1500]: time="2025-09-08T23:46:23.368209082Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 1.858098551s"
Sep 8 23:46:23.368327 containerd[1500]: time="2025-09-08T23:46:23.368249907Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Sep 8 23:46:27.933479 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 8 23:46:27.933615 systemd[1]: kubelet.service: Consumed 154ms CPU time, 108M memory peak.
Sep 8 23:46:27.935435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 8 23:46:27.958012 systemd[1]: Reload requested from client PID 2183 ('systemctl') (unit session-7.scope)...
Sep 8 23:46:27.958029 systemd[1]: Reloading... Sep 8 23:46:28.025390 zram_generator::config[2225]: No configuration found. Sep 8 23:46:28.198923 systemd[1]: Reloading finished in 240 ms. Sep 8 23:46:28.266005 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 8 23:46:28.266091 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 8 23:46:28.266372 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:46:28.266434 systemd[1]: kubelet.service: Consumed 91ms CPU time, 95M memory peak. Sep 8 23:46:28.268006 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:46:28.405478 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:46:28.410113 (kubelet)[2270]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 8 23:46:28.553347 kubelet[2270]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 8 23:46:28.553347 kubelet[2270]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 8 23:46:28.553347 kubelet[2270]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
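The three deprecation warnings above all point at the same migration: these flags are meant to move into the kubelet's config file. A sketch of the config-file equivalents — the endpoint and directory values are illustrative assumptions, not read from this host:

```yaml
# Config-file equivalents of the deprecated kubelet flags; values illustrative.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock       # replaces --container-runtime-endpoint
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/  # replaces --volume-plugin-dir
# --pod-infra-container-image has no config-file equivalent: per the warning it
# is removed in 1.35 and the sandbox image is taken from the CRI runtime instead.
```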
Sep 8 23:46:28.554523 kubelet[2270]: I0908 23:46:28.553730 2270 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 8 23:46:28.963625 kubelet[2270]: I0908 23:46:28.963514 2270 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 8 23:46:28.963625 kubelet[2270]: I0908 23:46:28.963546 2270 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 8 23:46:28.964455 kubelet[2270]: I0908 23:46:28.964202 2270 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 8 23:46:28.989015 kubelet[2270]: E0908 23:46:28.988975 2270 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError"
Sep 8 23:46:28.990461 kubelet[2270]: I0908 23:46:28.990441 2270 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 8 23:46:28.995504 kubelet[2270]: I0908 23:46:28.995480 2270 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 8 23:46:28.998641 kubelet[2270]: I0908 23:46:28.998616 2270 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 8 23:46:28.999809 kubelet[2270]: I0908 23:46:28.999754 2270 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 8 23:46:28.999964 kubelet[2270]: I0908 23:46:28.999795 2270 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 8 23:46:29.000061 kubelet[2270]: I0908 23:46:29.000026 2270 topology_manager.go:138] "Creating topology manager with none policy"
Sep 8 23:46:29.000061 kubelet[2270]: I0908 23:46:29.000036 2270 container_manager_linux.go:304] "Creating device plugin manager"
Sep 8 23:46:29.000253 kubelet[2270]: I0908 23:46:29.000226 2270 state_mem.go:36] "Initialized new in-memory state store"
Sep 8 23:46:29.002544 kubelet[2270]: I0908 23:46:29.002521 2270 kubelet.go:446] "Attempting to sync node with API server"
Sep 8 23:46:29.002577 kubelet[2270]: I0908 23:46:29.002546 2270 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 8 23:46:29.002577 kubelet[2270]: I0908 23:46:29.002570 2270 kubelet.go:352] "Adding apiserver pod source"
Sep 8 23:46:29.002620 kubelet[2270]: I0908 23:46:29.002580 2270 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 8 23:46:29.004002 kubelet[2270]: W0908 23:46:29.003906 2270 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused
Sep 8 23:46:29.004002 kubelet[2270]: E0908 23:46:29.003966 2270 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError"
Sep 8 23:46:29.005194 kubelet[2270]: W0908 23:46:29.005161 2270 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused
Sep 8 23:46:29.005295 kubelet[2270]: E0908 23:46:29.005279 2270 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError"
Sep 8 23:46:29.005468 kubelet[2270]: I0908 23:46:29.005447 2270 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 8 23:46:29.006077 kubelet[2270]: I0908 23:46:29.006047 2270 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 8 23:46:29.006190 kubelet[2270]: W0908 23:46:29.006170 2270 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 8 23:46:29.006993 kubelet[2270]: I0908 23:46:29.006979 2270 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 8 23:46:29.007028 kubelet[2270]: I0908 23:46:29.007015 2270 server.go:1287] "Started kubelet"
Sep 8 23:46:29.011527 kubelet[2270]: I0908 23:46:29.011496 2270 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 8 23:46:29.012383 kubelet[2270]: I0908 23:46:29.011974 2270 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 8 23:46:29.012383 kubelet[2270]: I0908 23:46:29.012232 2270 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 8 23:46:29.012383 kubelet[2270]: I0908 23:46:29.012299 2270 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 8 23:46:29.013713 kubelet[2270]: I0908 23:46:29.013684 2270 server.go:479] "Adding debug handlers to kubelet server"
Sep 8 23:46:29.015956 kubelet[2270]: E0908 23:46:29.015705 2270 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.103:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.103:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863736c65d8c792 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-08 23:46:29.006993298 +0000 UTC m=+0.593755144,LastTimestamp:2025-09-08 23:46:29.006993298 +0000 UTC m=+0.593755144,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 8 23:46:29.016203 kubelet[2270]: I0908 23:46:29.016187 2270 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 8 23:46:29.016479 kubelet[2270]: I0908 23:46:29.016454 2270 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 8 23:46:29.017083 kubelet[2270]: E0908 23:46:29.017059 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 8 23:46:29.017704 kubelet[2270]: I0908 23:46:29.017681 2270 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 8 23:46:29.017875 kubelet[2270]: I0908 23:46:29.017842 2270 reconciler.go:26] "Reconciler: start to sync state"
Sep 8 23:46:29.020027 kubelet[2270]: I0908 23:46:29.019996 2270 factory.go:221] Registration of the systemd container factory successfully
Sep 8 23:46:29.020027 kubelet[2270]: W0908 23:46:29.020004 2270 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused
Sep 8 23:46:29.020116 kubelet[2270]: E0908 23:46:29.020050 2270 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError"
Sep 8 23:46:29.020116 kubelet[2270]: I0908 23:46:29.020080 2270 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 8 23:46:29.020280 kubelet[2270]: E0908 23:46:29.019994 2270 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="200ms"
Sep 8 23:46:29.021427 kubelet[2270]: I0908 23:46:29.020899 2270 factory.go:221] Registration of the containerd container factory successfully
Sep 8 23:46:29.021555 kubelet[2270]: E0908 23:46:29.021536 2270 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 8 23:46:29.029585 kubelet[2270]: I0908 23:46:29.029534 2270 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 8 23:46:29.030807 kubelet[2270]: I0908 23:46:29.030776 2270 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 8 23:46:29.030807 kubelet[2270]: I0908 23:46:29.030803 2270 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 8 23:46:29.030890 kubelet[2270]: I0908 23:46:29.030827 2270 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 8 23:46:29.030890 kubelet[2270]: I0908 23:46:29.030834 2270 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 8 23:46:29.030890 kubelet[2270]: E0908 23:46:29.030874 2270 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 8 23:46:29.034141 kubelet[2270]: W0908 23:46:29.034037 2270 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused
Sep 8 23:46:29.034141 kubelet[2270]: E0908 23:46:29.034091 2270 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError"
Sep 8 23:46:29.034428 kubelet[2270]: I0908 23:46:29.034409 2270 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 8 23:46:29.034428 kubelet[2270]: I0908 23:46:29.034427 2270 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 8 23:46:29.034496 kubelet[2270]: I0908 23:46:29.034444 2270 state_mem.go:36] "Initialized new in-memory state store"
Sep 8 23:46:29.048371 kubelet[2270]: I0908 23:46:29.048336 2270 policy_none.go:49] "None policy: Start"
Sep 8 23:46:29.048371 kubelet[2270]: I0908 23:46:29.048377 2270 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 8 23:46:29.048481 kubelet[2270]: I0908 23:46:29.048390 2270 state_mem.go:35] "Initializing new in-memory state store"
Sep 8 23:46:29.053908 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 8 23:46:29.067605 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 8 23:46:29.071086 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 8 23:46:29.081588 kubelet[2270]: I0908 23:46:29.081415 2270 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 8 23:46:29.081665 kubelet[2270]: I0908 23:46:29.081624 2270 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 8 23:46:29.081790 kubelet[2270]: I0908 23:46:29.081758 2270 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 8 23:46:29.082021 kubelet[2270]: I0908 23:46:29.082004 2270 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 8 23:46:29.083079 kubelet[2270]: E0908 23:46:29.083059 2270 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 8 23:46:29.083267 kubelet[2270]: E0908 23:46:29.083221 2270 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 8 23:46:29.139073 systemd[1]: Created slice kubepods-burstable-poda1cc99704c4fbe4ce1f9fdba6e4da442.slice - libcontainer container kubepods-burstable-poda1cc99704c4fbe4ce1f9fdba6e4da442.slice.
Sep 8 23:46:29.146102 kubelet[2270]: E0908 23:46:29.146073 2270 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 8 23:46:29.149556 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice.
Sep 8 23:46:29.166699 kubelet[2270]: E0908 23:46:29.166659 2270 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 8 23:46:29.169839 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice.
Sep 8 23:46:29.171460 kubelet[2270]: E0908 23:46:29.171437 2270 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 8 23:46:29.183472 kubelet[2270]: I0908 23:46:29.183417 2270 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 8 23:46:29.183988 kubelet[2270]: E0908 23:46:29.183957 2270 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost"
Sep 8 23:46:29.221751 kubelet[2270]: E0908 23:46:29.221639 2270 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="400ms"
Sep 8 23:46:29.319298 kubelet[2270]: I0908 23:46:29.319262 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a1cc99704c4fbe4ce1f9fdba6e4da442-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a1cc99704c4fbe4ce1f9fdba6e4da442\") " pod="kube-system/kube-apiserver-localhost"
Sep 8 23:46:29.319298 kubelet[2270]: I0908 23:46:29.319302 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:46:29.319616 kubelet[2270]: I0908 23:46:29.319325 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:46:29.319616 kubelet[2270]: I0908 23:46:29.319343 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost"
Sep 8 23:46:29.319616 kubelet[2270]: I0908 23:46:29.319387 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a1cc99704c4fbe4ce1f9fdba6e4da442-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a1cc99704c4fbe4ce1f9fdba6e4da442\") " pod="kube-system/kube-apiserver-localhost"
Sep 8 23:46:29.319616 kubelet[2270]: I0908 23:46:29.319406 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a1cc99704c4fbe4ce1f9fdba6e4da442-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a1cc99704c4fbe4ce1f9fdba6e4da442\") " pod="kube-system/kube-apiserver-localhost"
Sep 8 23:46:29.319616 kubelet[2270]: I0908 23:46:29.319420 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:46:29.319716 kubelet[2270]: I0908 23:46:29.319434 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:46:29.319716 kubelet[2270]: I0908 23:46:29.319477 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:46:29.385533 kubelet[2270]: I0908 23:46:29.385501 2270 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 8 23:46:29.385851 kubelet[2270]: E0908 23:46:29.385829 2270 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost"
Sep 8 23:46:29.447662 kubelet[2270]: E0908 23:46:29.447613 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:46:29.448254 containerd[1500]: time="2025-09-08T23:46:29.448219376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a1cc99704c4fbe4ce1f9fdba6e4da442,Namespace:kube-system,Attempt:0,}"
Sep 8 23:46:29.467518 kubelet[2270]: E0908 23:46:29.467491 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:46:29.467896 containerd[1500]: time="2025-09-08T23:46:29.467866282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}"
Sep 8 23:46:29.472294 kubelet[2270]: E0908 23:46:29.472124 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:46:29.472486 containerd[1500]: time="2025-09-08T23:46:29.472431861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}"
Sep 8 23:46:29.534334 containerd[1500]: time="2025-09-08T23:46:29.534267327Z" level=info msg="connecting to shim cb8680435961e55e5285f869d2bfece556b096c668efbef0593538de759a3639" address="unix:///run/containerd/s/31da17d2828a473f89d10497e434f7940ec2f03651576b434b4d1d0422352b1e" namespace=k8s.io protocol=ttrpc version=3
Sep 8 23:46:29.536343 containerd[1500]: time="2025-09-08T23:46:29.536289851Z" level=info msg="connecting to shim 042451c3eace7982166fed042c6e86afdcc1508249604278f4065902cf35e46d" address="unix:///run/containerd/s/b63097412d828c96a81c3a6cea04993e4f6e735a9dfa32e2837212be2d804499" namespace=k8s.io protocol=ttrpc version=3
Sep 8 23:46:29.545385 containerd[1500]: time="2025-09-08T23:46:29.544930019Z" level=info msg="connecting to shim dec5f6b217ff91e8558f758f3b7640c64b928a969c73c056bb72df52a8aa54ca" address="unix:///run/containerd/s/398845d04b1c108bbc6c8db0dd5ccf16b8e6ea26da49523c2afe829b3b87aa28" namespace=k8s.io protocol=ttrpc version=3
Sep 8 23:46:29.559575 systemd[1]: Started cri-containerd-042451c3eace7982166fed042c6e86afdcc1508249604278f4065902cf35e46d.scope - libcontainer container 042451c3eace7982166fed042c6e86afdcc1508249604278f4065902cf35e46d.
Sep 8 23:46:29.562398 systemd[1]: Started cri-containerd-cb8680435961e55e5285f869d2bfece556b096c668efbef0593538de759a3639.scope - libcontainer container cb8680435961e55e5285f869d2bfece556b096c668efbef0593538de759a3639.
Sep 8 23:46:29.578565 systemd[1]: Started cri-containerd-dec5f6b217ff91e8558f758f3b7640c64b928a969c73c056bb72df52a8aa54ca.scope - libcontainer container dec5f6b217ff91e8558f758f3b7640c64b928a969c73c056bb72df52a8aa54ca.
Sep 8 23:46:29.609599 containerd[1500]: time="2025-09-08T23:46:29.609506561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a1cc99704c4fbe4ce1f9fdba6e4da442,Namespace:kube-system,Attempt:0,} returns sandbox id \"042451c3eace7982166fed042c6e86afdcc1508249604278f4065902cf35e46d\""
Sep 8 23:46:29.613325 kubelet[2270]: E0908 23:46:29.613193 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:46:29.613885 containerd[1500]: time="2025-09-08T23:46:29.613611292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb8680435961e55e5285f869d2bfece556b096c668efbef0593538de759a3639\""
Sep 8 23:46:29.614946 kubelet[2270]: E0908 23:46:29.614753 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:46:29.617790 containerd[1500]: time="2025-09-08T23:46:29.617736437Z" level=info msg="CreateContainer within sandbox \"042451c3eace7982166fed042c6e86afdcc1508249604278f4065902cf35e46d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 8 23:46:29.618053 containerd[1500]: time="2025-09-08T23:46:29.617758773Z" level=info msg="CreateContainer within sandbox \"cb8680435961e55e5285f869d2bfece556b096c668efbef0593538de759a3639\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 8 23:46:29.622287 kubelet[2270]: E0908 23:46:29.622253 2270 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="800ms"
Sep 8 23:46:29.625327 containerd[1500]: time="2025-09-08T23:46:29.625296954Z" level=info msg="Container 6b0a3e450167112696c59f32ee1266e7efb852c2400d846886173f2fc0340ee9: CDI devices from CRI Config.CDIDevices: []"
Sep 8 23:46:29.627289 containerd[1500]: time="2025-09-08T23:46:29.627248347Z" level=info msg="Container aa8e464d2656410bc55c5072878a90e766384afb60cfd3a721929b8d44e1cda8: CDI devices from CRI Config.CDIDevices: []"
Sep 8 23:46:29.627701 containerd[1500]: time="2025-09-08T23:46:29.627613768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"dec5f6b217ff91e8558f758f3b7640c64b928a969c73c056bb72df52a8aa54ca\""
Sep 8 23:46:29.628481 kubelet[2270]: E0908 23:46:29.628353 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:46:29.630171 containerd[1500]: time="2025-09-08T23:46:29.630138531Z" level=info msg="CreateContainer within sandbox \"dec5f6b217ff91e8558f758f3b7640c64b928a969c73c056bb72df52a8aa54ca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 8 23:46:29.632860 containerd[1500]: time="2025-09-08T23:46:29.632817363Z" level=info msg="CreateContainer within sandbox \"cb8680435961e55e5285f869d2bfece556b096c668efbef0593538de759a3639\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6b0a3e450167112696c59f32ee1266e7efb852c2400d846886173f2fc0340ee9\""
Sep 8 23:46:29.633548 containerd[1500]: time="2025-09-08T23:46:29.633519704Z" level=info msg="StartContainer for \"6b0a3e450167112696c59f32ee1266e7efb852c2400d846886173f2fc0340ee9\""
Sep 8 23:46:29.634817 containerd[1500]: time="2025-09-08T23:46:29.634788811Z" level=info msg="connecting to shim 6b0a3e450167112696c59f32ee1266e7efb852c2400d846886173f2fc0340ee9" address="unix:///run/containerd/s/31da17d2828a473f89d10497e434f7940ec2f03651576b434b4d1d0422352b1e" protocol=ttrpc version=3
Sep 8 23:46:29.637160 containerd[1500]: time="2025-09-08T23:46:29.637118834Z" level=info msg="CreateContainer within sandbox \"042451c3eace7982166fed042c6e86afdcc1508249604278f4065902cf35e46d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"aa8e464d2656410bc55c5072878a90e766384afb60cfd3a721929b8d44e1cda8\""
Sep 8 23:46:29.638665 containerd[1500]: time="2025-09-08T23:46:29.638637638Z" level=info msg="StartContainer for \"aa8e464d2656410bc55c5072878a90e766384afb60cfd3a721929b8d44e1cda8\""
Sep 8 23:46:29.639946 containerd[1500]: time="2025-09-08T23:46:29.639630787Z" level=info msg="connecting to shim aa8e464d2656410bc55c5072878a90e766384afb60cfd3a721929b8d44e1cda8" address="unix:///run/containerd/s/b63097412d828c96a81c3a6cea04993e4f6e735a9dfa32e2837212be2d804499" protocol=ttrpc version=3
Sep 8 23:46:29.640565 containerd[1500]: time="2025-09-08T23:46:29.640529949Z" level=info msg="Container 4cde15673ecead69ed73d94e588a536aefbd31bd4bd5c397ad6e7e413cf5d6a8: CDI devices from CRI Config.CDIDevices: []"
Sep 8 23:46:29.648038 containerd[1500]: time="2025-09-08T23:46:29.647985031Z" level=info msg="CreateContainer within sandbox \"dec5f6b217ff91e8558f758f3b7640c64b928a969c73c056bb72df52a8aa54ca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4cde15673ecead69ed73d94e588a536aefbd31bd4bd5c397ad6e7e413cf5d6a8\""
Sep 8 23:46:29.648763 containerd[1500]: time="2025-09-08T23:46:29.648736088Z" level=info msg="StartContainer for \"4cde15673ecead69ed73d94e588a536aefbd31bd4bd5c397ad6e7e413cf5d6a8\""
Sep 8 23:46:29.650243 containerd[1500]: time="2025-09-08T23:46:29.650149096Z" level=info msg="connecting to shim 4cde15673ecead69ed73d94e588a536aefbd31bd4bd5c397ad6e7e413cf5d6a8" address="unix:///run/containerd/s/398845d04b1c108bbc6c8db0dd5ccf16b8e6ea26da49523c2afe829b3b87aa28" protocol=ttrpc version=3
Sep 8 23:46:29.660513 systemd[1]: Started cri-containerd-6b0a3e450167112696c59f32ee1266e7efb852c2400d846886173f2fc0340ee9.scope - libcontainer container 6b0a3e450167112696c59f32ee1266e7efb852c2400d846886173f2fc0340ee9.
Sep 8 23:46:29.664045 systemd[1]: Started cri-containerd-aa8e464d2656410bc55c5072878a90e766384afb60cfd3a721929b8d44e1cda8.scope - libcontainer container aa8e464d2656410bc55c5072878a90e766384afb60cfd3a721929b8d44e1cda8.
Sep 8 23:46:29.676654 systemd[1]: Started cri-containerd-4cde15673ecead69ed73d94e588a536aefbd31bd4bd5c397ad6e7e413cf5d6a8.scope - libcontainer container 4cde15673ecead69ed73d94e588a536aefbd31bd4bd5c397ad6e7e413cf5d6a8.
Sep 8 23:46:29.721988 containerd[1500]: time="2025-09-08T23:46:29.721910608Z" level=info msg="StartContainer for \"aa8e464d2656410bc55c5072878a90e766384afb60cfd3a721929b8d44e1cda8\" returns successfully"
Sep 8 23:46:29.722353 containerd[1500]: time="2025-09-08T23:46:29.722245567Z" level=info msg="StartContainer for \"6b0a3e450167112696c59f32ee1266e7efb852c2400d846886173f2fc0340ee9\" returns successfully"
Sep 8 23:46:29.726079 containerd[1500]: time="2025-09-08T23:46:29.726016379Z" level=info msg="StartContainer for \"4cde15673ecead69ed73d94e588a536aefbd31bd4bd5c397ad6e7e413cf5d6a8\" returns successfully"
Sep 8 23:46:29.787790 kubelet[2270]: I0908 23:46:29.787745 2270 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 8 23:46:29.788110 kubelet[2270]: E0908 23:46:29.788082 2270 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost"
Sep 8 23:46:30.039071 kubelet[2270]: E0908 23:46:30.038962 2270 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 8 23:46:30.039159 kubelet[2270]: E0908 23:46:30.039094 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:46:30.041841 kubelet[2270]: E0908 23:46:30.041817 2270 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 8 23:46:30.041959 kubelet[2270]: E0908 23:46:30.041944 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:46:30.043756 kubelet[2270]: E0908 23:46:30.043735 2270 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 8 23:46:30.043867 kubelet[2270]: E0908 23:46:30.043835 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:46:30.590311 kubelet[2270]: I0908 23:46:30.590278 2270 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 8 23:46:31.048564 kubelet[2270]: E0908 23:46:31.048233 2270 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 8 23:46:31.048564 kubelet[2270]: E0908 23:46:31.048382 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:46:31.048886 kubelet[2270]: E0908 23:46:31.048606 2270 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 8 23:46:31.048886 kubelet[2270]: E0908 23:46:31.048716 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:46:31.288392 kubelet[2270]: E0908 23:46:31.288339 2270 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 8 23:46:31.360887 kubelet[2270]: I0908 23:46:31.360141 2270 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 8 23:46:31.418374 kubelet[2270]: I0908 23:46:31.418305 2270 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 8 23:46:31.432157 kubelet[2270]: E0908 23:46:31.432104 2270 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 8 23:46:31.432157 kubelet[2270]: I0908 23:46:31.432154 2270 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 8 23:46:31.435642 kubelet[2270]: E0908 23:46:31.434160 2270 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 8 23:46:31.435642 kubelet[2270]: I0908 23:46:31.434192 2270 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:46:31.437146 kubelet[2270]: E0908 23:46:31.437113 2270 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:46:32.005358 kubelet[2270]: I0908 23:46:32.005317 2270 apiserver.go:52] "Watching apiserver"
Sep 8 23:46:32.017964 kubelet[2270]: I0908 23:46:32.017916 2270 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 8 23:46:33.475247 systemd[1]: Reload requested from client PID 2553 ('systemctl') (unit session-7.scope)...
Sep 8 23:46:33.475265 systemd[1]: Reloading...
Sep 8 23:46:33.539455 zram_generator::config[2598]: No configuration found.
Sep 8 23:46:33.710350 systemd[1]: Reloading finished in 234 ms.
Sep 8 23:46:33.739149 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 8 23:46:33.754762 systemd[1]: kubelet.service: Deactivated successfully.
Sep 8 23:46:33.755067 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 8 23:46:33.755131 systemd[1]: kubelet.service: Consumed 869ms CPU time, 127.9M memory peak.
Sep 8 23:46:33.757020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 8 23:46:33.906954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 8 23:46:33.911510 (kubelet)[2638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 8 23:46:33.958301 kubelet[2638]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 8 23:46:33.958301 kubelet[2638]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 8 23:46:33.958301 kubelet[2638]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 8 23:46:33.958713 kubelet[2638]: I0908 23:46:33.958415 2638 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 8 23:46:33.970416 kubelet[2638]: I0908 23:46:33.970356 2638 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 8 23:46:33.970416 kubelet[2638]: I0908 23:46:33.970399 2638 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 8 23:46:33.970731 kubelet[2638]: I0908 23:46:33.970713 2638 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 8 23:46:33.972142 kubelet[2638]: I0908 23:46:33.972113 2638 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 8 23:46:33.974776 kubelet[2638]: I0908 23:46:33.974627 2638 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 8 23:46:33.978421 kubelet[2638]: I0908 23:46:33.978376 2638 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 8 23:46:33.981152 kubelet[2638]: I0908 23:46:33.981130 2638 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 8 23:46:33.981362 kubelet[2638]: I0908 23:46:33.981319 2638 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 8 23:46:33.981556 kubelet[2638]: I0908 23:46:33.981351 2638 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 8 23:46:33.981642 kubelet[2638]: I0908 23:46:33.981560 2638 topology_manager.go:138] "Creating topology manager with none policy"
Sep 8 23:46:33.981642 kubelet[2638]: I0908 23:46:33.981568 2638 container_manager_linux.go:304] "Creating device plugin manager"
Sep 8 23:46:33.981642 kubelet[2638]: I0908 23:46:33.981613 2638 state_mem.go:36] "Initialized new in-memory state store"
Sep 8 23:46:33.981864 kubelet[2638]: I0908 23:46:33.981742 2638 kubelet.go:446] "Attempting to sync node with API server"
Sep 8 23:46:33.981864 kubelet[2638]: I0908 23:46:33.981758 2638 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 8 23:46:33.981864 kubelet[2638]: I0908 23:46:33.981784 2638 kubelet.go:352] "Adding apiserver pod source"
Sep 8 23:46:33.981864 kubelet[2638]: I0908 23:46:33.981795 2638 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 8 23:46:33.984109 kubelet[2638]: I0908 23:46:33.984081 2638 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 8 23:46:33.985624 kubelet[2638]: I0908 23:46:33.984771 2638 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 8 23:46:33.985624 kubelet[2638]: I0908 23:46:33.985330 2638 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 8 23:46:33.985624 kubelet[2638]: I0908 23:46:33.985380 2638 server.go:1287] "Started kubelet"
Sep 8 23:46:33.986088 kubelet[2638]: I0908 23:46:33.986052 2638 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 8 23:46:33.986985 kubelet[2638]: I0908 23:46:33.986951 2638 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 8 23:46:33.987408 kubelet[2638]: I0908 23:46:33.987388 2638 server.go:479] "Adding debug handlers to kubelet server"
Sep 8 23:46:33.989230 kubelet[2638]: I0908 23:46:33.989175 2638 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 8 23:46:33.989565 kubelet[2638]: I0908 23:46:33.989502 2638 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 8 23:46:33.992086 kubelet[2638]: I0908 23:46:33.992053 2638 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 8 23:46:33.992221 kubelet[2638]: I0908 23:46:33.992204 2638 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 8 23:46:33.992529 kubelet[2638]: I0908 23:46:33.992502 2638 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 8 23:46:33.992878 kubelet[2638]: E0908 23:46:33.992054 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 8 23:46:33.994428 kubelet[2638]: I0908 23:46:33.993070 2638 reconciler.go:26] "Reconciler: start to sync state"
Sep 8 23:46:34.003376 kubelet[2638]: I0908 23:46:34.001269 2638 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 8 23:46:34.003492 kubelet[2638]: I0908 23:46:34.003380 2638 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 8 23:46:34.003492 kubelet[2638]: I0908 23:46:34.003452 2638 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 8 23:46:34.003492 kubelet[2638]: I0908 23:46:34.003476 2638 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 8 23:46:34.003492 kubelet[2638]: I0908 23:46:34.003482 2638 kubelet.go:2382] "Starting kubelet main sync loop" Sep 8 23:46:34.003580 kubelet[2638]: E0908 23:46:34.003528 2638 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 8 23:46:34.003957 kubelet[2638]: I0908 23:46:34.003931 2638 factory.go:221] Registration of the systemd container factory successfully Sep 8 23:46:34.004160 kubelet[2638]: I0908 23:46:34.004136 2638 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 8 23:46:34.012826 kubelet[2638]: I0908 23:46:34.012784 2638 factory.go:221] Registration of the containerd container factory successfully Sep 8 23:46:34.048750 kubelet[2638]: I0908 23:46:34.048718 2638 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 8 23:46:34.048889 kubelet[2638]: I0908 23:46:34.048763 2638 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 8 23:46:34.048889 kubelet[2638]: I0908 23:46:34.048787 2638 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:46:34.049077 kubelet[2638]: I0908 23:46:34.049057 2638 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 8 23:46:34.049112 kubelet[2638]: I0908 23:46:34.049078 2638 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 8 23:46:34.049112 kubelet[2638]: I0908 23:46:34.049098 2638 policy_none.go:49] "None policy: Start" Sep 8 23:46:34.049112 kubelet[2638]: I0908 23:46:34.049107 2638 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 8 23:46:34.049187 kubelet[2638]: I0908 23:46:34.049119 2638 state_mem.go:35] "Initializing new in-memory state store" Sep 8 23:46:34.049235 kubelet[2638]: I0908 23:46:34.049223 2638 state_mem.go:75] "Updated machine memory state" Sep 8 23:46:34.053313 kubelet[2638]: I0908 
23:46:34.053212 2638 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 8 23:46:34.053445 kubelet[2638]: I0908 23:46:34.053404 2638 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 8 23:46:34.053445 kubelet[2638]: I0908 23:46:34.053419 2638 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 8 23:46:34.053648 kubelet[2638]: I0908 23:46:34.053626 2638 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 8 23:46:34.054462 kubelet[2638]: E0908 23:46:34.054265 2638 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 8 23:46:34.104132 kubelet[2638]: I0908 23:46:34.104095 2638 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:46:34.104387 kubelet[2638]: I0908 23:46:34.104138 2638 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 8 23:46:34.104465 kubelet[2638]: I0908 23:46:34.104141 2638 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 8 23:46:34.157701 kubelet[2638]: I0908 23:46:34.157648 2638 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:46:34.197756 kubelet[2638]: I0908 23:46:34.197709 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a1cc99704c4fbe4ce1f9fdba6e4da442-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a1cc99704c4fbe4ce1f9fdba6e4da442\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:46:34.197756 kubelet[2638]: I0908 23:46:34.197748 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/a1cc99704c4fbe4ce1f9fdba6e4da442-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a1cc99704c4fbe4ce1f9fdba6e4da442\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:46:34.197913 kubelet[2638]: I0908 23:46:34.197770 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a1cc99704c4fbe4ce1f9fdba6e4da442-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a1cc99704c4fbe4ce1f9fdba6e4da442\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:46:34.197913 kubelet[2638]: I0908 23:46:34.197788 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:46:34.197913 kubelet[2638]: I0908 23:46:34.197832 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 8 23:46:34.197913 kubelet[2638]: I0908 23:46:34.197845 2638 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 8 23:46:34.197913 kubelet[2638]: I0908 23:46:34.197898 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:46:34.197913 kubelet[2638]: I0908 23:46:34.197917 2638 
kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 8 23:46:34.198058 kubelet[2638]: I0908 23:46:34.197932 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:46:34.198058 kubelet[2638]: I0908 23:46:34.197953 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:46:34.198058 kubelet[2638]: I0908 23:46:34.197986 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:46:34.477935 sudo[2673]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 8 23:46:34.478235 sudo[2673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 8 23:46:34.487457 kubelet[2638]: E0908 23:46:34.487393 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:46:34.487457 kubelet[2638]: E0908 23:46:34.487442 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:46:34.487622 kubelet[2638]: E0908 23:46:34.487584 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:46:34.807491 sudo[2673]: pam_unix(sudo:session): session closed for user root Sep 8 23:46:34.982466 kubelet[2638]: I0908 23:46:34.982418 2638 apiserver.go:52] "Watching apiserver" Sep 8 23:46:34.992675 kubelet[2638]: I0908 23:46:34.992618 2638 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 8 23:46:35.034272 kubelet[2638]: I0908 23:46:35.034217 2638 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 8 23:46:35.034658 kubelet[2638]: I0908 23:46:35.034614 2638 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 8 23:46:35.034926 kubelet[2638]: I0908 23:46:35.034896 2638 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:46:35.045386 kubelet[2638]: E0908 23:46:35.044234 2638 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:46:35.045386 kubelet[2638]: E0908 23:46:35.044430 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:46:35.045386 kubelet[2638]: E0908 23:46:35.044237 2638 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 8 23:46:35.045386 kubelet[2638]: E0908 23:46:35.044872 2638 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 8 
23:46:35.045386 kubelet[2638]: E0908 23:46:35.044989 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:46:35.045386 kubelet[2638]: E0908 23:46:35.045148 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:46:35.067129 kubelet[2638]: I0908 23:46:35.066869 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.066851784 podStartE2EDuration="1.066851784s" podCreationTimestamp="2025-09-08 23:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:46:35.066824176 +0000 UTC m=+1.152153854" watchObservedRunningTime="2025-09-08 23:46:35.066851784 +0000 UTC m=+1.152181422" Sep 8 23:46:35.067129 kubelet[2638]: I0908 23:46:35.067031 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.06699355 podStartE2EDuration="1.06699355s" podCreationTimestamp="2025-09-08 23:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:46:35.059184647 +0000 UTC m=+1.144514325" watchObservedRunningTime="2025-09-08 23:46:35.06699355 +0000 UTC m=+1.152323308" Sep 8 23:46:35.088456 kubelet[2638]: I0908 23:46:35.088336 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.088317145 podStartE2EDuration="1.088317145s" podCreationTimestamp="2025-09-08 23:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-08 23:46:35.078775566 +0000 UTC m=+1.164105244" watchObservedRunningTime="2025-09-08 23:46:35.088317145 +0000 UTC m=+1.173646823" Sep 8 23:46:36.036403 kubelet[2638]: E0908 23:46:36.036044 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:46:36.036403 kubelet[2638]: E0908 23:46:36.036102 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:46:36.037122 kubelet[2638]: E0908 23:46:36.037099 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:46:37.037599 kubelet[2638]: E0908 23:46:37.037149 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:46:37.037599 kubelet[2638]: E0908 23:46:37.037578 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:46:37.128512 sudo[1714]: pam_unix(sudo:session): session closed for user root Sep 8 23:46:37.129611 sshd[1713]: Connection closed by 10.0.0.1 port 44208 Sep 8 23:46:37.130127 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Sep 8 23:46:37.136728 systemd[1]: sshd@6-10.0.0.103:22-10.0.0.1:44208.service: Deactivated successfully. Sep 8 23:46:37.139010 systemd[1]: session-7.scope: Deactivated successfully. Sep 8 23:46:37.139319 systemd[1]: session-7.scope: Consumed 7.182s CPU time, 255.9M memory peak. Sep 8 23:46:37.140298 systemd-logind[1482]: Session 7 logged out. Waiting for processes to exit. 
Sep 8 23:46:37.141355 systemd-logind[1482]: Removed session 7. Sep 8 23:46:38.040206 kubelet[2638]: E0908 23:46:38.040126 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:46:38.431986 kubelet[2638]: I0908 23:46:38.431880 2638 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 8 23:46:38.432542 containerd[1500]: time="2025-09-08T23:46:38.432193310Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 8 23:46:38.433386 kubelet[2638]: I0908 23:46:38.432967 2638 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 8 23:46:39.282834 kubelet[2638]: E0908 23:46:39.282793 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:46:39.397090 systemd[1]: Created slice kubepods-besteffort-pod8f4164ca_d176_410a_b4a1_0bbb7af8cbe5.slice - libcontainer container kubepods-besteffort-pod8f4164ca_d176_410a_b4a1_0bbb7af8cbe5.slice. Sep 8 23:46:39.412958 systemd[1]: Created slice kubepods-burstable-pod0def7320_fc04_4b65_a072_e0e6d156f58b.slice - libcontainer container kubepods-burstable-pod0def7320_fc04_4b65_a072_e0e6d156f58b.slice. 
Sep 8 23:46:39.439134 kubelet[2638]: I0908 23:46:39.439090 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f4164ca-d176-410a-b4a1-0bbb7af8cbe5-lib-modules\") pod \"kube-proxy-j4xrs\" (UID: \"8f4164ca-d176-410a-b4a1-0bbb7af8cbe5\") " pod="kube-system/kube-proxy-j4xrs" Sep 8 23:46:39.439291 kubelet[2638]: I0908 23:46:39.439158 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-cni-path\") pod \"cilium-bvlld\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") " pod="kube-system/cilium-bvlld" Sep 8 23:46:39.439291 kubelet[2638]: I0908 23:46:39.439184 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-etc-cni-netd\") pod \"cilium-bvlld\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") " pod="kube-system/cilium-bvlld" Sep 8 23:46:39.439291 kubelet[2638]: I0908 23:46:39.439206 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f85lh\" (UniqueName: \"kubernetes.io/projected/0def7320-fc04-4b65-a072-e0e6d156f58b-kube-api-access-f85lh\") pod \"cilium-bvlld\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") " pod="kube-system/cilium-bvlld" Sep 8 23:46:39.439291 kubelet[2638]: I0908 23:46:39.439228 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-hostproc\") pod \"cilium-bvlld\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") " pod="kube-system/cilium-bvlld" Sep 8 23:46:39.439291 kubelet[2638]: I0908 23:46:39.439247 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-host-proc-sys-kernel\") pod \"cilium-bvlld\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") " pod="kube-system/cilium-bvlld" Sep 8 23:46:39.439291 kubelet[2638]: I0908 23:46:39.439285 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-bpf-maps\") pod \"cilium-bvlld\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") " pod="kube-system/cilium-bvlld" Sep 8 23:46:39.439473 kubelet[2638]: I0908 23:46:39.439313 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0def7320-fc04-4b65-a072-e0e6d156f58b-clustermesh-secrets\") pod \"cilium-bvlld\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") " pod="kube-system/cilium-bvlld" Sep 8 23:46:39.439473 kubelet[2638]: I0908 23:46:39.439345 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-cilium-run\") pod \"cilium-bvlld\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") " pod="kube-system/cilium-bvlld" Sep 8 23:46:39.439473 kubelet[2638]: I0908 23:46:39.439375 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-cilium-cgroup\") pod \"cilium-bvlld\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") " pod="kube-system/cilium-bvlld" Sep 8 23:46:39.439473 kubelet[2638]: I0908 23:46:39.439401 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-xtables-lock\") pod 
\"cilium-bvlld\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") " pod="kube-system/cilium-bvlld" Sep 8 23:46:39.439473 kubelet[2638]: I0908 23:46:39.439418 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0def7320-fc04-4b65-a072-e0e6d156f58b-hubble-tls\") pod \"cilium-bvlld\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") " pod="kube-system/cilium-bvlld" Sep 8 23:46:39.439473 kubelet[2638]: I0908 23:46:39.439441 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f4164ca-d176-410a-b4a1-0bbb7af8cbe5-xtables-lock\") pod \"kube-proxy-j4xrs\" (UID: \"8f4164ca-d176-410a-b4a1-0bbb7af8cbe5\") " pod="kube-system/kube-proxy-j4xrs" Sep 8 23:46:39.439583 kubelet[2638]: I0908 23:46:39.439459 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h99wk\" (UniqueName: \"kubernetes.io/projected/8f4164ca-d176-410a-b4a1-0bbb7af8cbe5-kube-api-access-h99wk\") pod \"kube-proxy-j4xrs\" (UID: \"8f4164ca-d176-410a-b4a1-0bbb7af8cbe5\") " pod="kube-system/kube-proxy-j4xrs" Sep 8 23:46:39.439583 kubelet[2638]: I0908 23:46:39.439476 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8f4164ca-d176-410a-b4a1-0bbb7af8cbe5-kube-proxy\") pod \"kube-proxy-j4xrs\" (UID: \"8f4164ca-d176-410a-b4a1-0bbb7af8cbe5\") " pod="kube-system/kube-proxy-j4xrs" Sep 8 23:46:39.439583 kubelet[2638]: I0908 23:46:39.439492 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0def7320-fc04-4b65-a072-e0e6d156f58b-cilium-config-path\") pod \"cilium-bvlld\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") " pod="kube-system/cilium-bvlld" Sep 8 
23:46:39.439583 kubelet[2638]: I0908 23:46:39.439508 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-host-proc-sys-net\") pod \"cilium-bvlld\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") " pod="kube-system/cilium-bvlld" Sep 8 23:46:39.439583 kubelet[2638]: I0908 23:46:39.439525 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-lib-modules\") pod \"cilium-bvlld\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") " pod="kube-system/cilium-bvlld" Sep 8 23:46:39.494283 systemd[1]: Created slice kubepods-besteffort-poddeb9b826_dd37_4905_9c80_b248fedd5538.slice - libcontainer container kubepods-besteffort-poddeb9b826_dd37_4905_9c80_b248fedd5538.slice. Sep 8 23:46:39.540217 kubelet[2638]: I0908 23:46:39.540107 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/deb9b826-dd37-4905-9c80-b248fedd5538-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-22smx\" (UID: \"deb9b826-dd37-4905-9c80-b248fedd5538\") " pod="kube-system/cilium-operator-6c4d7847fc-22smx" Sep 8 23:46:39.540928 kubelet[2638]: I0908 23:46:39.540889 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8d89\" (UniqueName: \"kubernetes.io/projected/deb9b826-dd37-4905-9c80-b248fedd5538-kube-api-access-g8d89\") pod \"cilium-operator-6c4d7847fc-22smx\" (UID: \"deb9b826-dd37-4905-9c80-b248fedd5538\") " pod="kube-system/cilium-operator-6c4d7847fc-22smx" Sep 8 23:46:39.706807 kubelet[2638]: E0908 23:46:39.706765 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:46:39.707485 containerd[1500]: time="2025-09-08T23:46:39.707433393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j4xrs,Uid:8f4164ca-d176-410a-b4a1-0bbb7af8cbe5,Namespace:kube-system,Attempt:0,}" Sep 8 23:46:39.716884 kubelet[2638]: E0908 23:46:39.716846 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:46:39.717591 containerd[1500]: time="2025-09-08T23:46:39.717556654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bvlld,Uid:0def7320-fc04-4b65-a072-e0e6d156f58b,Namespace:kube-system,Attempt:0,}" Sep 8 23:46:39.776322 containerd[1500]: time="2025-09-08T23:46:39.775844802Z" level=info msg="connecting to shim 7ae1ff39df3b37238c55c29ae6dbce16caea196f9038ea21a435f6c2068619ba" address="unix:///run/containerd/s/3a1dd4c8a0d450b1b1994da13aa41eda144bfc4900f84e9ee23c97721cc54b56" namespace=k8s.io protocol=ttrpc version=3 Sep 8 23:46:39.779406 containerd[1500]: time="2025-09-08T23:46:39.779348276Z" level=info msg="connecting to shim 26d7fb3c6855dbadb5bad29741976e6347d8b19d20fa28d3dd6c60746a5ae8be" address="unix:///run/containerd/s/9cdb0b3c95f07035a727abc04eceb5ca73251bdadad5b6dcc33ae940019e94ab" namespace=k8s.io protocol=ttrpc version=3 Sep 8 23:46:39.797588 systemd[1]: Started cri-containerd-7ae1ff39df3b37238c55c29ae6dbce16caea196f9038ea21a435f6c2068619ba.scope - libcontainer container 7ae1ff39df3b37238c55c29ae6dbce16caea196f9038ea21a435f6c2068619ba. 
Sep 8 23:46:39.799873 kubelet[2638]: E0908 23:46:39.799787 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:46:39.800209 systemd[1]: Started cri-containerd-26d7fb3c6855dbadb5bad29741976e6347d8b19d20fa28d3dd6c60746a5ae8be.scope - libcontainer container 26d7fb3c6855dbadb5bad29741976e6347d8b19d20fa28d3dd6c60746a5ae8be.
Sep 8 23:46:39.802156 containerd[1500]: time="2025-09-08T23:46:39.800543551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-22smx,Uid:deb9b826-dd37-4905-9c80-b248fedd5538,Namespace:kube-system,Attempt:0,}"
Sep 8 23:46:39.835853 containerd[1500]: time="2025-09-08T23:46:39.835805971Z" level=info msg="connecting to shim 4470b57a6bee25bd4562ef924d2d2a6540fd175541974bf3427ec05b311458f9" address="unix:///run/containerd/s/971288b97790ae392cf86d8edcafaeacb9affd594e14c2e22b54476711e74ca7" namespace=k8s.io protocol=ttrpc version=3
Sep 8 23:46:39.839430 containerd[1500]: time="2025-09-08T23:46:39.838768295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j4xrs,Uid:8f4164ca-d176-410a-b4a1-0bbb7af8cbe5,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ae1ff39df3b37238c55c29ae6dbce16caea196f9038ea21a435f6c2068619ba\""
Sep 8 23:46:39.840463 kubelet[2638]: E0908 23:46:39.840023 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:46:39.844790 containerd[1500]: time="2025-09-08T23:46:39.844637290Z" level=info msg="CreateContainer within sandbox \"7ae1ff39df3b37238c55c29ae6dbce16caea196f9038ea21a435f6c2068619ba\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 8 23:46:39.848124 containerd[1500]: time="2025-09-08T23:46:39.848078710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bvlld,Uid:0def7320-fc04-4b65-a072-e0e6d156f58b,Namespace:kube-system,Attempt:0,} returns sandbox id \"26d7fb3c6855dbadb5bad29741976e6347d8b19d20fa28d3dd6c60746a5ae8be\""
Sep 8 23:46:39.857748 containerd[1500]: time="2025-09-08T23:46:39.857705351Z" level=info msg="Container 1c21a9ef63d0ef4ed1232b13f5dec6e5efc9d1b9af7fefadfcbd3885082e53d6: CDI devices from CRI Config.CDIDevices: []"
Sep 8 23:46:39.858702 kubelet[2638]: E0908 23:46:39.858674 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:46:39.861207 containerd[1500]: time="2025-09-08T23:46:39.860969895Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 8 23:46:39.869479 containerd[1500]: time="2025-09-08T23:46:39.869425417Z" level=info msg="CreateContainer within sandbox \"7ae1ff39df3b37238c55c29ae6dbce16caea196f9038ea21a435f6c2068619ba\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1c21a9ef63d0ef4ed1232b13f5dec6e5efc9d1b9af7fefadfcbd3885082e53d6\""
Sep 8 23:46:39.870799 containerd[1500]: time="2025-09-08T23:46:39.870693515Z" level=info msg="StartContainer for \"1c21a9ef63d0ef4ed1232b13f5dec6e5efc9d1b9af7fefadfcbd3885082e53d6\""
Sep 8 23:46:39.872543 containerd[1500]: time="2025-09-08T23:46:39.872095160Z" level=info msg="connecting to shim 1c21a9ef63d0ef4ed1232b13f5dec6e5efc9d1b9af7fefadfcbd3885082e53d6" address="unix:///run/containerd/s/3a1dd4c8a0d450b1b1994da13aa41eda144bfc4900f84e9ee23c97721cc54b56" protocol=ttrpc version=3
Sep 8 23:46:39.874257 systemd[1]: Started cri-containerd-4470b57a6bee25bd4562ef924d2d2a6540fd175541974bf3427ec05b311458f9.scope - libcontainer container 4470b57a6bee25bd4562ef924d2d2a6540fd175541974bf3427ec05b311458f9.
Sep 8 23:46:39.902586 systemd[1]: Started cri-containerd-1c21a9ef63d0ef4ed1232b13f5dec6e5efc9d1b9af7fefadfcbd3885082e53d6.scope - libcontainer container 1c21a9ef63d0ef4ed1232b13f5dec6e5efc9d1b9af7fefadfcbd3885082e53d6.
Sep 8 23:46:39.932285 containerd[1500]: time="2025-09-08T23:46:39.932242287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-22smx,Uid:deb9b826-dd37-4905-9c80-b248fedd5538,Namespace:kube-system,Attempt:0,} returns sandbox id \"4470b57a6bee25bd4562ef924d2d2a6540fd175541974bf3427ec05b311458f9\""
Sep 8 23:46:39.932870 kubelet[2638]: E0908 23:46:39.932839 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:46:39.949460 containerd[1500]: time="2025-09-08T23:46:39.949383058Z" level=info msg="StartContainer for \"1c21a9ef63d0ef4ed1232b13f5dec6e5efc9d1b9af7fefadfcbd3885082e53d6\" returns successfully"
Sep 8 23:46:40.046038 kubelet[2638]: E0908 23:46:40.045982 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:46:40.049577 kubelet[2638]: E0908 23:46:40.049091 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:46:40.069260 kubelet[2638]: I0908 23:46:40.069200 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j4xrs" podStartSLOduration=1.069181807 podStartE2EDuration="1.069181807s" podCreationTimestamp="2025-09-08 23:46:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:46:40.060155131 +0000 UTC m=+6.145484809" watchObservedRunningTime="2025-09-08 23:46:40.069181807 +0000 UTC m=+6.154511445"
Sep 8 23:46:40.105222 kubelet[2638]: E0908 23:46:40.105151 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:46:41.051315 kubelet[2638]: E0908 23:46:41.051269 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:46:41.051681 kubelet[2638]: E0908 23:46:41.051476 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:46:46.409955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2516482186.mount: Deactivated successfully.
Sep 8 23:46:47.492285 kubelet[2638]: E0908 23:46:47.492245 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:46:48.063905 kubelet[2638]: E0908 23:46:48.063873 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:46:52.547965 update_engine[1487]: I20250908 23:46:52.547436 1487 update_attempter.cc:509] Updating boot flags...
Sep 8 23:47:03.425234 systemd[1]: Started sshd@7-10.0.0.103:22-10.0.0.1:33114.service - OpenSSH per-connection server daemon (10.0.0.1:33114).
Sep 8 23:47:03.473956 sshd[3060]: Accepted publickey for core from 10.0.0.1 port 33114 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic
Sep 8 23:47:03.475597 sshd-session[3060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:47:03.481319 systemd-logind[1482]: New session 8 of user core.
Sep 8 23:47:03.494575 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 8 23:47:03.630442 sshd[3063]: Connection closed by 10.0.0.1 port 33114
Sep 8 23:47:03.631177 sshd-session[3060]: pam_unix(sshd:session): session closed for user core
Sep 8 23:47:03.634816 systemd[1]: sshd@7-10.0.0.103:22-10.0.0.1:33114.service: Deactivated successfully.
Sep 8 23:47:03.636881 systemd[1]: session-8.scope: Deactivated successfully.
Sep 8 23:47:03.638529 systemd-logind[1482]: Session 8 logged out. Waiting for processes to exit.
Sep 8 23:47:03.640859 systemd-logind[1482]: Removed session 8.
Sep 8 23:47:08.654533 systemd[1]: Started sshd@8-10.0.0.103:22-10.0.0.1:33142.service - OpenSSH per-connection server daemon (10.0.0.1:33142).
Sep 8 23:47:08.719862 sshd[3078]: Accepted publickey for core from 10.0.0.1 port 33142 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic
Sep 8 23:47:08.721250 sshd-session[3078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:47:08.726117 systemd-logind[1482]: New session 9 of user core.
Sep 8 23:47:08.741596 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 8 23:47:08.863510 sshd[3081]: Connection closed by 10.0.0.1 port 33142
Sep 8 23:47:08.864222 sshd-session[3078]: pam_unix(sshd:session): session closed for user core
Sep 8 23:47:08.867656 systemd-logind[1482]: Session 9 logged out. Waiting for processes to exit.
Sep 8 23:47:08.867762 systemd[1]: sshd@8-10.0.0.103:22-10.0.0.1:33142.service: Deactivated successfully.
Sep 8 23:47:08.871234 systemd[1]: session-9.scope: Deactivated successfully.
Sep 8 23:47:08.873396 systemd-logind[1482]: Removed session 9.
Sep 8 23:47:13.876337 systemd[1]: Started sshd@9-10.0.0.103:22-10.0.0.1:37520.service - OpenSSH per-connection server daemon (10.0.0.1:37520).
Sep 8 23:47:13.951913 sshd[3100]: Accepted publickey for core from 10.0.0.1 port 37520 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic
Sep 8 23:47:13.953555 sshd-session[3100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:47:13.957337 systemd-logind[1482]: New session 10 of user core.
Sep 8 23:47:13.965556 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 8 23:47:14.105457 sshd[3103]: Connection closed by 10.0.0.1 port 37520
Sep 8 23:47:14.106353 sshd-session[3100]: pam_unix(sshd:session): session closed for user core
Sep 8 23:47:14.112492 systemd[1]: sshd@9-10.0.0.103:22-10.0.0.1:37520.service: Deactivated successfully.
Sep 8 23:47:14.114941 systemd[1]: session-10.scope: Deactivated successfully.
Sep 8 23:47:14.117887 systemd-logind[1482]: Session 10 logged out. Waiting for processes to exit.
Sep 8 23:47:14.120551 systemd-logind[1482]: Removed session 10.
Sep 8 23:47:19.119307 systemd[1]: Started sshd@10-10.0.0.103:22-10.0.0.1:37564.service - OpenSSH per-connection server daemon (10.0.0.1:37564).
Sep 8 23:47:19.184121 sshd[3121]: Accepted publickey for core from 10.0.0.1 port 37564 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic
Sep 8 23:47:19.185796 sshd-session[3121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:47:19.192145 systemd-logind[1482]: New session 11 of user core.
Sep 8 23:47:19.199748 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 8 23:47:19.353020 sshd[3124]: Connection closed by 10.0.0.1 port 37564
Sep 8 23:47:19.353497 sshd-session[3121]: pam_unix(sshd:session): session closed for user core
Sep 8 23:47:19.358566 systemd[1]: sshd@10-10.0.0.103:22-10.0.0.1:37564.service: Deactivated successfully.
Sep 8 23:47:19.362485 systemd[1]: session-11.scope: Deactivated successfully.
Sep 8 23:47:19.363355 systemd-logind[1482]: Session 11 logged out. Waiting for processes to exit.
Sep 8 23:47:19.365864 systemd-logind[1482]: Removed session 11.
Sep 8 23:47:19.939080 containerd[1500]: time="2025-09-08T23:47:19.939016776Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:47:19.940258 containerd[1500]: time="2025-09-08T23:47:19.940213327Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Sep 8 23:47:19.941728 containerd[1500]: time="2025-09-08T23:47:19.941690885Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:47:19.943173 containerd[1500]: time="2025-09-08T23:47:19.943077201Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 40.082064259s"
Sep 8 23:47:19.943173 containerd[1500]: time="2025-09-08T23:47:19.943110002Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 8 23:47:19.947717 containerd[1500]: time="2025-09-08T23:47:19.947570837Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 8 23:47:19.948774 containerd[1500]: time="2025-09-08T23:47:19.948741907Z" level=info msg="CreateContainer within sandbox \"26d7fb3c6855dbadb5bad29741976e6347d8b19d20fa28d3dd6c60746a5ae8be\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 8 23:47:19.961396 containerd[1500]: time="2025-09-08T23:47:19.961160708Z" level=info msg="Container b3b96bcf77322c16dedb692515deccfefca0904fafc6fa8898284fcfbdfd239c: CDI devices from CRI Config.CDIDevices: []"
Sep 8 23:47:19.964874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2617524371.mount: Deactivated successfully.
Sep 8 23:47:19.971276 containerd[1500]: time="2025-09-08T23:47:19.971148646Z" level=info msg="CreateContainer within sandbox \"26d7fb3c6855dbadb5bad29741976e6347d8b19d20fa28d3dd6c60746a5ae8be\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b3b96bcf77322c16dedb692515deccfefca0904fafc6fa8898284fcfbdfd239c\""
Sep 8 23:47:19.971906 containerd[1500]: time="2025-09-08T23:47:19.971636899Z" level=info msg="StartContainer for \"b3b96bcf77322c16dedb692515deccfefca0904fafc6fa8898284fcfbdfd239c\""
Sep 8 23:47:19.973949 containerd[1500]: time="2025-09-08T23:47:19.973897438Z" level=info msg="connecting to shim b3b96bcf77322c16dedb692515deccfefca0904fafc6fa8898284fcfbdfd239c" address="unix:///run/containerd/s/9cdb0b3c95f07035a727abc04eceb5ca73251bdadad5b6dcc33ae940019e94ab" protocol=ttrpc version=3
Sep 8 23:47:20.021566 systemd[1]: Started cri-containerd-b3b96bcf77322c16dedb692515deccfefca0904fafc6fa8898284fcfbdfd239c.scope - libcontainer container b3b96bcf77322c16dedb692515deccfefca0904fafc6fa8898284fcfbdfd239c.
Sep 8 23:47:20.055053 containerd[1500]: time="2025-09-08T23:47:20.055012700Z" level=info msg="StartContainer for \"b3b96bcf77322c16dedb692515deccfefca0904fafc6fa8898284fcfbdfd239c\" returns successfully"
Sep 8 23:47:20.069845 systemd[1]: cri-containerd-b3b96bcf77322c16dedb692515deccfefca0904fafc6fa8898284fcfbdfd239c.scope: Deactivated successfully.
Sep 8 23:47:20.088066 containerd[1500]: time="2025-09-08T23:47:20.088006892Z" level=info msg="received exit event container_id:\"b3b96bcf77322c16dedb692515deccfefca0904fafc6fa8898284fcfbdfd239c\" id:\"b3b96bcf77322c16dedb692515deccfefca0904fafc6fa8898284fcfbdfd239c\" pid:3153 exited_at:{seconds:1757375240 nanos:82417311}"
Sep 8 23:47:20.088290 containerd[1500]: time="2025-09-08T23:47:20.088237178Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3b96bcf77322c16dedb692515deccfefca0904fafc6fa8898284fcfbdfd239c\" id:\"b3b96bcf77322c16dedb692515deccfefca0904fafc6fa8898284fcfbdfd239c\" pid:3153 exited_at:{seconds:1757375240 nanos:82417311}"
Sep 8 23:47:20.122321 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3b96bcf77322c16dedb692515deccfefca0904fafc6fa8898284fcfbdfd239c-rootfs.mount: Deactivated successfully.
Sep 8 23:47:20.129010 kubelet[2638]: E0908 23:47:20.128970 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:47:21.133329 kubelet[2638]: E0908 23:47:21.133282 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:47:21.138082 containerd[1500]: time="2025-09-08T23:47:21.138039011Z" level=info msg="CreateContainer within sandbox \"26d7fb3c6855dbadb5bad29741976e6347d8b19d20fa28d3dd6c60746a5ae8be\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 8 23:47:21.157458 containerd[1500]: time="2025-09-08T23:47:21.157407848Z" level=info msg="Container e8b2f5c811bb6f87fc5b3f1f4ab6cb7005ccfa30eca3ef015b59010a67450753: CDI devices from CRI Config.CDIDevices: []"
Sep 8 23:47:21.162095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount327727041.mount: Deactivated successfully.
Sep 8 23:47:21.167709 containerd[1500]: time="2025-09-08T23:47:21.167665781Z" level=info msg="CreateContainer within sandbox \"26d7fb3c6855dbadb5bad29741976e6347d8b19d20fa28d3dd6c60746a5ae8be\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e8b2f5c811bb6f87fc5b3f1f4ab6cb7005ccfa30eca3ef015b59010a67450753\""
Sep 8 23:47:21.170148 containerd[1500]: time="2025-09-08T23:47:21.170110561Z" level=info msg="StartContainer for \"e8b2f5c811bb6f87fc5b3f1f4ab6cb7005ccfa30eca3ef015b59010a67450753\""
Sep 8 23:47:21.171096 containerd[1500]: time="2025-09-08T23:47:21.171068185Z" level=info msg="connecting to shim e8b2f5c811bb6f87fc5b3f1f4ab6cb7005ccfa30eca3ef015b59010a67450753" address="unix:///run/containerd/s/9cdb0b3c95f07035a727abc04eceb5ca73251bdadad5b6dcc33ae940019e94ab" protocol=ttrpc version=3
Sep 8 23:47:21.194546 systemd[1]: Started cri-containerd-e8b2f5c811bb6f87fc5b3f1f4ab6cb7005ccfa30eca3ef015b59010a67450753.scope - libcontainer container e8b2f5c811bb6f87fc5b3f1f4ab6cb7005ccfa30eca3ef015b59010a67450753.
Sep 8 23:47:21.231729 containerd[1500]: time="2025-09-08T23:47:21.231691357Z" level=info msg="StartContainer for \"e8b2f5c811bb6f87fc5b3f1f4ab6cb7005ccfa30eca3ef015b59010a67450753\" returns successfully"
Sep 8 23:47:21.244044 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 8 23:47:21.244273 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 8 23:47:21.244736 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 8 23:47:21.247638 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 8 23:47:21.249234 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 8 23:47:21.252674 systemd[1]: cri-containerd-e8b2f5c811bb6f87fc5b3f1f4ab6cb7005ccfa30eca3ef015b59010a67450753.scope: Deactivated successfully.
Sep 8 23:47:21.253089 containerd[1500]: time="2025-09-08T23:47:21.253057443Z" level=info msg="received exit event container_id:\"e8b2f5c811bb6f87fc5b3f1f4ab6cb7005ccfa30eca3ef015b59010a67450753\" id:\"e8b2f5c811bb6f87fc5b3f1f4ab6cb7005ccfa30eca3ef015b59010a67450753\" pid:3198 exited_at:{seconds:1757375241 nanos:252891319}"
Sep 8 23:47:21.253479 containerd[1500]: time="2025-09-08T23:47:21.253449613Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8b2f5c811bb6f87fc5b3f1f4ab6cb7005ccfa30eca3ef015b59010a67450753\" id:\"e8b2f5c811bb6f87fc5b3f1f4ab6cb7005ccfa30eca3ef015b59010a67450753\" pid:3198 exited_at:{seconds:1757375241 nanos:252891319}"
Sep 8 23:47:21.267107 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 8 23:47:22.138271 kubelet[2638]: E0908 23:47:22.138239 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:47:22.141166 containerd[1500]: time="2025-09-08T23:47:22.140798266Z" level=info msg="CreateContainer within sandbox \"26d7fb3c6855dbadb5bad29741976e6347d8b19d20fa28d3dd6c60746a5ae8be\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 8 23:47:22.152942 containerd[1500]: time="2025-09-08T23:47:22.152877116Z" level=info msg="Container 3e1390510624030f6bee1f3d8af92cdf75cbb85221dc672721af4c3d7065fb80: CDI devices from CRI Config.CDIDevices: []"
Sep 8 23:47:22.155746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8b2f5c811bb6f87fc5b3f1f4ab6cb7005ccfa30eca3ef015b59010a67450753-rootfs.mount: Deactivated successfully.
Sep 8 23:47:22.163488 containerd[1500]: time="2025-09-08T23:47:22.163447171Z" level=info msg="CreateContainer within sandbox \"26d7fb3c6855dbadb5bad29741976e6347d8b19d20fa28d3dd6c60746a5ae8be\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3e1390510624030f6bee1f3d8af92cdf75cbb85221dc672721af4c3d7065fb80\""
Sep 8 23:47:22.164082 containerd[1500]: time="2025-09-08T23:47:22.163872981Z" level=info msg="StartContainer for \"3e1390510624030f6bee1f3d8af92cdf75cbb85221dc672721af4c3d7065fb80\""
Sep 8 23:47:22.171334 containerd[1500]: time="2025-09-08T23:47:22.171272159Z" level=info msg="connecting to shim 3e1390510624030f6bee1f3d8af92cdf75cbb85221dc672721af4c3d7065fb80" address="unix:///run/containerd/s/9cdb0b3c95f07035a727abc04eceb5ca73251bdadad5b6dcc33ae940019e94ab" protocol=ttrpc version=3
Sep 8 23:47:22.193530 systemd[1]: Started cri-containerd-3e1390510624030f6bee1f3d8af92cdf75cbb85221dc672721af4c3d7065fb80.scope - libcontainer container 3e1390510624030f6bee1f3d8af92cdf75cbb85221dc672721af4c3d7065fb80.
Sep 8 23:47:22.227121 systemd[1]: cri-containerd-3e1390510624030f6bee1f3d8af92cdf75cbb85221dc672721af4c3d7065fb80.scope: Deactivated successfully.
Sep 8 23:47:22.227882 containerd[1500]: time="2025-09-08T23:47:22.227839921Z" level=info msg="StartContainer for \"3e1390510624030f6bee1f3d8af92cdf75cbb85221dc672721af4c3d7065fb80\" returns successfully"
Sep 8 23:47:22.237555 containerd[1500]: time="2025-09-08T23:47:22.237522394Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e1390510624030f6bee1f3d8af92cdf75cbb85221dc672721af4c3d7065fb80\" id:\"3e1390510624030f6bee1f3d8af92cdf75cbb85221dc672721af4c3d7065fb80\" pid:3248 exited_at:{seconds:1757375242 nanos:237186305}"
Sep 8 23:47:22.237660 containerd[1500]: time="2025-09-08T23:47:22.237604316Z" level=info msg="received exit event container_id:\"3e1390510624030f6bee1f3d8af92cdf75cbb85221dc672721af4c3d7065fb80\" id:\"3e1390510624030f6bee1f3d8af92cdf75cbb85221dc672721af4c3d7065fb80\" pid:3248 exited_at:{seconds:1757375242 nanos:237186305}"
Sep 8 23:47:22.256611 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e1390510624030f6bee1f3d8af92cdf75cbb85221dc672721af4c3d7065fb80-rootfs.mount: Deactivated successfully.
Sep 8 23:47:23.158432 kubelet[2638]: E0908 23:47:23.158392 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:47:23.168510 containerd[1500]: time="2025-09-08T23:47:23.167819296Z" level=info msg="CreateContainer within sandbox \"26d7fb3c6855dbadb5bad29741976e6347d8b19d20fa28d3dd6c60746a5ae8be\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 8 23:47:23.185352 containerd[1500]: time="2025-09-08T23:47:23.184559730Z" level=info msg="Container 3e282a8b52d91094c4783bccd6a41e37e3ccd24e3f7b343e42031fb84407ebad: CDI devices from CRI Config.CDIDevices: []"
Sep 8 23:47:23.185403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3238332655.mount: Deactivated successfully.
Sep 8 23:47:23.196587 containerd[1500]: time="2025-09-08T23:47:23.196538452Z" level=info msg="CreateContainer within sandbox \"26d7fb3c6855dbadb5bad29741976e6347d8b19d20fa28d3dd6c60746a5ae8be\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3e282a8b52d91094c4783bccd6a41e37e3ccd24e3f7b343e42031fb84407ebad\""
Sep 8 23:47:23.197075 containerd[1500]: time="2025-09-08T23:47:23.197035464Z" level=info msg="StartContainer for \"3e282a8b52d91094c4783bccd6a41e37e3ccd24e3f7b343e42031fb84407ebad\""
Sep 8 23:47:23.198216 containerd[1500]: time="2025-09-08T23:47:23.198188891Z" level=info msg="connecting to shim 3e282a8b52d91094c4783bccd6a41e37e3ccd24e3f7b343e42031fb84407ebad" address="unix:///run/containerd/s/9cdb0b3c95f07035a727abc04eceb5ca73251bdadad5b6dcc33ae940019e94ab" protocol=ttrpc version=3
Sep 8 23:47:23.225571 systemd[1]: Started cri-containerd-3e282a8b52d91094c4783bccd6a41e37e3ccd24e3f7b343e42031fb84407ebad.scope - libcontainer container 3e282a8b52d91094c4783bccd6a41e37e3ccd24e3f7b343e42031fb84407ebad.
Sep 8 23:47:23.249490 systemd[1]: cri-containerd-3e282a8b52d91094c4783bccd6a41e37e3ccd24e3f7b343e42031fb84407ebad.scope: Deactivated successfully.
Sep 8 23:47:23.252186 containerd[1500]: time="2025-09-08T23:47:23.252129321Z" level=info msg="received exit event container_id:\"3e282a8b52d91094c4783bccd6a41e37e3ccd24e3f7b343e42031fb84407ebad\" id:\"3e282a8b52d91094c4783bccd6a41e37e3ccd24e3f7b343e42031fb84407ebad\" pid:3291 exited_at:{seconds:1757375243 nanos:250336238}"
Sep 8 23:47:23.252262 containerd[1500]: time="2025-09-08T23:47:23.252204322Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e282a8b52d91094c4783bccd6a41e37e3ccd24e3f7b343e42031fb84407ebad\" id:\"3e282a8b52d91094c4783bccd6a41e37e3ccd24e3f7b343e42031fb84407ebad\" pid:3291 exited_at:{seconds:1757375243 nanos:250336238}"
Sep 8 23:47:23.259575 containerd[1500]: time="2025-09-08T23:47:23.259523655Z" level=info msg="StartContainer for \"3e282a8b52d91094c4783bccd6a41e37e3ccd24e3f7b343e42031fb84407ebad\" returns successfully"
Sep 8 23:47:23.271877 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e282a8b52d91094c4783bccd6a41e37e3ccd24e3f7b343e42031fb84407ebad-rootfs.mount: Deactivated successfully.
Sep 8 23:47:23.549589 containerd[1500]: time="2025-09-08T23:47:23.549534483Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:47:23.558298 containerd[1500]: time="2025-09-08T23:47:23.557728196Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Sep 8 23:47:23.560567 containerd[1500]: time="2025-09-08T23:47:23.560531542Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:47:23.561827 containerd[1500]: time="2025-09-08T23:47:23.561798051Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.614193453s"
Sep 8 23:47:23.561949 containerd[1500]: time="2025-09-08T23:47:23.561922414Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 8 23:47:23.564913 containerd[1500]: time="2025-09-08T23:47:23.564881164Z" level=info msg="CreateContainer within sandbox \"4470b57a6bee25bd4562ef924d2d2a6540fd175541974bf3427ec05b311458f9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 8 23:47:23.571754 containerd[1500]: time="2025-09-08T23:47:23.571065990Z" level=info msg="Container c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518: CDI devices from CRI Config.CDIDevices: []"
Sep 8 23:47:23.576261 containerd[1500]: time="2025-09-08T23:47:23.576225231Z" level=info msg="CreateContainer within sandbox \"4470b57a6bee25bd4562ef924d2d2a6540fd175541974bf3427ec05b311458f9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518\""
Sep 8 23:47:23.576620 containerd[1500]: time="2025-09-08T23:47:23.576594480Z" level=info msg="StartContainer for \"c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518\""
Sep 8 23:47:23.579768 containerd[1500]: time="2025-09-08T23:47:23.579694793Z" level=info msg="connecting to shim c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518" address="unix:///run/containerd/s/971288b97790ae392cf86d8edcafaeacb9affd594e14c2e22b54476711e74ca7" protocol=ttrpc version=3
Sep 8 23:47:23.595543 systemd[1]: Started cri-containerd-c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518.scope - libcontainer container c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518.
Sep 8 23:47:23.619762 containerd[1500]: time="2025-09-08T23:47:23.619723375Z" level=info msg="StartContainer for \"c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518\" returns successfully"
Sep 8 23:47:24.165823 kubelet[2638]: E0908 23:47:24.165780 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:47:24.177572 kubelet[2638]: E0908 23:47:24.177537 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:47:24.184476 containerd[1500]: time="2025-09-08T23:47:24.184430701Z" level=info msg="CreateContainer within sandbox \"26d7fb3c6855dbadb5bad29741976e6347d8b19d20fa28d3dd6c60746a5ae8be\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 8 23:47:24.198890 kubelet[2638]: I0908 23:47:24.198815 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-22smx" podStartSLOduration=1.5698793549999999 podStartE2EDuration="45.198797432s" podCreationTimestamp="2025-09-08 23:46:39 +0000 UTC" firstStartedPulling="2025-09-08 23:46:39.93397528 +0000 UTC m=+6.019304958" lastFinishedPulling="2025-09-08 23:47:23.562893357 +0000 UTC m=+49.648223035" observedRunningTime="2025-09-08 23:47:24.191873312 +0000 UTC m=+50.277202990" watchObservedRunningTime="2025-09-08 23:47:24.198797432 +0000 UTC m=+50.284127110"
Sep 8 23:47:24.211388 containerd[1500]: time="2025-09-08T23:47:24.208937746Z" level=info msg="Container 12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4: CDI devices from CRI Config.CDIDevices: []"
Sep 8 23:47:24.233182 containerd[1500]: time="2025-09-08T23:47:24.233119103Z" level=info msg="CreateContainer within sandbox \"26d7fb3c6855dbadb5bad29741976e6347d8b19d20fa28d3dd6c60746a5ae8be\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4\""
Sep 8 23:47:24.234714 containerd[1500]: time="2025-09-08T23:47:24.233715837Z" level=info msg="StartContainer for \"12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4\""
Sep 8 23:47:24.239332 containerd[1500]: time="2025-09-08T23:47:24.237718529Z" level=info msg="connecting to shim 12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4" address="unix:///run/containerd/s/9cdb0b3c95f07035a727abc04eceb5ca73251bdadad5b6dcc33ae940019e94ab" protocol=ttrpc version=3
Sep 8 23:47:24.271596 systemd[1]: Started cri-containerd-12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4.scope - libcontainer container 12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4.
Sep 8 23:47:24.312509 containerd[1500]: time="2025-09-08T23:47:24.312393931Z" level=info msg="StartContainer for \"12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4\" returns successfully"
Sep 8 23:47:24.361892 systemd[1]: Started sshd@11-10.0.0.103:22-10.0.0.1:39406.service - OpenSSH per-connection server daemon (10.0.0.1:39406).
Sep 8 23:47:24.398009 containerd[1500]: time="2025-09-08T23:47:24.397973584Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4\" id:\"fd012ad4376fb7cd83fe6e5241f4229e2d135312d3e9284e7447155084bc2e27\" pid:3399 exited_at:{seconds:1757375244 nanos:397449812}" Sep 8 23:47:24.401612 kubelet[2638]: I0908 23:47:24.401570 2638 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 8 23:47:24.441892 sshd[3412]: Accepted publickey for core from 10.0.0.1 port 39406 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:47:24.443149 sshd-session[3412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:47:24.448087 systemd-logind[1482]: New session 12 of user core. Sep 8 23:47:24.454610 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 8 23:47:24.575161 sshd[3431]: Connection closed by 10.0.0.1 port 39406 Sep 8 23:47:24.578355 sshd-session[3412]: pam_unix(sshd:session): session closed for user core Sep 8 23:47:24.589384 systemd[1]: sshd@11-10.0.0.103:22-10.0.0.1:39406.service: Deactivated successfully. Sep 8 23:47:24.592498 systemd[1]: session-12.scope: Deactivated successfully. Sep 8 23:47:24.596442 systemd-logind[1482]: Session 12 logged out. Waiting for processes to exit. Sep 8 23:47:24.603495 systemd[1]: Started sshd@12-10.0.0.103:22-10.0.0.1:39408.service - OpenSSH per-connection server daemon (10.0.0.1:39408). Sep 8 23:47:24.605669 systemd-logind[1482]: Removed session 12. Sep 8 23:47:24.662990 sshd[3454]: Accepted publickey for core from 10.0.0.1 port 39408 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:47:24.664306 sshd-session[3454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:47:24.668633 systemd-logind[1482]: New session 13 of user core. Sep 8 23:47:24.675522 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 8 23:47:24.841804 sshd[3466]: Connection closed by 10.0.0.1 port 39408 Sep 8 23:47:24.842826 sshd-session[3454]: pam_unix(sshd:session): session closed for user core Sep 8 23:47:24.858620 systemd[1]: sshd@12-10.0.0.103:22-10.0.0.1:39408.service: Deactivated successfully. Sep 8 23:47:24.860697 systemd[1]: session-13.scope: Deactivated successfully. Sep 8 23:47:24.862400 systemd-logind[1482]: Session 13 logged out. Waiting for processes to exit. Sep 8 23:47:24.866703 systemd[1]: Started sshd@13-10.0.0.103:22-10.0.0.1:39418.service - OpenSSH per-connection server daemon (10.0.0.1:39418). Sep 8 23:47:24.868930 systemd-logind[1482]: Removed session 13. Sep 8 23:47:24.929313 sshd[3504]: Accepted publickey for core from 10.0.0.1 port 39418 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:47:24.930667 sshd-session[3504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:47:24.935085 systemd-logind[1482]: New session 14 of user core. Sep 8 23:47:24.948599 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 8 23:47:25.078953 sshd[3507]: Connection closed by 10.0.0.1 port 39418 Sep 8 23:47:25.079418 sshd-session[3504]: pam_unix(sshd:session): session closed for user core Sep 8 23:47:25.082810 systemd[1]: sshd@13-10.0.0.103:22-10.0.0.1:39418.service: Deactivated successfully. Sep 8 23:47:25.084502 systemd[1]: session-14.scope: Deactivated successfully. Sep 8 23:47:25.085142 systemd-logind[1482]: Session 14 logged out. Waiting for processes to exit. Sep 8 23:47:25.086105 systemd-logind[1482]: Removed session 14. 
Sep 8 23:47:25.183901 kubelet[2638]: E0908 23:47:25.183798 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:47:25.185384 kubelet[2638]: E0908 23:47:25.184496 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:47:25.218766 kubelet[2638]: I0908 23:47:25.218669 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bvlld" podStartSLOduration=6.130531681 podStartE2EDuration="46.218612163s" podCreationTimestamp="2025-09-08 23:46:39 +0000 UTC" firstStartedPulling="2025-09-08 23:46:39.859271029 +0000 UTC m=+5.944600707" lastFinishedPulling="2025-09-08 23:47:19.947351511 +0000 UTC m=+46.032681189" observedRunningTime="2025-09-08 23:47:25.218382878 +0000 UTC m=+51.303712556" watchObservedRunningTime="2025-09-08 23:47:25.218612163 +0000 UTC m=+51.303941841"
Sep 8 23:47:26.185929 kubelet[2638]: E0908 23:47:26.185868 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:47:27.159086 systemd-networkd[1434]: cilium_host: Link UP
Sep 8 23:47:27.159195 systemd-networkd[1434]: cilium_net: Link UP
Sep 8 23:47:27.159311 systemd-networkd[1434]: cilium_host: Gained carrier
Sep 8 23:47:27.159436 systemd-networkd[1434]: cilium_net: Gained carrier
Sep 8 23:47:27.189846 kubelet[2638]: E0908 23:47:27.189488 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:47:27.273991 systemd-networkd[1434]: cilium_vxlan: Link UP
Sep 8 23:47:27.273997 systemd-networkd[1434]: cilium_vxlan: Gained carrier
Sep 8 23:47:27.535383 kernel: NET: Registered PF_ALG protocol family
Sep 8 23:47:27.773416 systemd-networkd[1434]: cilium_host: Gained IPv6LL
Sep 8 23:47:27.836509 systemd-networkd[1434]: cilium_net: Gained IPv6LL
Sep 8 23:47:28.087730 systemd-networkd[1434]: lxc_health: Link UP
Sep 8 23:47:28.099075 systemd-networkd[1434]: lxc_health: Gained carrier
Sep 8 23:47:28.191386 kubelet[2638]: E0908 23:47:28.191341 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:47:29.116683 systemd-networkd[1434]: cilium_vxlan: Gained IPv6LL
Sep 8 23:47:29.373500 systemd-networkd[1434]: lxc_health: Gained IPv6LL
Sep 8 23:47:29.722780 kubelet[2638]: E0908 23:47:29.721858 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:47:30.096420 systemd[1]: Started sshd@14-10.0.0.103:22-10.0.0.1:33746.service - OpenSSH per-connection server daemon (10.0.0.1:33746).
Sep 8 23:47:30.155868 sshd[3885]: Accepted publickey for core from 10.0.0.1 port 33746 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic
Sep 8 23:47:30.157583 sshd-session[3885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:47:30.161378 systemd-logind[1482]: New session 15 of user core.
Sep 8 23:47:30.166513 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 8 23:47:30.194913 kubelet[2638]: E0908 23:47:30.194889 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:47:30.293281 sshd[3888]: Connection closed by 10.0.0.1 port 33746
Sep 8 23:47:30.293959 sshd-session[3885]: pam_unix(sshd:session): session closed for user core
Sep 8 23:47:30.299876 systemd-logind[1482]: Session 15 logged out. Waiting for processes to exit.
Sep 8 23:47:30.300022 systemd[1]: sshd@14-10.0.0.103:22-10.0.0.1:33746.service: Deactivated successfully.
Sep 8 23:47:30.302007 systemd[1]: session-15.scope: Deactivated successfully.
Sep 8 23:47:30.305334 systemd-logind[1482]: Removed session 15.
Sep 8 23:47:35.309258 systemd[1]: Started sshd@15-10.0.0.103:22-10.0.0.1:33756.service - OpenSSH per-connection server daemon (10.0.0.1:33756).
Sep 8 23:47:35.388505 sshd[3912]: Accepted publickey for core from 10.0.0.1 port 33756 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic
Sep 8 23:47:35.390027 sshd-session[3912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:47:35.394183 systemd-logind[1482]: New session 16 of user core.
Sep 8 23:47:35.408528 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 8 23:47:35.525058 sshd[3915]: Connection closed by 10.0.0.1 port 33756
Sep 8 23:47:35.526301 sshd-session[3912]: pam_unix(sshd:session): session closed for user core
Sep 8 23:47:35.539624 systemd[1]: sshd@15-10.0.0.103:22-10.0.0.1:33756.service: Deactivated successfully.
Sep 8 23:47:35.542696 systemd[1]: session-16.scope: Deactivated successfully.
Sep 8 23:47:35.543742 systemd-logind[1482]: Session 16 logged out. Waiting for processes to exit.
Sep 8 23:47:35.545823 systemd-logind[1482]: Removed session 16.
Sep 8 23:47:35.547591 systemd[1]: Started sshd@16-10.0.0.103:22-10.0.0.1:33768.service - OpenSSH per-connection server daemon (10.0.0.1:33768).
Sep 8 23:47:35.612350 sshd[3929]: Accepted publickey for core from 10.0.0.1 port 33768 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic
Sep 8 23:47:35.613755 sshd-session[3929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:47:35.618287 systemd-logind[1482]: New session 17 of user core.
Sep 8 23:47:35.637559 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 8 23:47:35.821474 sshd[3932]: Connection closed by 10.0.0.1 port 33768
Sep 8 23:47:35.821700 sshd-session[3929]: pam_unix(sshd:session): session closed for user core
Sep 8 23:47:35.833222 systemd[1]: sshd@16-10.0.0.103:22-10.0.0.1:33768.service: Deactivated successfully.
Sep 8 23:47:35.835187 systemd[1]: session-17.scope: Deactivated successfully.
Sep 8 23:47:35.837162 systemd-logind[1482]: Session 17 logged out. Waiting for processes to exit.
Sep 8 23:47:35.839979 systemd[1]: Started sshd@17-10.0.0.103:22-10.0.0.1:33784.service - OpenSSH per-connection server daemon (10.0.0.1:33784).
Sep 8 23:47:35.840528 systemd-logind[1482]: Removed session 17.
Sep 8 23:47:35.902577 sshd[3944]: Accepted publickey for core from 10.0.0.1 port 33784 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic
Sep 8 23:47:35.903798 sshd-session[3944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:47:35.907769 systemd-logind[1482]: New session 18 of user core.
Sep 8 23:47:35.917562 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 8 23:47:36.498410 sshd[3947]: Connection closed by 10.0.0.1 port 33784
Sep 8 23:47:36.498562 sshd-session[3944]: pam_unix(sshd:session): session closed for user core
Sep 8 23:47:36.506800 systemd[1]: sshd@17-10.0.0.103:22-10.0.0.1:33784.service: Deactivated successfully.
Sep 8 23:47:36.510139 systemd[1]: session-18.scope: Deactivated successfully.
Sep 8 23:47:36.511562 systemd-logind[1482]: Session 18 logged out. Waiting for processes to exit.
Sep 8 23:47:36.515946 systemd[1]: Started sshd@18-10.0.0.103:22-10.0.0.1:33790.service - OpenSSH per-connection server daemon (10.0.0.1:33790).
Sep 8 23:47:36.518390 systemd-logind[1482]: Removed session 18.
Sep 8 23:47:36.575850 sshd[3969]: Accepted publickey for core from 10.0.0.1 port 33790 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic
Sep 8 23:47:36.577156 sshd-session[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:47:36.581616 systemd-logind[1482]: New session 19 of user core.
Sep 8 23:47:36.591515 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 8 23:47:36.808694 sshd[3972]: Connection closed by 10.0.0.1 port 33790
Sep 8 23:47:36.810356 sshd-session[3969]: pam_unix(sshd:session): session closed for user core
Sep 8 23:47:36.819342 systemd[1]: sshd@18-10.0.0.103:22-10.0.0.1:33790.service: Deactivated successfully.
Sep 8 23:47:36.821160 systemd[1]: session-19.scope: Deactivated successfully.
Sep 8 23:47:36.822041 systemd-logind[1482]: Session 19 logged out. Waiting for processes to exit.
Sep 8 23:47:36.826110 systemd[1]: Started sshd@19-10.0.0.103:22-10.0.0.1:33804.service - OpenSSH per-connection server daemon (10.0.0.1:33804).
Sep 8 23:47:36.827005 systemd-logind[1482]: Removed session 19.
Sep 8 23:47:36.886940 sshd[3984]: Accepted publickey for core from 10.0.0.1 port 33804 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic
Sep 8 23:47:36.888330 sshd-session[3984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:47:36.892158 systemd-logind[1482]: New session 20 of user core.
Sep 8 23:47:36.902546 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 8 23:47:37.010745 sshd[3987]: Connection closed by 10.0.0.1 port 33804
Sep 8 23:47:37.011090 sshd-session[3984]: pam_unix(sshd:session): session closed for user core
Sep 8 23:47:37.015833 systemd[1]: sshd@19-10.0.0.103:22-10.0.0.1:33804.service: Deactivated successfully.
Sep 8 23:47:37.019091 systemd[1]: session-20.scope: Deactivated successfully.
Sep 8 23:47:37.020084 systemd-logind[1482]: Session 20 logged out. Waiting for processes to exit.
Sep 8 23:47:37.021076 systemd-logind[1482]: Removed session 20.
Sep 8 23:47:42.033643 systemd[1]: Started sshd@20-10.0.0.103:22-10.0.0.1:58044.service - OpenSSH per-connection server daemon (10.0.0.1:58044).
Sep 8 23:47:42.092847 sshd[4009]: Accepted publickey for core from 10.0.0.1 port 58044 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic
Sep 8 23:47:42.094042 sshd-session[4009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:47:42.098700 systemd-logind[1482]: New session 21 of user core.
Sep 8 23:47:42.107572 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 8 23:47:42.214456 sshd[4012]: Connection closed by 10.0.0.1 port 58044
Sep 8 23:47:42.215058 sshd-session[4009]: pam_unix(sshd:session): session closed for user core
Sep 8 23:47:42.219872 systemd[1]: sshd@20-10.0.0.103:22-10.0.0.1:58044.service: Deactivated successfully.
Sep 8 23:47:42.222045 systemd[1]: session-21.scope: Deactivated successfully.
Sep 8 23:47:42.222853 systemd-logind[1482]: Session 21 logged out. Waiting for processes to exit.
Sep 8 23:47:42.224040 systemd-logind[1482]: Removed session 21.
Sep 8 23:47:47.231305 systemd[1]: Started sshd@21-10.0.0.103:22-10.0.0.1:58054.service - OpenSSH per-connection server daemon (10.0.0.1:58054).
Sep 8 23:47:47.317839 sshd[4025]: Accepted publickey for core from 10.0.0.1 port 58054 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic
Sep 8 23:47:47.319325 sshd-session[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:47:47.327588 systemd-logind[1482]: New session 22 of user core.
Sep 8 23:47:47.342597 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 8 23:47:47.468481 sshd[4028]: Connection closed by 10.0.0.1 port 58054
Sep 8 23:47:47.471537 sshd-session[4025]: pam_unix(sshd:session): session closed for user core
Sep 8 23:47:47.475014 systemd[1]: sshd@21-10.0.0.103:22-10.0.0.1:58054.service: Deactivated successfully.
Sep 8 23:47:47.476693 systemd[1]: session-22.scope: Deactivated successfully.
Sep 8 23:47:47.477709 systemd-logind[1482]: Session 22 logged out. Waiting for processes to exit.
Sep 8 23:47:47.480038 systemd-logind[1482]: Removed session 22.
Sep 8 23:47:52.004704 kubelet[2638]: E0908 23:47:52.004664 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:47:52.489647 systemd[1]: Started sshd@22-10.0.0.103:22-10.0.0.1:50332.service - OpenSSH per-connection server daemon (10.0.0.1:50332).
Sep 8 23:47:52.556214 sshd[4042]: Accepted publickey for core from 10.0.0.1 port 50332 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic
Sep 8 23:47:52.556040 sshd-session[4042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:47:52.560429 systemd-logind[1482]: New session 23 of user core.
Sep 8 23:47:52.574543 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 8 23:47:52.696953 sshd[4045]: Connection closed by 10.0.0.1 port 50332
Sep 8 23:47:52.698319 sshd-session[4042]: pam_unix(sshd:session): session closed for user core
Sep 8 23:47:52.705576 systemd[1]: sshd@22-10.0.0.103:22-10.0.0.1:50332.service: Deactivated successfully.
Sep 8 23:47:52.707688 systemd[1]: session-23.scope: Deactivated successfully.
Sep 8 23:47:52.709412 systemd-logind[1482]: Session 23 logged out. Waiting for processes to exit.
Sep 8 23:47:52.711092 systemd-logind[1482]: Removed session 23.
Sep 8 23:47:52.712928 systemd[1]: Started sshd@23-10.0.0.103:22-10.0.0.1:50346.service - OpenSSH per-connection server daemon (10.0.0.1:50346).
Sep 8 23:47:52.778627 sshd[4058]: Accepted publickey for core from 10.0.0.1 port 50346 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic
Sep 8 23:47:52.780867 sshd-session[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:47:52.786093 systemd-logind[1482]: New session 24 of user core.
Sep 8 23:47:52.797535 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 8 23:47:53.004810 kubelet[2638]: E0908 23:47:53.004771 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:47:54.004518 kubelet[2638]: E0908 23:47:54.004479 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:47:54.951376 containerd[1500]: time="2025-09-08T23:47:54.951308435Z" level=info msg="StopContainer for \"c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518\" with timeout 30 (s)"
Sep 8 23:47:54.952149 containerd[1500]: time="2025-09-08T23:47:54.951927069Z" level=info msg="Stop container \"c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518\" with signal terminated"
Sep 8 23:47:54.970348 systemd[1]: cri-containerd-c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518.scope: Deactivated successfully.
Sep 8 23:47:54.976847 containerd[1500]: time="2025-09-08T23:47:54.975346218Z" level=info msg="received exit event container_id:\"c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518\" id:\"c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518\" pid:3336 exited_at:{seconds:1757375274 nanos:971917849}"
Sep 8 23:47:54.980044 containerd[1500]: time="2025-09-08T23:47:54.979881777Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518\" id:\"c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518\" pid:3336 exited_at:{seconds:1757375274 nanos:971917849}"
Sep 8 23:47:54.988859 containerd[1500]: time="2025-09-08T23:47:54.988810177Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 8 23:47:54.993205 containerd[1500]: time="2025-09-08T23:47:54.993162218Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4\" id:\"0c18f88aa43c3b9cc45bacdc021acd9085506b0322a2268996afdedb5d87c98a\" pid:4088 exited_at:{seconds:1757375274 nanos:992723222}"
Sep 8 23:47:54.995911 containerd[1500]: time="2025-09-08T23:47:54.995877274Z" level=info msg="StopContainer for \"12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4\" with timeout 2 (s)"
Sep 8 23:47:54.996508 containerd[1500]: time="2025-09-08T23:47:54.996411869Z" level=info msg="Stop container \"12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4\" with signal terminated"
Sep 8 23:47:55.000258 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518-rootfs.mount: Deactivated successfully.
Sep 8 23:47:55.008130 systemd-networkd[1434]: lxc_health: Link DOWN
Sep 8 23:47:55.008137 systemd-networkd[1434]: lxc_health: Lost carrier
Sep 8 23:47:55.015215 containerd[1500]: time="2025-09-08T23:47:55.015178229Z" level=info msg="StopContainer for \"c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518\" returns successfully"
Sep 8 23:47:55.016086 containerd[1500]: time="2025-09-08T23:47:55.016053102Z" level=info msg="StopPodSandbox for \"4470b57a6bee25bd4562ef924d2d2a6540fd175541974bf3427ec05b311458f9\""
Sep 8 23:47:55.021826 containerd[1500]: time="2025-09-08T23:47:55.021770734Z" level=info msg="Container to stop \"c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 8 23:47:55.023611 systemd[1]: cri-containerd-12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4.scope: Deactivated successfully.
Sep 8 23:47:55.024184 systemd[1]: cri-containerd-12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4.scope: Consumed 5.920s CPU time, 119.9M memory peak, 132K read from disk, 12.9M written to disk.
Sep 8 23:47:55.024630 containerd[1500]: time="2025-09-08T23:47:55.024297593Z" level=info msg="received exit event container_id:\"12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4\" id:\"12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4\" pid:3369 exited_at:{seconds:1757375275 nanos:24127154}"
Sep 8 23:47:55.025524 containerd[1500]: time="2025-09-08T23:47:55.025309384Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4\" id:\"12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4\" pid:3369 exited_at:{seconds:1757375275 nanos:24127154}"
Sep 8 23:47:55.031946 systemd[1]: cri-containerd-4470b57a6bee25bd4562ef924d2d2a6540fd175541974bf3427ec05b311458f9.scope: Deactivated successfully.
Sep 8 23:47:55.038377 containerd[1500]: time="2025-09-08T23:47:55.038299916Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4470b57a6bee25bd4562ef924d2d2a6540fd175541974bf3427ec05b311458f9\" id:\"4470b57a6bee25bd4562ef924d2d2a6540fd175541974bf3427ec05b311458f9\" pid:2854 exit_status:137 exited_at:{seconds:1757375275 nanos:38044038}"
Sep 8 23:47:55.061451 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4-rootfs.mount: Deactivated successfully.
Sep 8 23:47:55.072633 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4470b57a6bee25bd4562ef924d2d2a6540fd175541974bf3427ec05b311458f9-rootfs.mount: Deactivated successfully.
Sep 8 23:47:55.074877 containerd[1500]: time="2025-09-08T23:47:55.074836011Z" level=info msg="shim disconnected" id=4470b57a6bee25bd4562ef924d2d2a6540fd175541974bf3427ec05b311458f9 namespace=k8s.io
Sep 8 23:47:55.074984 containerd[1500]: time="2025-09-08T23:47:55.074871931Z" level=warning msg="cleaning up after shim disconnected" id=4470b57a6bee25bd4562ef924d2d2a6540fd175541974bf3427ec05b311458f9 namespace=k8s.io
Sep 8 23:47:55.074984 containerd[1500]: time="2025-09-08T23:47:55.074903651Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 8 23:47:55.076588 containerd[1500]: time="2025-09-08T23:47:55.076492598Z" level=info msg="StopContainer for \"12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4\" returns successfully"
Sep 8 23:47:55.077450 containerd[1500]: time="2025-09-08T23:47:55.077153872Z" level=info msg="StopPodSandbox for \"26d7fb3c6855dbadb5bad29741976e6347d8b19d20fa28d3dd6c60746a5ae8be\""
Sep 8 23:47:55.077450 containerd[1500]: time="2025-09-08T23:47:55.077223031Z" level=info msg="Container to stop \"b3b96bcf77322c16dedb692515deccfefca0904fafc6fa8898284fcfbdfd239c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 8 23:47:55.077450 containerd[1500]: time="2025-09-08T23:47:55.077235031Z" level=info msg="Container to stop \"3e1390510624030f6bee1f3d8af92cdf75cbb85221dc672721af4c3d7065fb80\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 8 23:47:55.077450 containerd[1500]: time="2025-09-08T23:47:55.077243951Z" level=info msg="Container to stop \"3e282a8b52d91094c4783bccd6a41e37e3ccd24e3f7b343e42031fb84407ebad\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 8 23:47:55.077450 containerd[1500]: time="2025-09-08T23:47:55.077252351Z" level=info msg="Container to stop \"12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 8 23:47:55.077450 containerd[1500]: time="2025-09-08T23:47:55.077261111Z" level=info msg="Container to stop \"e8b2f5c811bb6f87fc5b3f1f4ab6cb7005ccfa30eca3ef015b59010a67450753\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 8 23:47:55.083243 systemd[1]: cri-containerd-26d7fb3c6855dbadb5bad29741976e6347d8b19d20fa28d3dd6c60746a5ae8be.scope: Deactivated successfully.
Sep 8 23:47:55.099608 containerd[1500]: time="2025-09-08T23:47:55.099552965Z" level=info msg="received exit event sandbox_id:\"4470b57a6bee25bd4562ef924d2d2a6540fd175541974bf3427ec05b311458f9\" exit_status:137 exited_at:{seconds:1757375275 nanos:38044038}"
Sep 8 23:47:55.101963 containerd[1500]: time="2025-09-08T23:47:55.099905282Z" level=info msg="TearDown network for sandbox \"4470b57a6bee25bd4562ef924d2d2a6540fd175541974bf3427ec05b311458f9\" successfully"
Sep 8 23:47:55.101963 containerd[1500]: time="2025-09-08T23:47:55.099927442Z" level=info msg="StopPodSandbox for \"4470b57a6bee25bd4562ef924d2d2a6540fd175541974bf3427ec05b311458f9\" returns successfully"
Sep 8 23:47:55.101963 containerd[1500]: time="2025-09-08T23:47:55.100472958Z" level=info msg="TaskExit event in podsandbox handler container_id:\"26d7fb3c6855dbadb5bad29741976e6347d8b19d20fa28d3dd6c60746a5ae8be\" id:\"26d7fb3c6855dbadb5bad29741976e6347d8b19d20fa28d3dd6c60746a5ae8be\" pid:2799 exit_status:137 exited_at:{seconds:1757375275 nanos:84485131}"
Sep 8 23:47:55.101122 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4470b57a6bee25bd4562ef924d2d2a6540fd175541974bf3427ec05b311458f9-shm.mount: Deactivated successfully.
Sep 8 23:47:55.106743 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26d7fb3c6855dbadb5bad29741976e6347d8b19d20fa28d3dd6c60746a5ae8be-rootfs.mount: Deactivated successfully.
Sep 8 23:47:55.111784 containerd[1500]: time="2025-09-08T23:47:55.111717984Z" level=info msg="received exit event sandbox_id:\"26d7fb3c6855dbadb5bad29741976e6347d8b19d20fa28d3dd6c60746a5ae8be\" exit_status:137 exited_at:{seconds:1757375275 nanos:84485131}"
Sep 8 23:47:55.112740 containerd[1500]: time="2025-09-08T23:47:55.112607576Z" level=info msg="shim disconnected" id=26d7fb3c6855dbadb5bad29741976e6347d8b19d20fa28d3dd6c60746a5ae8be namespace=k8s.io
Sep 8 23:47:55.112740 containerd[1500]: time="2025-09-08T23:47:55.112653456Z" level=warning msg="cleaning up after shim disconnected" id=26d7fb3c6855dbadb5bad29741976e6347d8b19d20fa28d3dd6c60746a5ae8be namespace=k8s.io
Sep 8 23:47:55.112740 containerd[1500]: time="2025-09-08T23:47:55.112686136Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 8 23:47:55.113513 containerd[1500]: time="2025-09-08T23:47:55.113477369Z" level=info msg="TearDown network for sandbox \"26d7fb3c6855dbadb5bad29741976e6347d8b19d20fa28d3dd6c60746a5ae8be\" successfully"
Sep 8 23:47:55.115464 containerd[1500]: time="2025-09-08T23:47:55.115422873Z" level=info msg="StopPodSandbox for \"26d7fb3c6855dbadb5bad29741976e6347d8b19d20fa28d3dd6c60746a5ae8be\" returns successfully"
Sep 8 23:47:55.246003 kubelet[2638]: I0908 23:47:55.245904 2638 scope.go:117] "RemoveContainer" containerID="c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518"
Sep 8 23:47:55.248428 containerd[1500]: time="2025-09-08T23:47:55.248382684Z" level=info msg="RemoveContainer for \"c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518\""
Sep 8 23:47:55.254261 containerd[1500]: time="2025-09-08T23:47:55.254224875Z" level=info msg="RemoveContainer for \"c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518\" returns successfully"
Sep 8 23:47:55.254545 kubelet[2638]: I0908 23:47:55.254515 2638 scope.go:117] "RemoveContainer" containerID="c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518"
Sep 8 23:47:55.254785 containerd[1500]: time="2025-09-08T23:47:55.254751591Z" level=error msg="ContainerStatus for \"c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518\": not found"
Sep 8 23:47:55.254925 kubelet[2638]: E0908 23:47:55.254896 2638 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518\": not found" containerID="c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518"
Sep 8 23:47:55.257070 kubelet[2638]: I0908 23:47:55.257046 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-cni-path\") pod \"0def7320-fc04-4b65-a072-e0e6d156f58b\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") "
Sep 8 23:47:55.257116 kubelet[2638]: I0908 23:47:55.257079 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-bpf-maps\") pod \"0def7320-fc04-4b65-a072-e0e6d156f58b\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") "
Sep 8 23:47:55.257116 kubelet[2638]: I0908 23:47:55.257106 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0def7320-fc04-4b65-a072-e0e6d156f58b-hubble-tls\") pod \"0def7320-fc04-4b65-a072-e0e6d156f58b\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") "
Sep 8 23:47:55.257175 kubelet[2638]: I0908 23:47:55.257126 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f85lh\" (UniqueName: \"kubernetes.io/projected/0def7320-fc04-4b65-a072-e0e6d156f58b-kube-api-access-f85lh\") pod \"0def7320-fc04-4b65-a072-e0e6d156f58b\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") "
Sep 8 23:47:55.257175 kubelet[2638]: I0908 23:47:55.257147 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0def7320-fc04-4b65-a072-e0e6d156f58b-clustermesh-secrets\") pod \"0def7320-fc04-4b65-a072-e0e6d156f58b\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") "
Sep 8 23:47:55.257175 kubelet[2638]: I0908 23:47:55.257164 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0def7320-fc04-4b65-a072-e0e6d156f58b-cilium-config-path\") pod \"0def7320-fc04-4b65-a072-e0e6d156f58b\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") "
Sep 8 23:47:55.257235 kubelet[2638]: I0908 23:47:55.257179 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-etc-cni-netd\") pod \"0def7320-fc04-4b65-a072-e0e6d156f58b\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") "
Sep 8 23:47:55.257235 kubelet[2638]: I0908 23:47:55.257195 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-host-proc-sys-net\") pod \"0def7320-fc04-4b65-a072-e0e6d156f58b\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") "
Sep 8 23:47:55.257235 kubelet[2638]: I0908 23:47:55.257219 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/deb9b826-dd37-4905-9c80-b248fedd5538-cilium-config-path\") pod \"deb9b826-dd37-4905-9c80-b248fedd5538\" (UID: \"deb9b826-dd37-4905-9c80-b248fedd5538\") "
Sep 8 23:47:55.257295 kubelet[2638]: I0908 23:47:55.257234 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-lib-modules\") pod \"0def7320-fc04-4b65-a072-e0e6d156f58b\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") "
Sep 8 23:47:55.257295 kubelet[2638]: I0908 23:47:55.257252 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-cilium-run\") pod \"0def7320-fc04-4b65-a072-e0e6d156f58b\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") "
Sep 8 23:47:55.257295 kubelet[2638]: I0908 23:47:55.257270 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8d89\" (UniqueName: \"kubernetes.io/projected/deb9b826-dd37-4905-9c80-b248fedd5538-kube-api-access-g8d89\") pod \"deb9b826-dd37-4905-9c80-b248fedd5538\" (UID: \"deb9b826-dd37-4905-9c80-b248fedd5538\") "
Sep 8 23:47:55.257295 kubelet[2638]: I0908 23:47:55.257290 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-hostproc\") pod \"0def7320-fc04-4b65-a072-e0e6d156f58b\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") "
Sep 8 23:47:55.257387 kubelet[2638]: I0908 23:47:55.257303 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-host-proc-sys-kernel\") pod \"0def7320-fc04-4b65-a072-e0e6d156f58b\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") "
Sep 8 23:47:55.257387 kubelet[2638]: I0908 23:47:55.257323 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-cilium-cgroup\") pod \"0def7320-fc04-4b65-a072-e0e6d156f58b\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") "
Sep 8 23:47:55.257387 kubelet[2638]: I0908 23:47:55.257338 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-xtables-lock\") pod \"0def7320-fc04-4b65-a072-e0e6d156f58b\" (UID: \"0def7320-fc04-4b65-a072-e0e6d156f58b\") "
Sep 8 23:47:55.270372 kubelet[2638]: I0908 23:47:55.257158 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-cni-path" (OuterVolumeSpecName: "cni-path") pod "0def7320-fc04-4b65-a072-e0e6d156f58b" (UID: "0def7320-fc04-4b65-a072-e0e6d156f58b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:47:55.270372 kubelet[2638]: I0908 23:47:55.257373 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0def7320-fc04-4b65-a072-e0e6d156f58b" (UID: "0def7320-fc04-4b65-a072-e0e6d156f58b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:47:55.270372 kubelet[2638]: I0908 23:47:55.257418 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0def7320-fc04-4b65-a072-e0e6d156f58b" (UID: "0def7320-fc04-4b65-a072-e0e6d156f58b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:47:55.270372 kubelet[2638]: I0908 23:47:55.262326 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deb9b826-dd37-4905-9c80-b248fedd5538-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "deb9b826-dd37-4905-9c80-b248fedd5538" (UID: "deb9b826-dd37-4905-9c80-b248fedd5538"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 8 23:47:55.270372 kubelet[2638]: I0908 23:47:55.262381 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0def7320-fc04-4b65-a072-e0e6d156f58b" (UID: "0def7320-fc04-4b65-a072-e0e6d156f58b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:47:55.270582 kubelet[2638]: I0908 23:47:55.262394 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0def7320-fc04-4b65-a072-e0e6d156f58b" (UID: "0def7320-fc04-4b65-a072-e0e6d156f58b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:47:55.270582 kubelet[2638]: I0908 23:47:55.270244 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0def7320-fc04-4b65-a072-e0e6d156f58b" (UID: "0def7320-fc04-4b65-a072-e0e6d156f58b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:47:55.270582 kubelet[2638]: I0908 23:47:55.270277 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0def7320-fc04-4b65-a072-e0e6d156f58b" (UID: "0def7320-fc04-4b65-a072-e0e6d156f58b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:47:55.271010 kubelet[2638]: I0908 23:47:55.270911 2638 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518"} err="failed to get container status \"c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518\": rpc error: code = NotFound desc = an error occurred when try to find container \"c0649dd6b5f832aecc694f6bf1c383468dd2d863d0bb027edb28446562f0e518\": not found"
Sep 8 23:47:55.271086 kubelet[2638]: I0908 23:47:55.271074 2638 scope.go:117] "RemoveContainer" containerID="12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4"
Sep 8 23:47:55.271238 kubelet[2638]: I0908 23:47:55.270200 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-hostproc" (OuterVolumeSpecName: "hostproc") pod "0def7320-fc04-4b65-a072-e0e6d156f58b" (UID: "0def7320-fc04-4b65-a072-e0e6d156f58b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:47:55.271341 kubelet[2638]: I0908 23:47:55.271327 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0def7320-fc04-4b65-a072-e0e6d156f58b" (UID: "0def7320-fc04-4b65-a072-e0e6d156f58b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:47:55.271613 kubelet[2638]: I0908 23:47:55.271594 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0def7320-fc04-4b65-a072-e0e6d156f58b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0def7320-fc04-4b65-a072-e0e6d156f58b" (UID: "0def7320-fc04-4b65-a072-e0e6d156f58b"). InnerVolumeSpecName "hubble-tls".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 8 23:47:55.271725 kubelet[2638]: I0908 23:47:55.271708 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0def7320-fc04-4b65-a072-e0e6d156f58b" (UID: "0def7320-fc04-4b65-a072-e0e6d156f58b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:47:55.271912 kubelet[2638]: I0908 23:47:55.271832 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0def7320-fc04-4b65-a072-e0e6d156f58b-kube-api-access-f85lh" (OuterVolumeSpecName: "kube-api-access-f85lh") pod "0def7320-fc04-4b65-a072-e0e6d156f58b" (UID: "0def7320-fc04-4b65-a072-e0e6d156f58b"). InnerVolumeSpecName "kube-api-access-f85lh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 8 23:47:55.272001 kubelet[2638]: I0908 23:47:55.271839 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deb9b826-dd37-4905-9c80-b248fedd5538-kube-api-access-g8d89" (OuterVolumeSpecName: "kube-api-access-g8d89") pod "deb9b826-dd37-4905-9c80-b248fedd5538" (UID: "deb9b826-dd37-4905-9c80-b248fedd5538"). InnerVolumeSpecName "kube-api-access-g8d89". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 8 23:47:55.272827 kubelet[2638]: I0908 23:47:55.272797 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0def7320-fc04-4b65-a072-e0e6d156f58b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0def7320-fc04-4b65-a072-e0e6d156f58b" (UID: "0def7320-fc04-4b65-a072-e0e6d156f58b"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 8 23:47:55.274483 containerd[1500]: time="2025-09-08T23:47:55.274449747Z" level=info msg="RemoveContainer for \"12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4\"" Sep 8 23:47:55.274930 kubelet[2638]: I0908 23:47:55.274899 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0def7320-fc04-4b65-a072-e0e6d156f58b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0def7320-fc04-4b65-a072-e0e6d156f58b" (UID: "0def7320-fc04-4b65-a072-e0e6d156f58b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 8 23:47:55.279422 containerd[1500]: time="2025-09-08T23:47:55.279385505Z" level=info msg="RemoveContainer for \"12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4\" returns successfully" Sep 8 23:47:55.279719 kubelet[2638]: I0908 23:47:55.279693 2638 scope.go:117] "RemoveContainer" containerID="3e282a8b52d91094c4783bccd6a41e37e3ccd24e3f7b343e42031fb84407ebad" Sep 8 23:47:55.281291 containerd[1500]: time="2025-09-08T23:47:55.281236370Z" level=info msg="RemoveContainer for \"3e282a8b52d91094c4783bccd6a41e37e3ccd24e3f7b343e42031fb84407ebad\"" Sep 8 23:47:55.285166 containerd[1500]: time="2025-09-08T23:47:55.285129337Z" level=info msg="RemoveContainer for \"3e282a8b52d91094c4783bccd6a41e37e3ccd24e3f7b343e42031fb84407ebad\" returns successfully" Sep 8 23:47:55.285431 kubelet[2638]: I0908 23:47:55.285401 2638 scope.go:117] "RemoveContainer" containerID="3e1390510624030f6bee1f3d8af92cdf75cbb85221dc672721af4c3d7065fb80" Sep 8 23:47:55.290705 containerd[1500]: time="2025-09-08T23:47:55.290675171Z" level=info msg="RemoveContainer for \"3e1390510624030f6bee1f3d8af92cdf75cbb85221dc672721af4c3d7065fb80\"" Sep 8 23:47:55.298434 containerd[1500]: time="2025-09-08T23:47:55.298358107Z" level=info msg="RemoveContainer for \"3e1390510624030f6bee1f3d8af92cdf75cbb85221dc672721af4c3d7065fb80\" 
returns successfully" Sep 8 23:47:55.298936 kubelet[2638]: I0908 23:47:55.298850 2638 scope.go:117] "RemoveContainer" containerID="e8b2f5c811bb6f87fc5b3f1f4ab6cb7005ccfa30eca3ef015b59010a67450753" Sep 8 23:47:55.300523 containerd[1500]: time="2025-09-08T23:47:55.300490769Z" level=info msg="RemoveContainer for \"e8b2f5c811bb6f87fc5b3f1f4ab6cb7005ccfa30eca3ef015b59010a67450753\"" Sep 8 23:47:55.303799 containerd[1500]: time="2025-09-08T23:47:55.303759462Z" level=info msg="RemoveContainer for \"e8b2f5c811bb6f87fc5b3f1f4ab6cb7005ccfa30eca3ef015b59010a67450753\" returns successfully" Sep 8 23:47:55.303980 kubelet[2638]: I0908 23:47:55.303959 2638 scope.go:117] "RemoveContainer" containerID="b3b96bcf77322c16dedb692515deccfefca0904fafc6fa8898284fcfbdfd239c" Sep 8 23:47:55.305594 containerd[1500]: time="2025-09-08T23:47:55.305563847Z" level=info msg="RemoveContainer for \"b3b96bcf77322c16dedb692515deccfefca0904fafc6fa8898284fcfbdfd239c\"" Sep 8 23:47:55.309129 containerd[1500]: time="2025-09-08T23:47:55.309077258Z" level=info msg="RemoveContainer for \"b3b96bcf77322c16dedb692515deccfefca0904fafc6fa8898284fcfbdfd239c\" returns successfully" Sep 8 23:47:55.309345 kubelet[2638]: I0908 23:47:55.309320 2638 scope.go:117] "RemoveContainer" containerID="12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4" Sep 8 23:47:55.309850 containerd[1500]: time="2025-09-08T23:47:55.309814452Z" level=error msg="ContainerStatus for \"12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4\": not found" Sep 8 23:47:55.310042 kubelet[2638]: E0908 23:47:55.310014 2638 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4\": not found" 
containerID="12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4" Sep 8 23:47:55.310100 kubelet[2638]: I0908 23:47:55.310049 2638 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4"} err="failed to get container status \"12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4\": rpc error: code = NotFound desc = an error occurred when try to find container \"12e54f22300f93a5c75da6305d199251d4c097141eb59fdd966d49af77e9fbd4\": not found" Sep 8 23:47:55.310100 kubelet[2638]: I0908 23:47:55.310072 2638 scope.go:117] "RemoveContainer" containerID="3e282a8b52d91094c4783bccd6a41e37e3ccd24e3f7b343e42031fb84407ebad" Sep 8 23:47:55.310331 containerd[1500]: time="2025-09-08T23:47:55.310286608Z" level=error msg="ContainerStatus for \"3e282a8b52d91094c4783bccd6a41e37e3ccd24e3f7b343e42031fb84407ebad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e282a8b52d91094c4783bccd6a41e37e3ccd24e3f7b343e42031fb84407ebad\": not found" Sep 8 23:47:55.310455 kubelet[2638]: E0908 23:47:55.310425 2638 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e282a8b52d91094c4783bccd6a41e37e3ccd24e3f7b343e42031fb84407ebad\": not found" containerID="3e282a8b52d91094c4783bccd6a41e37e3ccd24e3f7b343e42031fb84407ebad" Sep 8 23:47:55.310499 kubelet[2638]: I0908 23:47:55.310455 2638 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e282a8b52d91094c4783bccd6a41e37e3ccd24e3f7b343e42031fb84407ebad"} err="failed to get container status \"3e282a8b52d91094c4783bccd6a41e37e3ccd24e3f7b343e42031fb84407ebad\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e282a8b52d91094c4783bccd6a41e37e3ccd24e3f7b343e42031fb84407ebad\": not found" Sep 8 23:47:55.310499 
kubelet[2638]: I0908 23:47:55.310473 2638 scope.go:117] "RemoveContainer" containerID="3e1390510624030f6bee1f3d8af92cdf75cbb85221dc672721af4c3d7065fb80" Sep 8 23:47:55.310702 containerd[1500]: time="2025-09-08T23:47:55.310670524Z" level=error msg="ContainerStatus for \"3e1390510624030f6bee1f3d8af92cdf75cbb85221dc672721af4c3d7065fb80\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e1390510624030f6bee1f3d8af92cdf75cbb85221dc672721af4c3d7065fb80\": not found" Sep 8 23:47:55.310829 kubelet[2638]: E0908 23:47:55.310803 2638 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e1390510624030f6bee1f3d8af92cdf75cbb85221dc672721af4c3d7065fb80\": not found" containerID="3e1390510624030f6bee1f3d8af92cdf75cbb85221dc672721af4c3d7065fb80" Sep 8 23:47:55.310860 kubelet[2638]: I0908 23:47:55.310831 2638 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e1390510624030f6bee1f3d8af92cdf75cbb85221dc672721af4c3d7065fb80"} err="failed to get container status \"3e1390510624030f6bee1f3d8af92cdf75cbb85221dc672721af4c3d7065fb80\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e1390510624030f6bee1f3d8af92cdf75cbb85221dc672721af4c3d7065fb80\": not found" Sep 8 23:47:55.310860 kubelet[2638]: I0908 23:47:55.310846 2638 scope.go:117] "RemoveContainer" containerID="e8b2f5c811bb6f87fc5b3f1f4ab6cb7005ccfa30eca3ef015b59010a67450753" Sep 8 23:47:55.311014 containerd[1500]: time="2025-09-08T23:47:55.310983802Z" level=error msg="ContainerStatus for \"e8b2f5c811bb6f87fc5b3f1f4ab6cb7005ccfa30eca3ef015b59010a67450753\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e8b2f5c811bb6f87fc5b3f1f4ab6cb7005ccfa30eca3ef015b59010a67450753\": not found" Sep 8 23:47:55.311111 kubelet[2638]: E0908 23:47:55.311083 2638 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e8b2f5c811bb6f87fc5b3f1f4ab6cb7005ccfa30eca3ef015b59010a67450753\": not found" containerID="e8b2f5c811bb6f87fc5b3f1f4ab6cb7005ccfa30eca3ef015b59010a67450753" Sep 8 23:47:55.311180 kubelet[2638]: I0908 23:47:55.311116 2638 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e8b2f5c811bb6f87fc5b3f1f4ab6cb7005ccfa30eca3ef015b59010a67450753"} err="failed to get container status \"e8b2f5c811bb6f87fc5b3f1f4ab6cb7005ccfa30eca3ef015b59010a67450753\": rpc error: code = NotFound desc = an error occurred when try to find container \"e8b2f5c811bb6f87fc5b3f1f4ab6cb7005ccfa30eca3ef015b59010a67450753\": not found" Sep 8 23:47:55.311180 kubelet[2638]: I0908 23:47:55.311132 2638 scope.go:117] "RemoveContainer" containerID="b3b96bcf77322c16dedb692515deccfefca0904fafc6fa8898284fcfbdfd239c" Sep 8 23:47:55.311322 containerd[1500]: time="2025-09-08T23:47:55.311296159Z" level=error msg="ContainerStatus for \"b3b96bcf77322c16dedb692515deccfefca0904fafc6fa8898284fcfbdfd239c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b3b96bcf77322c16dedb692515deccfefca0904fafc6fa8898284fcfbdfd239c\": not found" Sep 8 23:47:55.311474 kubelet[2638]: E0908 23:47:55.311442 2638 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b3b96bcf77322c16dedb692515deccfefca0904fafc6fa8898284fcfbdfd239c\": not found" containerID="b3b96bcf77322c16dedb692515deccfefca0904fafc6fa8898284fcfbdfd239c" Sep 8 23:47:55.311513 kubelet[2638]: I0908 23:47:55.311469 2638 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b3b96bcf77322c16dedb692515deccfefca0904fafc6fa8898284fcfbdfd239c"} err="failed to get container status 
\"b3b96bcf77322c16dedb692515deccfefca0904fafc6fa8898284fcfbdfd239c\": rpc error: code = NotFound desc = an error occurred when try to find container \"b3b96bcf77322c16dedb692515deccfefca0904fafc6fa8898284fcfbdfd239c\": not found" Sep 8 23:47:55.362601 kubelet[2638]: I0908 23:47:55.362563 2638 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g8d89\" (UniqueName: \"kubernetes.io/projected/deb9b826-dd37-4905-9c80-b248fedd5538-kube-api-access-g8d89\") on node \"localhost\" DevicePath \"\"" Sep 8 23:47:55.362601 kubelet[2638]: I0908 23:47:55.362601 2638 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 8 23:47:55.362601 kubelet[2638]: I0908 23:47:55.362612 2638 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 8 23:47:55.362752 kubelet[2638]: I0908 23:47:55.362620 2638 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 8 23:47:55.362752 kubelet[2638]: I0908 23:47:55.362630 2638 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 8 23:47:55.362752 kubelet[2638]: I0908 23:47:55.362638 2638 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 8 23:47:55.362752 kubelet[2638]: I0908 23:47:55.362646 2638 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 8 23:47:55.362752 kubelet[2638]: I0908 23:47:55.362653 2638 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 8 23:47:55.362752 kubelet[2638]: I0908 23:47:55.362660 2638 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0def7320-fc04-4b65-a072-e0e6d156f58b-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 8 23:47:55.362752 kubelet[2638]: I0908 23:47:55.362669 2638 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f85lh\" (UniqueName: \"kubernetes.io/projected/0def7320-fc04-4b65-a072-e0e6d156f58b-kube-api-access-f85lh\") on node \"localhost\" DevicePath \"\"" Sep 8 23:47:55.362752 kubelet[2638]: I0908 23:47:55.362677 2638 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0def7320-fc04-4b65-a072-e0e6d156f58b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 8 23:47:55.362906 kubelet[2638]: I0908 23:47:55.362685 2638 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0def7320-fc04-4b65-a072-e0e6d156f58b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 8 23:47:55.362906 kubelet[2638]: I0908 23:47:55.362693 2638 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 8 23:47:55.362906 kubelet[2638]: I0908 23:47:55.362700 2638 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-host-proc-sys-net\") on node \"localhost\" 
DevicePath \"\"" Sep 8 23:47:55.362906 kubelet[2638]: I0908 23:47:55.362708 2638 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/deb9b826-dd37-4905-9c80-b248fedd5538-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 8 23:47:55.362906 kubelet[2638]: I0908 23:47:55.362717 2638 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0def7320-fc04-4b65-a072-e0e6d156f58b-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 8 23:47:55.547548 systemd[1]: Removed slice kubepods-besteffort-poddeb9b826_dd37_4905_9c80_b248fedd5538.slice - libcontainer container kubepods-besteffort-poddeb9b826_dd37_4905_9c80_b248fedd5538.slice. Sep 8 23:47:55.554912 systemd[1]: Removed slice kubepods-burstable-pod0def7320_fc04_4b65_a072_e0e6d156f58b.slice - libcontainer container kubepods-burstable-pod0def7320_fc04_4b65_a072_e0e6d156f58b.slice. Sep 8 23:47:55.556543 systemd[1]: kubepods-burstable-pod0def7320_fc04_4b65_a072_e0e6d156f58b.slice: Consumed 6.007s CPU time, 120.2M memory peak, 140K read from disk, 12.9M written to disk. Sep 8 23:47:55.999299 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-26d7fb3c6855dbadb5bad29741976e6347d8b19d20fa28d3dd6c60746a5ae8be-shm.mount: Deactivated successfully. Sep 8 23:47:55.999409 systemd[1]: var-lib-kubelet-pods-deb9b826\x2ddd37\x2d4905\x2d9c80\x2db248fedd5538-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg8d89.mount: Deactivated successfully. Sep 8 23:47:55.999473 systemd[1]: var-lib-kubelet-pods-0def7320\x2dfc04\x2d4b65\x2da072\x2de0e6d156f58b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df85lh.mount: Deactivated successfully. Sep 8 23:47:55.999519 systemd[1]: var-lib-kubelet-pods-0def7320\x2dfc04\x2d4b65\x2da072\x2de0e6d156f58b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 8 23:47:55.999563 systemd[1]: var-lib-kubelet-pods-0def7320\x2dfc04\x2d4b65\x2da072\x2de0e6d156f58b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 8 23:47:56.006544 kubelet[2638]: I0908 23:47:56.006510 2638 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0def7320-fc04-4b65-a072-e0e6d156f58b" path="/var/lib/kubelet/pods/0def7320-fc04-4b65-a072-e0e6d156f58b/volumes" Sep 8 23:47:56.007061 kubelet[2638]: I0908 23:47:56.007033 2638 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deb9b826-dd37-4905-9c80-b248fedd5538" path="/var/lib/kubelet/pods/deb9b826-dd37-4905-9c80-b248fedd5538/volumes" Sep 8 23:47:56.869933 sshd[4061]: Connection closed by 10.0.0.1 port 50346 Sep 8 23:47:56.869605 sshd-session[4058]: pam_unix(sshd:session): session closed for user core Sep 8 23:47:56.880750 systemd[1]: sshd@23-10.0.0.103:22-10.0.0.1:50346.service: Deactivated successfully. Sep 8 23:47:56.882522 systemd[1]: session-24.scope: Deactivated successfully. Sep 8 23:47:56.882717 systemd[1]: session-24.scope: Consumed 1.442s CPU time, 26M memory peak. Sep 8 23:47:56.883225 systemd-logind[1482]: Session 24 logged out. Waiting for processes to exit. Sep 8 23:47:56.885350 systemd[1]: Started sshd@24-10.0.0.103:22-10.0.0.1:50362.service - OpenSSH per-connection server daemon (10.0.0.1:50362). Sep 8 23:47:56.886947 systemd-logind[1482]: Removed session 24. Sep 8 23:47:56.948570 sshd[4213]: Accepted publickey for core from 10.0.0.1 port 50362 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:47:56.949923 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:47:56.954339 systemd-logind[1482]: New session 25 of user core. Sep 8 23:47:56.974521 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 8 23:47:58.737631 sshd[4216]: Connection closed by 10.0.0.1 port 50362 Sep 8 23:47:58.737555 sshd-session[4213]: pam_unix(sshd:session): session closed for user core Sep 8 23:47:58.746626 systemd[1]: sshd@24-10.0.0.103:22-10.0.0.1:50362.service: Deactivated successfully. Sep 8 23:47:58.749277 systemd[1]: session-25.scope: Deactivated successfully. Sep 8 23:47:58.750458 systemd[1]: session-25.scope: Consumed 1.680s CPU time, 26.1M memory peak. Sep 8 23:47:58.754752 systemd-logind[1482]: Session 25 logged out. Waiting for processes to exit. Sep 8 23:47:58.760822 systemd[1]: Started sshd@25-10.0.0.103:22-10.0.0.1:50374.service - OpenSSH per-connection server daemon (10.0.0.1:50374). Sep 8 23:47:58.762250 systemd-logind[1482]: Removed session 25. Sep 8 23:47:58.768031 kubelet[2638]: I0908 23:47:58.767995 2638 memory_manager.go:355] "RemoveStaleState removing state" podUID="deb9b826-dd37-4905-9c80-b248fedd5538" containerName="cilium-operator" Sep 8 23:47:58.768031 kubelet[2638]: I0908 23:47:58.768022 2638 memory_manager.go:355] "RemoveStaleState removing state" podUID="0def7320-fc04-4b65-a072-e0e6d156f58b" containerName="cilium-agent" Sep 8 23:47:58.782036 systemd[1]: Created slice kubepods-burstable-poddecec737_995c_406f_8e1b_5dac68ee4a93.slice - libcontainer container kubepods-burstable-poddecec737_995c_406f_8e1b_5dac68ee4a93.slice. Sep 8 23:47:58.814419 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 50374 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic Sep 8 23:47:58.815777 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:47:58.819981 systemd-logind[1482]: New session 26 of user core. Sep 8 23:47:58.834532 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 8 23:47:58.880636 kubelet[2638]: I0908 23:47:58.880595 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/decec737-995c-406f-8e1b-5dac68ee4a93-hostproc\") pod \"cilium-kppxx\" (UID: \"decec737-995c-406f-8e1b-5dac68ee4a93\") " pod="kube-system/cilium-kppxx" Sep 8 23:47:58.880636 kubelet[2638]: I0908 23:47:58.880642 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/decec737-995c-406f-8e1b-5dac68ee4a93-xtables-lock\") pod \"cilium-kppxx\" (UID: \"decec737-995c-406f-8e1b-5dac68ee4a93\") " pod="kube-system/cilium-kppxx" Sep 8 23:47:58.880766 kubelet[2638]: I0908 23:47:58.880678 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/decec737-995c-406f-8e1b-5dac68ee4a93-hubble-tls\") pod \"cilium-kppxx\" (UID: \"decec737-995c-406f-8e1b-5dac68ee4a93\") " pod="kube-system/cilium-kppxx" Sep 8 23:47:58.880766 kubelet[2638]: I0908 23:47:58.880696 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/decec737-995c-406f-8e1b-5dac68ee4a93-host-proc-sys-net\") pod \"cilium-kppxx\" (UID: \"decec737-995c-406f-8e1b-5dac68ee4a93\") " pod="kube-system/cilium-kppxx" Sep 8 23:47:58.880766 kubelet[2638]: I0908 23:47:58.880715 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/decec737-995c-406f-8e1b-5dac68ee4a93-cilium-ipsec-secrets\") pod \"cilium-kppxx\" (UID: \"decec737-995c-406f-8e1b-5dac68ee4a93\") " pod="kube-system/cilium-kppxx" Sep 8 23:47:58.880766 kubelet[2638]: I0908 23:47:58.880759 2638 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/decec737-995c-406f-8e1b-5dac68ee4a93-host-proc-sys-kernel\") pod \"cilium-kppxx\" (UID: \"decec737-995c-406f-8e1b-5dac68ee4a93\") " pod="kube-system/cilium-kppxx" Sep 8 23:47:58.881009 kubelet[2638]: I0908 23:47:58.880776 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz8gd\" (UniqueName: \"kubernetes.io/projected/decec737-995c-406f-8e1b-5dac68ee4a93-kube-api-access-rz8gd\") pod \"cilium-kppxx\" (UID: \"decec737-995c-406f-8e1b-5dac68ee4a93\") " pod="kube-system/cilium-kppxx" Sep 8 23:47:58.881009 kubelet[2638]: I0908 23:47:58.880795 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/decec737-995c-406f-8e1b-5dac68ee4a93-cilium-cgroup\") pod \"cilium-kppxx\" (UID: \"decec737-995c-406f-8e1b-5dac68ee4a93\") " pod="kube-system/cilium-kppxx" Sep 8 23:47:58.881009 kubelet[2638]: I0908 23:47:58.880830 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/decec737-995c-406f-8e1b-5dac68ee4a93-etc-cni-netd\") pod \"cilium-kppxx\" (UID: \"decec737-995c-406f-8e1b-5dac68ee4a93\") " pod="kube-system/cilium-kppxx" Sep 8 23:47:58.881009 kubelet[2638]: I0908 23:47:58.880849 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/decec737-995c-406f-8e1b-5dac68ee4a93-clustermesh-secrets\") pod \"cilium-kppxx\" (UID: \"decec737-995c-406f-8e1b-5dac68ee4a93\") " pod="kube-system/cilium-kppxx" Sep 8 23:47:58.881009 kubelet[2638]: I0908 23:47:58.880866 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/decec737-995c-406f-8e1b-5dac68ee4a93-bpf-maps\") pod \"cilium-kppxx\" (UID: \"decec737-995c-406f-8e1b-5dac68ee4a93\") " pod="kube-system/cilium-kppxx"
Sep 8 23:47:58.881009 kubelet[2638]: I0908 23:47:58.880895 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/decec737-995c-406f-8e1b-5dac68ee4a93-lib-modules\") pod \"cilium-kppxx\" (UID: \"decec737-995c-406f-8e1b-5dac68ee4a93\") " pod="kube-system/cilium-kppxx"
Sep 8 23:47:58.881167 kubelet[2638]: I0908 23:47:58.880917 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/decec737-995c-406f-8e1b-5dac68ee4a93-cilium-config-path\") pod \"cilium-kppxx\" (UID: \"decec737-995c-406f-8e1b-5dac68ee4a93\") " pod="kube-system/cilium-kppxx"
Sep 8 23:47:58.881167 kubelet[2638]: I0908 23:47:58.880935 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/decec737-995c-406f-8e1b-5dac68ee4a93-cilium-run\") pod \"cilium-kppxx\" (UID: \"decec737-995c-406f-8e1b-5dac68ee4a93\") " pod="kube-system/cilium-kppxx"
Sep 8 23:47:58.881167 kubelet[2638]: I0908 23:47:58.880977 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/decec737-995c-406f-8e1b-5dac68ee4a93-cni-path\") pod \"cilium-kppxx\" (UID: \"decec737-995c-406f-8e1b-5dac68ee4a93\") " pod="kube-system/cilium-kppxx"
Sep 8 23:47:58.883351 sshd[4231]: Connection closed by 10.0.0.1 port 50374
Sep 8 23:47:58.883826 sshd-session[4228]: pam_unix(sshd:session): session closed for user core
Sep 8 23:47:58.897725 systemd[1]: sshd@25-10.0.0.103:22-10.0.0.1:50374.service: Deactivated successfully.
Sep 8 23:47:58.900826 systemd[1]: session-26.scope: Deactivated successfully.
Sep 8 23:47:58.902144 systemd-logind[1482]: Session 26 logged out. Waiting for processes to exit.
Sep 8 23:47:58.904153 systemd[1]: Started sshd@26-10.0.0.103:22-10.0.0.1:50376.service - OpenSSH per-connection server daemon (10.0.0.1:50376).
Sep 8 23:47:58.905166 systemd-logind[1482]: Removed session 26.
Sep 8 23:47:58.968979 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 50376 ssh2: RSA SHA256:HeCgiWNNKJuNyxF8eI797w9VfyFOv21mB1ET+U9TBic
Sep 8 23:47:58.970345 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:47:58.974389 systemd-logind[1482]: New session 27 of user core.
Sep 8 23:47:58.984563 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 8 23:47:59.077085 kubelet[2638]: E0908 23:47:59.077028 2638 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 8 23:47:59.086574 kubelet[2638]: E0908 23:47:59.086541 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:47:59.087143 containerd[1500]: time="2025-09-08T23:47:59.087095947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kppxx,Uid:decec737-995c-406f-8e1b-5dac68ee4a93,Namespace:kube-system,Attempt:0,}"
Sep 8 23:47:59.106962 containerd[1500]: time="2025-09-08T23:47:59.106866551Z" level=info msg="connecting to shim d9c1d786d1a9add375c1f8337968a138d57adc38ade2e601d27d73117cd6547d" address="unix:///run/containerd/s/5b32f7ce6c799aed1957d9cad23c37dfd7eb7d2f12c47c44b76f7c4655487213" namespace=k8s.io protocol=ttrpc version=3
Sep 8 23:47:59.133594 systemd[1]: Started cri-containerd-d9c1d786d1a9add375c1f8337968a138d57adc38ade2e601d27d73117cd6547d.scope - libcontainer container d9c1d786d1a9add375c1f8337968a138d57adc38ade2e601d27d73117cd6547d.
Sep 8 23:47:59.154880 containerd[1500]: time="2025-09-08T23:47:59.154821268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kppxx,Uid:decec737-995c-406f-8e1b-5dac68ee4a93,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9c1d786d1a9add375c1f8337968a138d57adc38ade2e601d27d73117cd6547d\""
Sep 8 23:47:59.155784 kubelet[2638]: E0908 23:47:59.155763 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:47:59.158602 containerd[1500]: time="2025-09-08T23:47:59.158561646Z" level=info msg="CreateContainer within sandbox \"d9c1d786d1a9add375c1f8337968a138d57adc38ade2e601d27d73117cd6547d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 8 23:47:59.166314 containerd[1500]: time="2025-09-08T23:47:59.166256320Z" level=info msg="Container d4c04bfe6df875c3e6b28495f9a056694d655c769f49baa1b1a1bd9fa8ee1b9d: CDI devices from CRI Config.CDIDevices: []"
Sep 8 23:47:59.172357 containerd[1500]: time="2025-09-08T23:47:59.172297924Z" level=info msg="CreateContainer within sandbox \"d9c1d786d1a9add375c1f8337968a138d57adc38ade2e601d27d73117cd6547d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d4c04bfe6df875c3e6b28495f9a056694d655c769f49baa1b1a1bd9fa8ee1b9d\""
Sep 8 23:47:59.172989 containerd[1500]: time="2025-09-08T23:47:59.172930921Z" level=info msg="StartContainer for \"d4c04bfe6df875c3e6b28495f9a056694d655c769f49baa1b1a1bd9fa8ee1b9d\""
Sep 8 23:47:59.174207 containerd[1500]: time="2025-09-08T23:47:59.174169873Z" level=info msg="connecting to shim d4c04bfe6df875c3e6b28495f9a056694d655c769f49baa1b1a1bd9fa8ee1b9d" address="unix:///run/containerd/s/5b32f7ce6c799aed1957d9cad23c37dfd7eb7d2f12c47c44b76f7c4655487213" protocol=ttrpc version=3
Sep 8 23:47:59.194562 systemd[1]: Started cri-containerd-d4c04bfe6df875c3e6b28495f9a056694d655c769f49baa1b1a1bd9fa8ee1b9d.scope - libcontainer container d4c04bfe6df875c3e6b28495f9a056694d655c769f49baa1b1a1bd9fa8ee1b9d.
Sep 8 23:47:59.221603 containerd[1500]: time="2025-09-08T23:47:59.221563274Z" level=info msg="StartContainer for \"d4c04bfe6df875c3e6b28495f9a056694d655c769f49baa1b1a1bd9fa8ee1b9d\" returns successfully"
Sep 8 23:47:59.230177 systemd[1]: cri-containerd-d4c04bfe6df875c3e6b28495f9a056694d655c769f49baa1b1a1bd9fa8ee1b9d.scope: Deactivated successfully.
Sep 8 23:47:59.231477 containerd[1500]: time="2025-09-08T23:47:59.231443695Z" level=info msg="received exit event container_id:\"d4c04bfe6df875c3e6b28495f9a056694d655c769f49baa1b1a1bd9fa8ee1b9d\" id:\"d4c04bfe6df875c3e6b28495f9a056694d655c769f49baa1b1a1bd9fa8ee1b9d\" pid:4311 exited_at:{seconds:1757375279 nanos:230983498}"
Sep 8 23:47:59.231733 containerd[1500]: time="2025-09-08T23:47:59.231713614Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4c04bfe6df875c3e6b28495f9a056694d655c769f49baa1b1a1bd9fa8ee1b9d\" id:\"d4c04bfe6df875c3e6b28495f9a056694d655c769f49baa1b1a1bd9fa8ee1b9d\" pid:4311 exited_at:{seconds:1757375279 nanos:230983498}"
Sep 8 23:47:59.259863 kubelet[2638]: E0908 23:47:59.259831 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:48:00.263774 kubelet[2638]: E0908 23:48:00.262160 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:48:00.269905 containerd[1500]: time="2025-09-08T23:48:00.269178359Z" level=info msg="CreateContainer within sandbox \"d9c1d786d1a9add375c1f8337968a138d57adc38ade2e601d27d73117cd6547d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 8 23:48:00.279101 containerd[1500]: time="2025-09-08T23:48:00.278956107Z" level=info msg="Container 85525a0e741ec48d64a65c2f7fd1a6061cf7e657ff74c1332e399c1b26b87952: CDI devices from CRI Config.CDIDevices: []"
Sep 8 23:48:00.291101 containerd[1500]: time="2025-09-08T23:48:00.290909083Z" level=info msg="CreateContainer within sandbox \"d9c1d786d1a9add375c1f8337968a138d57adc38ade2e601d27d73117cd6547d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"85525a0e741ec48d64a65c2f7fd1a6061cf7e657ff74c1332e399c1b26b87952\""
Sep 8 23:48:00.291777 containerd[1500]: time="2025-09-08T23:48:00.291681559Z" level=info msg="StartContainer for \"85525a0e741ec48d64a65c2f7fd1a6061cf7e657ff74c1332e399c1b26b87952\""
Sep 8 23:48:00.292598 containerd[1500]: time="2025-09-08T23:48:00.292567034Z" level=info msg="connecting to shim 85525a0e741ec48d64a65c2f7fd1a6061cf7e657ff74c1332e399c1b26b87952" address="unix:///run/containerd/s/5b32f7ce6c799aed1957d9cad23c37dfd7eb7d2f12c47c44b76f7c4655487213" protocol=ttrpc version=3
Sep 8 23:48:00.318582 systemd[1]: Started cri-containerd-85525a0e741ec48d64a65c2f7fd1a6061cf7e657ff74c1332e399c1b26b87952.scope - libcontainer container 85525a0e741ec48d64a65c2f7fd1a6061cf7e657ff74c1332e399c1b26b87952.
Sep 8 23:48:00.358645 containerd[1500]: time="2025-09-08T23:48:00.358476522Z" level=info msg="StartContainer for \"85525a0e741ec48d64a65c2f7fd1a6061cf7e657ff74c1332e399c1b26b87952\" returns successfully"
Sep 8 23:48:00.362229 systemd[1]: cri-containerd-85525a0e741ec48d64a65c2f7fd1a6061cf7e657ff74c1332e399c1b26b87952.scope: Deactivated successfully.
Sep 8 23:48:00.363988 containerd[1500]: time="2025-09-08T23:48:00.363858733Z" level=info msg="received exit event container_id:\"85525a0e741ec48d64a65c2f7fd1a6061cf7e657ff74c1332e399c1b26b87952\" id:\"85525a0e741ec48d64a65c2f7fd1a6061cf7e657ff74c1332e399c1b26b87952\" pid:4357 exited_at:{seconds:1757375280 nanos:363617455}"
Sep 8 23:48:00.364212 containerd[1500]: time="2025-09-08T23:48:00.363976173Z" level=info msg="TaskExit event in podsandbox handler container_id:\"85525a0e741ec48d64a65c2f7fd1a6061cf7e657ff74c1332e399c1b26b87952\" id:\"85525a0e741ec48d64a65c2f7fd1a6061cf7e657ff74c1332e399c1b26b87952\" pid:4357 exited_at:{seconds:1757375280 nanos:363617455}"
Sep 8 23:48:00.381517 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85525a0e741ec48d64a65c2f7fd1a6061cf7e657ff74c1332e399c1b26b87952-rootfs.mount: Deactivated successfully.
Sep 8 23:48:01.266981 kubelet[2638]: E0908 23:48:01.266936 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:48:01.269467 containerd[1500]: time="2025-09-08T23:48:01.269420882Z" level=info msg="CreateContainer within sandbox \"d9c1d786d1a9add375c1f8337968a138d57adc38ade2e601d27d73117cd6547d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 8 23:48:01.288413 containerd[1500]: time="2025-09-08T23:48:01.286956038Z" level=info msg="Container e237aba9225fe9670e90b1e4704eb3e308f3922b766cac3e2e2cd2387856bce2: CDI devices from CRI Config.CDIDevices: []"
Sep 8 23:48:01.301385 containerd[1500]: time="2025-09-08T23:48:01.299754537Z" level=info msg="CreateContainer within sandbox \"d9c1d786d1a9add375c1f8337968a138d57adc38ade2e601d27d73117cd6547d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e237aba9225fe9670e90b1e4704eb3e308f3922b766cac3e2e2cd2387856bce2\""
Sep 8 23:48:01.302263 containerd[1500]: time="2025-09-08T23:48:01.302236485Z" level=info msg="StartContainer for \"e237aba9225fe9670e90b1e4704eb3e308f3922b766cac3e2e2cd2387856bce2\""
Sep 8 23:48:01.305553 containerd[1500]: time="2025-09-08T23:48:01.305523989Z" level=info msg="connecting to shim e237aba9225fe9670e90b1e4704eb3e308f3922b766cac3e2e2cd2387856bce2" address="unix:///run/containerd/s/5b32f7ce6c799aed1957d9cad23c37dfd7eb7d2f12c47c44b76f7c4655487213" protocol=ttrpc version=3
Sep 8 23:48:01.334593 systemd[1]: Started cri-containerd-e237aba9225fe9670e90b1e4704eb3e308f3922b766cac3e2e2cd2387856bce2.scope - libcontainer container e237aba9225fe9670e90b1e4704eb3e308f3922b766cac3e2e2cd2387856bce2.
Sep 8 23:48:01.409721 containerd[1500]: time="2025-09-08T23:48:01.409661089Z" level=info msg="StartContainer for \"e237aba9225fe9670e90b1e4704eb3e308f3922b766cac3e2e2cd2387856bce2\" returns successfully"
Sep 8 23:48:01.410611 systemd[1]: cri-containerd-e237aba9225fe9670e90b1e4704eb3e308f3922b766cac3e2e2cd2387856bce2.scope: Deactivated successfully.
Sep 8 23:48:01.412080 containerd[1500]: time="2025-09-08T23:48:01.411978558Z" level=info msg="received exit event container_id:\"e237aba9225fe9670e90b1e4704eb3e308f3922b766cac3e2e2cd2387856bce2\" id:\"e237aba9225fe9670e90b1e4704eb3e308f3922b766cac3e2e2cd2387856bce2\" pid:4403 exited_at:{seconds:1757375281 nanos:411540920}"
Sep 8 23:48:01.412215 containerd[1500]: time="2025-09-08T23:48:01.412181117Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e237aba9225fe9670e90b1e4704eb3e308f3922b766cac3e2e2cd2387856bce2\" id:\"e237aba9225fe9670e90b1e4704eb3e308f3922b766cac3e2e2cd2387856bce2\" pid:4403 exited_at:{seconds:1757375281 nanos:411540920}"
Sep 8 23:48:01.432018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e237aba9225fe9670e90b1e4704eb3e308f3922b766cac3e2e2cd2387856bce2-rootfs.mount: Deactivated successfully.
Sep 8 23:48:02.271087 kubelet[2638]: E0908 23:48:02.270980 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:48:02.274516 containerd[1500]: time="2025-09-08T23:48:02.274001047Z" level=info msg="CreateContainer within sandbox \"d9c1d786d1a9add375c1f8337968a138d57adc38ade2e601d27d73117cd6547d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 8 23:48:02.377399 containerd[1500]: time="2025-09-08T23:48:02.377267646Z" level=info msg="Container f5d12d6bd7dde5e24dd9f5380049dba902a0fb6e2e32cdef1bc32593ff926d0f: CDI devices from CRI Config.CDIDevices: []"
Sep 8 23:48:02.381858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1629044444.mount: Deactivated successfully.
Sep 8 23:48:02.388937 containerd[1500]: time="2025-09-08T23:48:02.388127239Z" level=info msg="CreateContainer within sandbox \"d9c1d786d1a9add375c1f8337968a138d57adc38ade2e601d27d73117cd6547d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f5d12d6bd7dde5e24dd9f5380049dba902a0fb6e2e32cdef1bc32593ff926d0f\""
Sep 8 23:48:02.389412 containerd[1500]: time="2025-09-08T23:48:02.389380514Z" level=info msg="StartContainer for \"f5d12d6bd7dde5e24dd9f5380049dba902a0fb6e2e32cdef1bc32593ff926d0f\""
Sep 8 23:48:02.390425 containerd[1500]: time="2025-09-08T23:48:02.390388270Z" level=info msg="connecting to shim f5d12d6bd7dde5e24dd9f5380049dba902a0fb6e2e32cdef1bc32593ff926d0f" address="unix:///run/containerd/s/5b32f7ce6c799aed1957d9cad23c37dfd7eb7d2f12c47c44b76f7c4655487213" protocol=ttrpc version=3
Sep 8 23:48:02.423560 systemd[1]: Started cri-containerd-f5d12d6bd7dde5e24dd9f5380049dba902a0fb6e2e32cdef1bc32593ff926d0f.scope - libcontainer container f5d12d6bd7dde5e24dd9f5380049dba902a0fb6e2e32cdef1bc32593ff926d0f.
Sep 8 23:48:02.446171 systemd[1]: cri-containerd-f5d12d6bd7dde5e24dd9f5380049dba902a0fb6e2e32cdef1bc32593ff926d0f.scope: Deactivated successfully.
Sep 8 23:48:02.446852 containerd[1500]: time="2025-09-08T23:48:02.446746589Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f5d12d6bd7dde5e24dd9f5380049dba902a0fb6e2e32cdef1bc32593ff926d0f\" id:\"f5d12d6bd7dde5e24dd9f5380049dba902a0fb6e2e32cdef1bc32593ff926d0f\" pid:4443 exited_at:{seconds:1757375282 nanos:446217271}"
Sep 8 23:48:02.448100 containerd[1500]: time="2025-09-08T23:48:02.448067744Z" level=info msg="received exit event container_id:\"f5d12d6bd7dde5e24dd9f5380049dba902a0fb6e2e32cdef1bc32593ff926d0f\" id:\"f5d12d6bd7dde5e24dd9f5380049dba902a0fb6e2e32cdef1bc32593ff926d0f\" pid:4443 exited_at:{seconds:1757375282 nanos:446217271}"
Sep 8 23:48:02.455865 containerd[1500]: time="2025-09-08T23:48:02.455755831Z" level=info msg="StartContainer for \"f5d12d6bd7dde5e24dd9f5380049dba902a0fb6e2e32cdef1bc32593ff926d0f\" returns successfully"
Sep 8 23:48:02.467926 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5d12d6bd7dde5e24dd9f5380049dba902a0fb6e2e32cdef1bc32593ff926d0f-rootfs.mount: Deactivated successfully.
Sep 8 23:48:03.276773 kubelet[2638]: E0908 23:48:03.276729 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:48:03.281177 containerd[1500]: time="2025-09-08T23:48:03.281129409Z" level=info msg="CreateContainer within sandbox \"d9c1d786d1a9add375c1f8337968a138d57adc38ade2e601d27d73117cd6547d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 8 23:48:03.340366 containerd[1500]: time="2025-09-08T23:48:03.340313907Z" level=info msg="Container ab0fbfd981057a706a21a394fd2782eaadaa0ae7f6f5e62355f42d0559e69c9e: CDI devices from CRI Config.CDIDevices: []"
Sep 8 23:48:03.348207 containerd[1500]: time="2025-09-08T23:48:03.348167557Z" level=info msg="CreateContainer within sandbox \"d9c1d786d1a9add375c1f8337968a138d57adc38ade2e601d27d73117cd6547d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ab0fbfd981057a706a21a394fd2782eaadaa0ae7f6f5e62355f42d0559e69c9e\""
Sep 8 23:48:03.348840 containerd[1500]: time="2025-09-08T23:48:03.348815555Z" level=info msg="StartContainer for \"ab0fbfd981057a706a21a394fd2782eaadaa0ae7f6f5e62355f42d0559e69c9e\""
Sep 8 23:48:03.349812 containerd[1500]: time="2025-09-08T23:48:03.349786791Z" level=info msg="connecting to shim ab0fbfd981057a706a21a394fd2782eaadaa0ae7f6f5e62355f42d0559e69c9e" address="unix:///run/containerd/s/5b32f7ce6c799aed1957d9cad23c37dfd7eb7d2f12c47c44b76f7c4655487213" protocol=ttrpc version=3
Sep 8 23:48:03.372527 systemd[1]: Started cri-containerd-ab0fbfd981057a706a21a394fd2782eaadaa0ae7f6f5e62355f42d0559e69c9e.scope - libcontainer container ab0fbfd981057a706a21a394fd2782eaadaa0ae7f6f5e62355f42d0559e69c9e.
Sep 8 23:48:03.408291 containerd[1500]: time="2025-09-08T23:48:03.408248732Z" level=info msg="StartContainer for \"ab0fbfd981057a706a21a394fd2782eaadaa0ae7f6f5e62355f42d0559e69c9e\" returns successfully"
Sep 8 23:48:03.464412 containerd[1500]: time="2025-09-08T23:48:03.464341961Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab0fbfd981057a706a21a394fd2782eaadaa0ae7f6f5e62355f42d0559e69c9e\" id:\"58d4e791be7ab5d0462a63d0c092ce52bb77180982dd028bd0b63973dfbc4517\" pid:4510 exited_at:{seconds:1757375283 nanos:463936722}"
Sep 8 23:48:03.674393 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 8 23:48:04.283074 kubelet[2638]: E0908 23:48:04.282174 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:48:05.284218 kubelet[2638]: E0908 23:48:05.284118 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:48:05.347222 containerd[1500]: time="2025-09-08T23:48:05.347123076Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab0fbfd981057a706a21a394fd2782eaadaa0ae7f6f5e62355f42d0559e69c9e\" id:\"d77c80cc0896c76c8f2ba46b4cd370e6e4da96a58a116d3c0cc084e8ceeec255\" pid:4669 exit_status:1 exited_at:{seconds:1757375285 nanos:346758797}"
Sep 8 23:48:06.431597 systemd-networkd[1434]: lxc_health: Link UP
Sep 8 23:48:06.438647 systemd-networkd[1434]: lxc_health: Gained carrier
Sep 8 23:48:07.088647 kubelet[2638]: E0908 23:48:07.088613 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:48:07.108593 kubelet[2638]: I0908 23:48:07.108529 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kppxx" podStartSLOduration=9.108511937 podStartE2EDuration="9.108511937s" podCreationTimestamp="2025-09-08 23:47:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:48:04.301088884 +0000 UTC m=+90.386418562" watchObservedRunningTime="2025-09-08 23:48:07.108511937 +0000 UTC m=+93.193841615"
Sep 8 23:48:07.287838 kubelet[2638]: E0908 23:48:07.287810 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:48:07.477557 containerd[1500]: time="2025-09-08T23:48:07.477433568Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab0fbfd981057a706a21a394fd2782eaadaa0ae7f6f5e62355f42d0559e69c9e\" id:\"578e76f14b4f599242a59926c7e8321c3bec93096b7b0242e313035a8ef5563f\" pid:5017 exited_at:{seconds:1757375287 nanos:477062528}"
Sep 8 23:48:08.004106 kubelet[2638]: E0908 23:48:08.004009 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:48:08.290444 kubelet[2638]: E0908 23:48:08.290411 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:48:08.349531 systemd-networkd[1434]: lxc_health: Gained IPv6LL
Sep 8 23:48:09.622746 containerd[1500]: time="2025-09-08T23:48:09.622704649Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab0fbfd981057a706a21a394fd2782eaadaa0ae7f6f5e62355f42d0559e69c9e\" id:\"132136d6d47cfc69848053c37ca0bd0a24e2f9c2939ffda183249bf4504bf27f\" pid:5050 exited_at:{seconds:1757375289 nanos:621953850}"
Sep 8 23:48:11.729657 containerd[1500]: time="2025-09-08T23:48:11.729264958Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab0fbfd981057a706a21a394fd2782eaadaa0ae7f6f5e62355f42d0559e69c9e\" id:\"a2fd7185fba6914d157726e2c4b9436e8f1c784a06a29f24afc4ace19cf07f0a\" pid:5080 exited_at:{seconds:1757375291 nanos:728848238}"
Sep 8 23:48:11.737450 sshd[4245]: Connection closed by 10.0.0.1 port 50376
Sep 8 23:48:11.738045 sshd-session[4238]: pam_unix(sshd:session): session closed for user core
Sep 8 23:48:11.741741 systemd-logind[1482]: Session 27 logged out. Waiting for processes to exit.
Sep 8 23:48:11.742095 systemd[1]: sshd@26-10.0.0.103:22-10.0.0.1:50376.service: Deactivated successfully.
Sep 8 23:48:11.744153 systemd[1]: session-27.scope: Deactivated successfully.
Sep 8 23:48:11.745305 systemd-logind[1482]: Removed session 27.