Sep 4 15:44:48.748859 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 4 15:44:48.748881 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Sep 4 14:32:27 -00 2025 Sep 4 15:44:48.748890 kernel: KASLR enabled Sep 4 15:44:48.748896 kernel: efi: EFI v2.7 by EDK II Sep 4 15:44:48.748902 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Sep 4 15:44:48.748908 kernel: random: crng init done Sep 4 15:44:48.748916 kernel: secureboot: Secure boot disabled Sep 4 15:44:48.748922 kernel: ACPI: Early table checksum verification disabled Sep 4 15:44:48.748930 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Sep 4 15:44:48.748936 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 4 15:44:48.748943 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 15:44:48.748949 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 15:44:48.748956 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 15:44:48.748963 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 15:44:48.748972 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 15:44:48.748979 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 15:44:48.748986 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 15:44:48.748992 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 15:44:48.749000 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 15:44:48.749007 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 4 15:44:48.749013 kernel: 
ACPI: Use ACPI SPCR as default console: No Sep 4 15:44:48.749020 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 4 15:44:48.749028 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Sep 4 15:44:48.749035 kernel: Zone ranges: Sep 4 15:44:48.749042 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 4 15:44:48.749048 kernel: DMA32 empty Sep 4 15:44:48.749055 kernel: Normal empty Sep 4 15:44:48.749062 kernel: Device empty Sep 4 15:44:48.749068 kernel: Movable zone start for each node Sep 4 15:44:48.749075 kernel: Early memory node ranges Sep 4 15:44:48.749082 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Sep 4 15:44:48.749089 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Sep 4 15:44:48.749096 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Sep 4 15:44:48.749103 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Sep 4 15:44:48.749111 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Sep 4 15:44:48.749124 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Sep 4 15:44:48.749131 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Sep 4 15:44:48.749138 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Sep 4 15:44:48.749144 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Sep 4 15:44:48.749151 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Sep 4 15:44:48.749173 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Sep 4 15:44:48.749181 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Sep 4 15:44:48.749188 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Sep 4 15:44:48.749196 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 4 15:44:48.749203 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 4 15:44:48.749211 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Sep 4 15:44:48.749218 kernel: psci: probing for conduit method from ACPI. 
Sep 4 15:44:48.749226 kernel: psci: PSCIv1.1 detected in firmware. Sep 4 15:44:48.749234 kernel: psci: Using standard PSCI v0.2 function IDs Sep 4 15:44:48.749242 kernel: psci: Trusted OS migration not required Sep 4 15:44:48.749249 kernel: psci: SMC Calling Convention v1.1 Sep 4 15:44:48.749257 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 4 15:44:48.749264 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Sep 4 15:44:48.749272 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Sep 4 15:44:48.749279 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 4 15:44:48.749287 kernel: Detected PIPT I-cache on CPU0 Sep 4 15:44:48.749294 kernel: CPU features: detected: GIC system register CPU interface Sep 4 15:44:48.749323 kernel: CPU features: detected: Spectre-v4 Sep 4 15:44:48.749331 kernel: CPU features: detected: Spectre-BHB Sep 4 15:44:48.749339 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 4 15:44:48.749347 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 4 15:44:48.749354 kernel: CPU features: detected: ARM erratum 1418040 Sep 4 15:44:48.749362 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 4 15:44:48.749369 kernel: alternatives: applying boot alternatives Sep 4 15:44:48.749378 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fa24154aac6dc1a5d38cdc5f4cdc1aea124b2960632298191d9d7d9a2320138a Sep 4 15:44:48.749385 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Sep 4 15:44:48.749393 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 15:44:48.749400 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 15:44:48.749408 kernel: Fallback order for Node 0: 0 Sep 4 15:44:48.749416 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Sep 4 15:44:48.749424 kernel: Policy zone: DMA Sep 4 15:44:48.749431 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 15:44:48.749439 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Sep 4 15:44:48.749446 kernel: software IO TLB: area num 4. Sep 4 15:44:48.749453 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Sep 4 15:44:48.749461 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Sep 4 15:44:48.749468 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 4 15:44:48.749475 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 15:44:48.749483 kernel: rcu: RCU event tracing is enabled. Sep 4 15:44:48.749491 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 4 15:44:48.749500 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 15:44:48.749507 kernel: Tracing variant of Tasks RCU enabled. Sep 4 15:44:48.749514 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 4 15:44:48.749522 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 4 15:44:48.749529 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 4 15:44:48.749536 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Sep 4 15:44:48.749543 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 4 15:44:48.749550 kernel: GICv3: 256 SPIs implemented Sep 4 15:44:48.749557 kernel: GICv3: 0 Extended SPIs implemented Sep 4 15:44:48.749565 kernel: Root IRQ handler: gic_handle_irq Sep 4 15:44:48.749572 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 4 15:44:48.749580 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Sep 4 15:44:48.749587 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 4 15:44:48.749595 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 4 15:44:48.749602 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Sep 4 15:44:48.749609 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Sep 4 15:44:48.749616 kernel: GICv3: using LPI property table @0x0000000040130000 Sep 4 15:44:48.749623 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Sep 4 15:44:48.749631 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 15:44:48.749638 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 15:44:48.749645 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 4 15:44:48.749653 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 4 15:44:48.749661 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 4 15:44:48.749668 kernel: arm-pv: using stolen time PV Sep 4 15:44:48.749676 kernel: Console: colour dummy device 80x25 Sep 4 15:44:48.749684 kernel: ACPI: Core revision 20240827 Sep 4 15:44:48.749691 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) Sep 4 15:44:48.749699 kernel: pid_max: default: 32768 minimum: 301 Sep 4 15:44:48.749706 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 4 15:44:48.749714 kernel: landlock: Up and running. Sep 4 15:44:48.749722 kernel: SELinux: Initializing. Sep 4 15:44:48.749730 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 15:44:48.749737 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 15:44:48.749745 kernel: rcu: Hierarchical SRCU implementation. Sep 4 15:44:48.749752 kernel: rcu: Max phase no-delay instances is 400. Sep 4 15:44:48.749760 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 4 15:44:48.749768 kernel: Remapping and enabling EFI services. Sep 4 15:44:48.749777 kernel: smp: Bringing up secondary CPUs ... Sep 4 15:44:48.749793 kernel: Detected PIPT I-cache on CPU1 Sep 4 15:44:48.749803 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 4 15:44:48.749812 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Sep 4 15:44:48.749820 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 15:44:48.749828 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 4 15:44:48.749836 kernel: Detected PIPT I-cache on CPU2 Sep 4 15:44:48.749844 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 4 15:44:48.749854 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Sep 4 15:44:48.749862 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 15:44:48.749870 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 4 15:44:48.749878 kernel: Detected PIPT I-cache on CPU3 Sep 4 15:44:48.749885 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 4 15:44:48.749893 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Sep 4 
15:44:48.749903 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 15:44:48.749910 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 4 15:44:48.749918 kernel: smp: Brought up 1 node, 4 CPUs Sep 4 15:44:48.749926 kernel: SMP: Total of 4 processors activated. Sep 4 15:44:48.749934 kernel: CPU: All CPU(s) started at EL1 Sep 4 15:44:48.749942 kernel: CPU features: detected: 32-bit EL0 Support Sep 4 15:44:48.749950 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 4 15:44:48.749958 kernel: CPU features: detected: Common not Private translations Sep 4 15:44:48.749967 kernel: CPU features: detected: CRC32 instructions Sep 4 15:44:48.749975 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 4 15:44:48.749983 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 4 15:44:48.749991 kernel: CPU features: detected: LSE atomic instructions Sep 4 15:44:48.749998 kernel: CPU features: detected: Privileged Access Never Sep 4 15:44:48.750006 kernel: CPU features: detected: RAS Extension Support Sep 4 15:44:48.750014 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 4 15:44:48.750023 kernel: alternatives: applying system-wide alternatives Sep 4 15:44:48.750031 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Sep 4 15:44:48.750040 kernel: Memory: 2424352K/2572288K available (11136K kernel code, 2436K rwdata, 9060K rodata, 39104K init, 1038K bss, 125600K reserved, 16384K cma-reserved) Sep 4 15:44:48.750048 kernel: devtmpfs: initialized Sep 4 15:44:48.750056 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 15:44:48.750064 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 4 15:44:48.750072 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 4 15:44:48.750081 kernel: 0 pages in range for non-PLT usage Sep 4 15:44:48.750092 
kernel: 508528 pages in range for PLT usage Sep 4 15:44:48.750100 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 15:44:48.750109 kernel: SMBIOS 3.0.0 present. Sep 4 15:44:48.750117 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Sep 4 15:44:48.750125 kernel: DMI: Memory slots populated: 1/1 Sep 4 15:44:48.750133 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 15:44:48.750142 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 4 15:44:48.750150 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 4 15:44:48.750186 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 4 15:44:48.750195 kernel: audit: initializing netlink subsys (disabled) Sep 4 15:44:48.750203 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1 Sep 4 15:44:48.750212 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 15:44:48.750219 kernel: cpuidle: using governor menu Sep 4 15:44:48.750230 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 4 15:44:48.750238 kernel: ASID allocator initialised with 32768 entries Sep 4 15:44:48.750247 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 15:44:48.750255 kernel: Serial: AMBA PL011 UART driver Sep 4 15:44:48.750262 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 15:44:48.750271 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 15:44:48.750279 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 4 15:44:48.750287 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 4 15:44:48.750296 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 15:44:48.750304 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 15:44:48.750312 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 4 15:44:48.750320 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 4 15:44:48.750328 kernel: ACPI: Added _OSI(Module Device) Sep 4 15:44:48.750336 kernel: ACPI: Added _OSI(Processor Device) Sep 4 15:44:48.750345 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 15:44:48.750354 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 4 15:44:48.750362 kernel: ACPI: Interpreter enabled Sep 4 15:44:48.750370 kernel: ACPI: Using GIC for interrupt routing Sep 4 15:44:48.750377 kernel: ACPI: MCFG table detected, 1 entries Sep 4 15:44:48.750401 kernel: ACPI: CPU0 has been hot-added Sep 4 15:44:48.750409 kernel: ACPI: CPU1 has been hot-added Sep 4 15:44:48.750418 kernel: ACPI: CPU2 has been hot-added Sep 4 15:44:48.750427 kernel: ACPI: CPU3 has been hot-added Sep 4 15:44:48.750435 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 4 15:44:48.750443 kernel: printk: legacy console [ttyAMA0] enabled Sep 4 15:44:48.750451 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 4 15:44:48.750599 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig 
ASPM ClockPM Segments MSI HPX-Type3] Sep 4 15:44:48.750685 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 4 15:44:48.750768 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 4 15:44:48.750861 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 4 15:44:48.750943 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 4 15:44:48.750953 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 4 15:44:48.750961 kernel: PCI host bridge to bus 0000:00 Sep 4 15:44:48.751053 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 4 15:44:48.751132 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 4 15:44:48.751228 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 4 15:44:48.751304 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 4 15:44:48.751400 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Sep 4 15:44:48.751490 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 4 15:44:48.751571 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Sep 4 15:44:48.751661 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Sep 4 15:44:48.751776 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Sep 4 15:44:48.751881 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Sep 4 15:44:48.751976 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Sep 4 15:44:48.752075 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Sep 4 15:44:48.752149 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 4 15:44:48.752234 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 4 15:44:48.752307 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] 
Sep 4 15:44:48.752317 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 4 15:44:48.752325 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 4 15:44:48.752333 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 4 15:44:48.752341 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 4 15:44:48.752351 kernel: iommu: Default domain type: Translated Sep 4 15:44:48.752359 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 4 15:44:48.752368 kernel: efivars: Registered efivars operations Sep 4 15:44:48.752375 kernel: vgaarb: loaded Sep 4 15:44:48.752383 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 4 15:44:48.752391 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 15:44:48.752399 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 15:44:48.752409 kernel: pnp: PnP ACPI init Sep 4 15:44:48.752499 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 4 15:44:48.752510 kernel: pnp: PnP ACPI: found 1 devices Sep 4 15:44:48.752518 kernel: NET: Registered PF_INET protocol family Sep 4 15:44:48.752526 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 15:44:48.752534 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 4 15:44:48.752542 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 15:44:48.752552 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 4 15:44:48.752560 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 4 15:44:48.752568 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 4 15:44:48.752576 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 15:44:48.752584 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 15:44:48.752592 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 
15:44:48.752600 kernel: PCI: CLS 0 bytes, default 64 Sep 4 15:44:48.752609 kernel: kvm [1]: HYP mode not available Sep 4 15:44:48.752617 kernel: Initialise system trusted keyrings Sep 4 15:44:48.752624 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 4 15:44:48.752632 kernel: Key type asymmetric registered Sep 4 15:44:48.752640 kernel: Asymmetric key parser 'x509' registered Sep 4 15:44:48.752649 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 4 15:44:48.752656 kernel: io scheduler mq-deadline registered Sep 4 15:44:48.752665 kernel: io scheduler kyber registered Sep 4 15:44:48.752674 kernel: io scheduler bfq registered Sep 4 15:44:48.752682 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 4 15:44:48.752690 kernel: ACPI: button: Power Button [PWRB] Sep 4 15:44:48.752698 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 4 15:44:48.752778 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 4 15:44:48.752795 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 15:44:48.752806 kernel: thunder_xcv, ver 1.0 Sep 4 15:44:48.752814 kernel: thunder_bgx, ver 1.0 Sep 4 15:44:48.752822 kernel: nicpf, ver 1.0 Sep 4 15:44:48.752830 kernel: nicvf, ver 1.0 Sep 4 15:44:48.752920 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 4 15:44:48.752997 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-04T15:44:48 UTC (1757000688) Sep 4 15:44:48.753007 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 4 15:44:48.753018 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Sep 4 15:44:48.753025 kernel: watchdog: NMI not fully supported Sep 4 15:44:48.753033 kernel: watchdog: Hard watchdog permanently disabled Sep 4 15:44:48.753041 kernel: NET: Registered PF_INET6 protocol family Sep 4 15:44:48.753049 kernel: Segment Routing with IPv6 Sep 4 15:44:48.753057 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 
15:44:48.753065 kernel: NET: Registered PF_PACKET protocol family Sep 4 15:44:48.753075 kernel: Key type dns_resolver registered Sep 4 15:44:48.753082 kernel: registered taskstats version 1 Sep 4 15:44:48.753090 kernel: Loading compiled-in X.509 certificates Sep 4 15:44:48.753098 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 5cbaeb2a956cf8364fe17c89324cc000891c1e4c' Sep 4 15:44:48.753106 kernel: Demotion targets for Node 0: null Sep 4 15:44:48.753114 kernel: Key type .fscrypt registered Sep 4 15:44:48.753122 kernel: Key type fscrypt-provisioning registered Sep 4 15:44:48.753131 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 4 15:44:48.753139 kernel: ima: Allocated hash algorithm: sha1 Sep 4 15:44:48.753147 kernel: ima: No architecture policies found Sep 4 15:44:48.753155 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 4 15:44:48.753170 kernel: clk: Disabling unused clocks Sep 4 15:44:48.753179 kernel: PM: genpd: Disabling unused power domains Sep 4 15:44:48.753186 kernel: Warning: unable to open an initial console. Sep 4 15:44:48.753197 kernel: Freeing unused kernel memory: 39104K Sep 4 15:44:48.753205 kernel: Run /init as init process Sep 4 15:44:48.753213 kernel: with arguments: Sep 4 15:44:48.753221 kernel: /init Sep 4 15:44:48.753228 kernel: with environment: Sep 4 15:44:48.753236 kernel: HOME=/ Sep 4 15:44:48.753244 kernel: TERM=linux Sep 4 15:44:48.753253 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 15:44:48.753262 systemd[1]: Successfully made /usr/ read-only. 
Sep 4 15:44:48.753273 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 4 15:44:48.753282 systemd[1]: Detected virtualization kvm. Sep 4 15:44:48.753290 systemd[1]: Detected architecture arm64. Sep 4 15:44:48.753299 systemd[1]: Running in initrd. Sep 4 15:44:48.753308 systemd[1]: No hostname configured, using default hostname. Sep 4 15:44:48.753316 systemd[1]: Hostname set to . Sep 4 15:44:48.753325 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Sep 4 15:44:48.753333 systemd[1]: Queued start job for default target initrd.target. Sep 4 15:44:48.753342 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 15:44:48.753351 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 15:44:48.753361 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 15:44:48.753369 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 15:44:48.753378 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 15:44:48.753404 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 15:44:48.753415 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 15:44:48.753424 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 15:44:48.753434 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Sep 4 15:44:48.753443 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 15:44:48.753452 systemd[1]: Reached target paths.target - Path Units. Sep 4 15:44:48.753460 systemd[1]: Reached target slices.target - Slice Units. Sep 4 15:44:48.753468 systemd[1]: Reached target swap.target - Swaps. Sep 4 15:44:48.753477 systemd[1]: Reached target timers.target - Timer Units. Sep 4 15:44:48.753485 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 15:44:48.753495 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 15:44:48.753503 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 15:44:48.753512 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 4 15:44:48.753521 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 15:44:48.753529 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 15:44:48.753538 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 15:44:48.753547 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 15:44:48.753556 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 15:44:48.753564 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 15:44:48.753573 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 15:44:48.753582 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 4 15:44:48.753590 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 15:44:48.753599 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 15:44:48.753608 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Sep 4 15:44:48.753617 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 15:44:48.753625 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 15:44:48.753634 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 15:44:48.753644 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 15:44:48.753652 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 15:44:48.753661 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 15:44:48.753686 systemd-journald[245]: Collecting audit messages is disabled. Sep 4 15:44:48.753707 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 15:44:48.753716 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 15:44:48.753725 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 15:44:48.753734 systemd-journald[245]: Journal started Sep 4 15:44:48.753754 systemd-journald[245]: Runtime Journal (/run/log/journal/99938f6d64f04f9d8f95821b8117982c) is 6M, max 48.5M, 42.4M free. Sep 4 15:44:48.741325 systemd-modules-load[246]: Inserted module 'overlay' Sep 4 15:44:48.756390 systemd-modules-load[246]: Inserted module 'br_netfilter' Sep 4 15:44:48.757697 kernel: Bridge firewalling registered Sep 4 15:44:48.757715 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 15:44:48.760268 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 15:44:48.763445 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 15:44:48.765559 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Sep 4 15:44:48.769697 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 15:44:48.776573 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 15:44:48.776609 systemd-tmpfiles[276]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 4 15:44:48.778500 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 15:44:48.782363 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 15:44:48.784106 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 15:44:48.787515 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 15:44:48.789540 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 15:44:48.810534 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fa24154aac6dc1a5d38cdc5f4cdc1aea124b2960632298191d9d7d9a2320138a Sep 4 15:44:48.823997 systemd-resolved[292]: Positive Trust Anchors: Sep 4 15:44:48.824015 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 15:44:48.824018 systemd-resolved[292]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Sep 4 15:44:48.824050 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 15:44:48.828962 systemd-resolved[292]: Defaulting to hostname 'linux'. Sep 4 15:44:48.829833 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 15:44:48.833191 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 15:44:48.883184 kernel: SCSI subsystem initialized Sep 4 15:44:48.887185 kernel: Loading iSCSI transport class v2.0-870. Sep 4 15:44:48.895204 kernel: iscsi: registered transport (tcp) Sep 4 15:44:48.907483 kernel: iscsi: registered transport (qla4xxx) Sep 4 15:44:48.907512 kernel: QLogic iSCSI HBA Driver Sep 4 15:44:48.923391 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 15:44:48.938495 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 15:44:48.939780 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 15:44:48.983741 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 15:44:48.985852 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Sep 4 15:44:49.046187 kernel: raid6: neonx8 gen() 15766 MB/s
Sep 4 15:44:49.063176 kernel: raid6: neonx4 gen() 15804 MB/s
Sep 4 15:44:49.080173 kernel: raid6: neonx2 gen() 13182 MB/s
Sep 4 15:44:49.097177 kernel: raid6: neonx1 gen() 10423 MB/s
Sep 4 15:44:49.114173 kernel: raid6: int64x8 gen() 6895 MB/s
Sep 4 15:44:49.131176 kernel: raid6: int64x4 gen() 7352 MB/s
Sep 4 15:44:49.148188 kernel: raid6: int64x2 gen() 6104 MB/s
Sep 4 15:44:49.165178 kernel: raid6: int64x1 gen() 5040 MB/s
Sep 4 15:44:49.165198 kernel: raid6: using algorithm neonx4 gen() 15804 MB/s
Sep 4 15:44:49.182180 kernel: raid6: .... xor() 12385 MB/s, rmw enabled
Sep 4 15:44:49.182194 kernel: raid6: using neon recovery algorithm
Sep 4 15:44:49.187268 kernel: xor: measuring software checksum speed
Sep 4 15:44:49.187290 kernel: 8regs : 21031 MB/sec
Sep 4 15:44:49.188343 kernel: 32regs : 21681 MB/sec
Sep 4 15:44:49.188357 kernel: arm64_neon : 28118 MB/sec
Sep 4 15:44:49.188367 kernel: xor: using function: arm64_neon (28118 MB/sec)
Sep 4 15:44:49.240187 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 15:44:49.248222 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 15:44:49.250484 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 15:44:49.276670 systemd-udevd[502]: Using default interface naming scheme 'v257'.
Sep 4 15:44:49.280810 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 15:44:49.282988 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 15:44:49.306536 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation
Sep 4 15:44:49.328397 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 15:44:49.330415 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 15:44:49.381421 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 15:44:49.384007 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 15:44:49.430872 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 4 15:44:49.431081 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 4 15:44:49.439405 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 15:44:49.439446 kernel: GPT:9289727 != 19775487
Sep 4 15:44:49.439457 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 15:44:49.440287 kernel: GPT:9289727 != 19775487
Sep 4 15:44:49.440307 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 15:44:49.441174 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 15:44:49.441804 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 15:44:49.441956 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 15:44:49.443962 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 15:44:49.446137 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 15:44:49.469109 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 15:44:49.476687 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 4 15:44:49.483857 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 4 15:44:49.485032 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 15:44:49.496314 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 4 15:44:49.497229 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 4 15:44:49.505139 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 15:44:49.506035 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 15:44:49.507677 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 15:44:49.509307 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 15:44:49.511404 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 15:44:49.512915 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 15:44:49.526351 disk-uuid[595]: Primary Header is updated.
Sep 4 15:44:49.526351 disk-uuid[595]: Secondary Entries is updated.
Sep 4 15:44:49.526351 disk-uuid[595]: Secondary Header is updated.
Sep 4 15:44:49.529377 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 15:44:49.532199 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 15:44:49.535173 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 15:44:50.538271 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 15:44:50.539213 disk-uuid[598]: The operation has completed successfully.
Sep 4 15:44:50.567383 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 15:44:50.567473 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 15:44:50.591888 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 15:44:50.617010 sh[615]: Success
Sep 4 15:44:50.629580 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 15:44:50.629620 kernel: device-mapper: uevent: version 1.0.3
Sep 4 15:44:50.629640 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 4 15:44:50.636343 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 4 15:44:50.660003 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 15:44:50.662506 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 15:44:50.683032 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 15:44:50.688604 kernel: BTRFS: device fsid d6826f11-765e-43ab-9425-5cf9fd7ef603 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (628)
Sep 4 15:44:50.688635 kernel: BTRFS info (device dm-0): first mount of filesystem d6826f11-765e-43ab-9425-5cf9fd7ef603
Sep 4 15:44:50.688647 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 4 15:44:50.693177 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 15:44:50.693210 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 4 15:44:50.693998 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 15:44:50.695093 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 4 15:44:50.696087 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 15:44:50.696814 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 15:44:50.699504 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 15:44:50.717441 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (657)
Sep 4 15:44:50.717480 kernel: BTRFS info (device vda6): first mount of filesystem 7ad7f3a7-2940-40b1-9356-75c56294c96d
Sep 4 15:44:50.717493 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 15:44:50.720253 kernel: BTRFS info (device vda6): turning on async discard
Sep 4 15:44:50.720285 kernel: BTRFS info (device vda6): enabling free space tree
Sep 4 15:44:50.724175 kernel: BTRFS info (device vda6): last unmount of filesystem 7ad7f3a7-2940-40b1-9356-75c56294c96d
Sep 4 15:44:50.725136 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 15:44:50.727830 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 15:44:50.794805 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 15:44:50.797406 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 15:44:50.824072 ignition[704]: Ignition 2.22.0
Sep 4 15:44:50.824087 ignition[704]: Stage: fetch-offline
Sep 4 15:44:50.824112 ignition[704]: no configs at "/usr/lib/ignition/base.d"
Sep 4 15:44:50.824119 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 15:44:50.824204 ignition[704]: parsed url from cmdline: ""
Sep 4 15:44:50.824206 ignition[704]: no config URL provided
Sep 4 15:44:50.824212 ignition[704]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 15:44:50.824218 ignition[704]: no config at "/usr/lib/ignition/user.ign"
Sep 4 15:44:50.824236 ignition[704]: op(1): [started] loading QEMU firmware config module
Sep 4 15:44:50.824240 ignition[704]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 4 15:44:50.830490 ignition[704]: op(1): [finished] loading QEMU firmware config module
Sep 4 15:44:50.834548 systemd-networkd[806]: lo: Link UP
Sep 4 15:44:50.834559 systemd-networkd[806]: lo: Gained carrier
Sep 4 15:44:50.835545 systemd-networkd[806]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Sep 4 15:44:50.835549 systemd-networkd[806]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 15:44:50.835955 systemd-networkd[806]: eth0: Link UP
Sep 4 15:44:50.836310 systemd-networkd[806]: eth0: Gained carrier
Sep 4 15:44:50.836318 systemd-networkd[806]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Sep 4 15:44:50.836547 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 15:44:50.840009 systemd[1]: Reached target network.target - Network.
Sep 4 15:44:50.857194 systemd-networkd[806]: eth0: DHCPv4 address 10.0.0.39/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 15:44:50.878152 ignition[704]: parsing config with SHA512: b11f521d9b218923f905b87e2ff5f1453f4c10438b242d1840c705ef207bcf31587bf38ad6325379e88a148454e63e071efd1e97f9c31b0eeabc7e1432b7a465
Sep 4 15:44:50.883285 unknown[704]: fetched base config from "system"
Sep 4 15:44:50.883295 unknown[704]: fetched user config from "qemu"
Sep 4 15:44:50.883662 ignition[704]: fetch-offline: fetch-offline passed
Sep 4 15:44:50.883716 ignition[704]: Ignition finished successfully
Sep 4 15:44:50.885402 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 15:44:50.886681 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 4 15:44:50.887492 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 15:44:50.918963 ignition[815]: Ignition 2.22.0
Sep 4 15:44:50.918976 ignition[815]: Stage: kargs
Sep 4 15:44:50.919093 ignition[815]: no configs at "/usr/lib/ignition/base.d"
Sep 4 15:44:50.919101 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 15:44:50.919839 ignition[815]: kargs: kargs passed
Sep 4 15:44:50.919879 ignition[815]: Ignition finished successfully
Sep 4 15:44:50.922101 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 15:44:50.923883 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 15:44:50.948828 ignition[823]: Ignition 2.22.0
Sep 4 15:44:50.948843 ignition[823]: Stage: disks
Sep 4 15:44:50.948960 ignition[823]: no configs at "/usr/lib/ignition/base.d"
Sep 4 15:44:50.948968 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 15:44:50.949691 ignition[823]: disks: disks passed
Sep 4 15:44:50.949728 ignition[823]: Ignition finished successfully
Sep 4 15:44:50.952695 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 15:44:50.953933 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 15:44:50.955095 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 15:44:50.956631 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 15:44:50.957990 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 15:44:50.959577 systemd[1]: Reached target basic.target - Basic System.
Sep 4 15:44:50.961627 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 15:44:50.983622 systemd-fsck[834]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 4 15:44:50.987496 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 15:44:50.989727 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 15:44:51.043188 kernel: EXT4-fs (vda9): mounted filesystem 1afcf1f8-650a-49cc-971e-a57f02cf6533 r/w with ordered data mode. Quota mode: none.
Sep 4 15:44:51.043725 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 15:44:51.044817 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 15:44:51.046765 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 15:44:51.048225 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 15:44:51.048983 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 15:44:51.049023 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 15:44:51.049047 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 15:44:51.059428 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 15:44:51.061659 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 15:44:51.067461 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (842)
Sep 4 15:44:51.067485 kernel: BTRFS info (device vda6): first mount of filesystem 7ad7f3a7-2940-40b1-9356-75c56294c96d
Sep 4 15:44:51.067497 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 15:44:51.067508 kernel: BTRFS info (device vda6): turning on async discard
Sep 4 15:44:51.067519 kernel: BTRFS info (device vda6): enabling free space tree
Sep 4 15:44:51.068015 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 15:44:51.104252 initrd-setup-root[867]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 15:44:51.107878 initrd-setup-root[874]: cut: /sysroot/etc/group: No such file or directory
Sep 4 15:44:51.111596 initrd-setup-root[881]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 15:44:51.114126 initrd-setup-root[888]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 15:44:51.175428 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 15:44:51.177422 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 15:44:51.178744 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 15:44:51.196173 kernel: BTRFS info (device vda6): last unmount of filesystem 7ad7f3a7-2940-40b1-9356-75c56294c96d
Sep 4 15:44:51.203352 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 15:44:51.214099 ignition[957]: INFO : Ignition 2.22.0
Sep 4 15:44:51.214099 ignition[957]: INFO : Stage: mount
Sep 4 15:44:51.216231 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 15:44:51.216231 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 15:44:51.216231 ignition[957]: INFO : mount: mount passed
Sep 4 15:44:51.216231 ignition[957]: INFO : Ignition finished successfully
Sep 4 15:44:51.216868 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 15:44:51.218571 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 15:44:51.814073 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 15:44:51.815546 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 15:44:51.830207 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (968)
Sep 4 15:44:51.832531 kernel: BTRFS info (device vda6): first mount of filesystem 7ad7f3a7-2940-40b1-9356-75c56294c96d
Sep 4 15:44:51.832567 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 15:44:51.835175 kernel: BTRFS info (device vda6): turning on async discard
Sep 4 15:44:51.835201 kernel: BTRFS info (device vda6): enabling free space tree
Sep 4 15:44:51.835955 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 15:44:51.867304 ignition[985]: INFO : Ignition 2.22.0
Sep 4 15:44:51.867304 ignition[985]: INFO : Stage: files
Sep 4 15:44:51.868701 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 15:44:51.868701 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 15:44:51.868701 ignition[985]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 15:44:51.871550 ignition[985]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 15:44:51.871550 ignition[985]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 15:44:51.873881 ignition[985]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 15:44:51.873881 ignition[985]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 15:44:51.873881 ignition[985]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 15:44:51.873369 unknown[985]: wrote ssh authorized keys file for user: core
Sep 4 15:44:51.878222 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 4 15:44:51.878222 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 4 15:44:51.934980 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 15:44:52.448021 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 4 15:44:52.448021 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 15:44:52.451154 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 4 15:44:52.672339 systemd-networkd[806]: eth0: Gained IPv6LL
Sep 4 15:44:52.735934 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 4 15:44:52.881284 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 15:44:52.883009 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 15:44:52.883009 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 15:44:52.883009 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 15:44:52.883009 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 15:44:52.883009 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 15:44:52.883009 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 15:44:52.883009 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 15:44:52.883009 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 15:44:52.893309 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 15:44:52.893309 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 15:44:52.893309 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 4 15:44:52.893309 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 4 15:44:52.893309 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 4 15:44:52.893309 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 4 15:44:53.474536 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 4 15:44:54.259228 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 4 15:44:54.259228 ignition[985]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 4 15:44:54.262276 ignition[985]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 15:44:54.265327 ignition[985]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 15:44:54.265327 ignition[985]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 4 15:44:54.265327 ignition[985]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 4 15:44:54.269118 ignition[985]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 15:44:54.269118 ignition[985]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 15:44:54.269118 ignition[985]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 4 15:44:54.269118 ignition[985]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 4 15:44:54.284267 ignition[985]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 15:44:54.288602 ignition[985]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 15:44:54.291276 ignition[985]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 4 15:44:54.291276 ignition[985]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 15:44:54.291276 ignition[985]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 15:44:54.291276 ignition[985]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 15:44:54.291276 ignition[985]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 15:44:54.291276 ignition[985]: INFO : files: files passed
Sep 4 15:44:54.291276 ignition[985]: INFO : Ignition finished successfully
Sep 4 15:44:54.294307 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 15:44:54.297339 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 15:44:54.304997 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 15:44:54.319486 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 15:44:54.321193 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 15:44:54.323907 initrd-setup-root-after-ignition[1014]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 4 15:44:54.329611 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 15:44:54.329611 initrd-setup-root-after-ignition[1016]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 15:44:54.331997 initrd-setup-root-after-ignition[1020]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 15:44:54.337321 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 15:44:54.338534 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 15:44:54.340539 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 15:44:54.378278 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 15:44:54.378388 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 15:44:54.380204 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 15:44:54.381673 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 15:44:54.382997 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 15:44:54.383720 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 15:44:54.409227 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 15:44:54.415482 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 15:44:54.431843 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 15:44:54.433466 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 15:44:54.434425 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 15:44:54.436840 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 15:44:54.436950 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 15:44:54.439328 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 15:44:54.440906 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 15:44:54.441671 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 15:44:54.443945 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 15:44:54.444915 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 15:44:54.447493 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 4 15:44:54.448377 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 15:44:54.450757 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 15:44:54.451789 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 15:44:54.454396 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 15:44:54.455222 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 15:44:54.457426 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 15:44:54.457551 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 15:44:54.459731 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 15:44:54.461514 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 15:44:54.462430 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 15:44:54.463999 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 15:44:54.466039 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 15:44:54.466145 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 15:44:54.468361 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 15:44:54.468477 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 15:44:54.470112 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 15:44:54.471486 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 15:44:54.477213 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 15:44:54.478174 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 15:44:54.479909 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 15:44:54.481081 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 15:44:54.481155 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 15:44:54.482459 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 15:44:54.482528 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 15:44:54.483787 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 15:44:54.483897 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 15:44:54.485182 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 15:44:54.485276 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 15:44:54.487212 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 15:44:54.489116 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 15:44:54.489841 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 15:44:54.489963 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 15:44:54.491422 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 15:44:54.491516 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 15:44:54.492850 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 15:44:54.492944 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 15:44:54.497434 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 15:44:54.498268 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 15:44:54.506015 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 15:44:54.513800 ignition[1041]: INFO : Ignition 2.22.0
Sep 4 15:44:54.513800 ignition[1041]: INFO : Stage: umount
Sep 4 15:44:54.516321 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 15:44:54.516321 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 15:44:54.516321 ignition[1041]: INFO : umount: umount passed
Sep 4 15:44:54.516321 ignition[1041]: INFO : Ignition finished successfully
Sep 4 15:44:54.517380 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 15:44:54.517494 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 15:44:54.519001 systemd[1]: Stopped target network.target - Network.
Sep 4 15:44:54.520104 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 15:44:54.520176 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 15:44:54.521421 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 15:44:54.521458 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 15:44:54.522929 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 15:44:54.522973 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 15:44:54.527520 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 15:44:54.527563 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 15:44:54.528937 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 15:44:54.530224 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 15:44:54.538352 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 15:44:54.538468 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 15:44:54.542483 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 15:44:54.542602 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 15:44:54.545932 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 4 15:44:54.546850 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 15:44:54.546882 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 15:44:54.549053 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 15:44:54.550533 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 15:44:54.550579 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 15:44:54.552145 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 15:44:54.552829 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 15:44:54.554652 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 15:44:54.554693 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 15:44:54.556046 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 15:44:54.568463 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 15:44:54.568605 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 15:44:54.570854 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 15:44:54.570909 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 15:44:54.572096 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 15:44:54.572124 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 15:44:54.573566 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 15:44:54.573609 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 15:44:54.575777 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 15:44:54.575839 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 15:44:54.578008 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 15:44:54.578055 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 15:44:54.582857 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 15:44:54.583744 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 4 15:44:54.583809 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 15:44:54.585273 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 15:44:54.585315 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 15:44:54.586975 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 4 15:44:54.587012 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 15:44:54.588607 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 15:44:54.588645 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 15:44:54.590055 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 15:44:54.590092 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 15:44:54.592448 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 15:44:54.598274 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 15:44:54.599326 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 15:44:54.599413 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 15:44:54.601132 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 15:44:54.601223 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 15:44:54.602908 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 15:44:54.602994 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 15:44:54.604438 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 15:44:54.606034 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 15:44:54.620453 systemd[1]: Switching root.
Sep 4 15:44:54.643147 systemd-journald[245]: Journal stopped
Sep 4 15:44:55.358768 systemd-journald[245]: Received SIGTERM from PID 1 (systemd).
Sep 4 15:44:55.358829 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 15:44:55.358846 kernel: SELinux: policy capability open_perms=1
Sep 4 15:44:55.358859 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 15:44:55.358869 kernel: SELinux: policy capability always_check_network=0
Sep 4 15:44:55.358882 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 15:44:55.358892 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 15:44:55.358903 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 15:44:55.358912 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 15:44:55.358935 kernel: SELinux: policy capability userspace_initial_context=0
Sep 4 15:44:55.358945 kernel: audit: type=1403 audit(1757000694.840:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 15:44:55.358960 systemd[1]: Successfully loaded SELinux policy in 65.013ms.
Sep 4 15:44:55.358980 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.034ms.
Sep 4 15:44:55.358992 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 15:44:55.359006 systemd[1]: Detected virtualization kvm.
Sep 4 15:44:55.359017 systemd[1]: Detected architecture arm64.
Sep 4 15:44:55.359028 systemd[1]: Detected first boot.
Sep 4 15:44:55.359038 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Sep 4 15:44:55.359049 zram_generator::config[1087]: No configuration found.
Sep 4 15:44:55.359061 kernel: NET: Registered PF_VSOCK protocol family
Sep 4 15:44:55.359072 systemd[1]: Populated /etc with preset unit settings.
Sep 4 15:44:55.359082 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 15:44:55.359093 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 15:44:55.359103 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 15:44:55.359114 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 15:44:55.359126 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 15:44:55.359137 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 15:44:55.359147 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 15:44:55.359532 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 15:44:55.359619 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 15:44:55.359634 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 15:44:55.359645 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 15:44:55.359658 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 15:44:55.359670 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 15:44:55.359680 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 15:44:55.359691 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 15:44:55.359701 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 15:44:55.359712 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 15:44:55.359722 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 4 15:44:55.359734 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 15:44:55.359744 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 15:44:55.359755 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 15:44:55.359772 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 15:44:55.359792 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 15:44:55.359806 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 15:44:55.359819 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 15:44:55.359830 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 15:44:55.359841 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 15:44:55.359851 systemd[1]: Reached target swap.target - Swaps.
Sep 4 15:44:55.359861 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 15:44:55.359872 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 15:44:55.359883 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 4 15:44:55.359893 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 15:44:55.359905 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 15:44:55.359916 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 15:44:55.359926 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 15:44:55.359937 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 15:44:55.359947 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 15:44:55.359963 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 15:44:55.359974 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 15:44:55.359985 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 15:44:55.359996 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 15:44:55.360008 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 15:44:55.360019 systemd[1]: Reached target machines.target - Containers.
Sep 4 15:44:55.360029 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 15:44:55.360039 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 15:44:55.360055 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 15:44:55.360066 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 15:44:55.360076 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 15:44:55.360086 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 15:44:55.360096 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 15:44:55.360107 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 15:44:55.360118 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 15:44:55.360130 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 15:44:55.360140 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 15:44:55.360150 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 15:44:55.360174 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 15:44:55.360184 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 15:44:55.360196 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 15:44:55.360207 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 15:44:55.360219 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 15:44:55.360230 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 15:44:55.360240 kernel: fuse: init (API version 7.41)
Sep 4 15:44:55.360251 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 15:44:55.360261 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 4 15:44:55.360271 kernel: loop: module loaded
Sep 4 15:44:55.360283 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 15:44:55.360294 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 15:44:55.360304 systemd[1]: Stopped verity-setup.service.
Sep 4 15:44:55.360315 kernel: ACPI: bus type drm_connector registered
Sep 4 15:44:55.360325 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 15:44:55.360359 systemd-journald[1159]: Collecting audit messages is disabled.
Sep 4 15:44:55.360382 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 15:44:55.360394 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 15:44:55.360405 systemd-journald[1159]: Journal started
Sep 4 15:44:55.360425 systemd-journald[1159]: Runtime Journal (/run/log/journal/99938f6d64f04f9d8f95821b8117982c) is 6M, max 48.5M, 42.4M free.
Sep 4 15:44:55.187029 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 15:44:55.196998 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 4 15:44:55.197396 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 15:44:55.363221 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 15:44:55.364125 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 15:44:55.365209 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 15:44:55.366098 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 15:44:55.368241 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 15:44:55.369493 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 15:44:55.370658 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 15:44:55.370830 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 15:44:55.372049 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 15:44:55.372247 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 15:44:55.373301 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 15:44:55.373452 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 15:44:55.374471 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 15:44:55.374627 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 15:44:55.375821 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 15:44:55.375968 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 15:44:55.377092 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 15:44:55.377399 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 15:44:55.378467 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 15:44:55.379860 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 15:44:55.381684 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 15:44:55.383020 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 4 15:44:55.394077 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 15:44:55.395523 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Sep 4 15:44:55.397465 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 15:44:55.399114 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 15:44:55.400169 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 15:44:55.400203 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 15:44:55.401728 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 4 15:44:55.402954 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 15:44:55.404905 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 15:44:55.406614 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 15:44:55.407542 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 15:44:55.408862 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 15:44:55.409878 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 15:44:55.412287 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 15:44:55.413939 systemd-journald[1159]: Time spent on flushing to /var/log/journal/99938f6d64f04f9d8f95821b8117982c is 12.097ms for 885 entries.
Sep 4 15:44:55.413939 systemd-journald[1159]: System Journal (/var/log/journal/99938f6d64f04f9d8f95821b8117982c) is 8M, max 195.6M, 187.6M free.
Sep 4 15:44:55.435931 systemd-journald[1159]: Received client request to flush runtime journal.
Sep 4 15:44:55.435978 kernel: loop0: detected capacity change from 0 to 119320
Sep 4 15:44:55.415304 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 15:44:55.417761 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 15:44:55.421208 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 15:44:55.422809 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 15:44:55.423885 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 15:44:55.425154 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 15:44:55.429697 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 15:44:55.433317 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 4 15:44:55.441225 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 15:44:55.441806 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
Sep 4 15:44:55.441822 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
Sep 4 15:44:55.445204 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 15:44:55.445441 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 15:44:55.449248 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 15:44:55.452144 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 15:44:55.459206 kernel: loop1: detected capacity change from 0 to 100608
Sep 4 15:44:55.466014 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 4 15:44:55.477355 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 15:44:55.479722 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 15:44:55.481480 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 15:44:55.488187 kernel: loop2: detected capacity change from 0 to 211168
Sep 4 15:44:55.488404 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 15:44:55.501402 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
Sep 4 15:44:55.501420 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
Sep 4 15:44:55.504528 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 15:44:55.513181 kernel: loop3: detected capacity change from 0 to 119320
Sep 4 15:44:55.518205 kernel: loop4: detected capacity change from 0 to 100608
Sep 4 15:44:55.520552 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 15:44:55.522241 kernel: loop5: detected capacity change from 0 to 211168
Sep 4 15:44:55.526491 (sd-merge)[1231]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Sep 4 15:44:55.528882 (sd-merge)[1231]: Merged extensions into '/usr'.
Sep 4 15:44:55.531947 systemd[1]: Reload requested from client PID 1203 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 15:44:55.531963 systemd[1]: Reloading...
Sep 4 15:44:55.580408 zram_generator::config[1261]: No configuration found.
Sep 4 15:44:55.592017 systemd-resolved[1225]: Positive Trust Anchors:
Sep 4 15:44:55.592033 systemd-resolved[1225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 15:44:55.592037 systemd-resolved[1225]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Sep 4 15:44:55.592068 systemd-resolved[1225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 15:44:55.599334 systemd-resolved[1225]: Defaulting to hostname 'linux'.
Sep 4 15:44:55.723263 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 15:44:55.723422 systemd[1]: Reloading finished in 191 ms.
Sep 4 15:44:55.752772 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 15:44:55.754058 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 15:44:55.758921 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 15:44:55.773340 systemd[1]: Starting ensure-sysext.service...
Sep 4 15:44:55.774994 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 15:44:55.783867 systemd[1]: Reload requested from client PID 1297 ('systemctl') (unit ensure-sysext.service)...
Sep 4 15:44:55.783886 systemd[1]: Reloading...
Sep 4 15:44:55.790625 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 4 15:44:55.790949 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 4 15:44:55.791269 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 15:44:55.791553 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 15:44:55.792278 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 15:44:55.792559 systemd-tmpfiles[1298]: ACLs are not supported, ignoring.
Sep 4 15:44:55.792667 systemd-tmpfiles[1298]: ACLs are not supported, ignoring.
Sep 4 15:44:55.797310 systemd-tmpfiles[1298]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 15:44:55.797415 systemd-tmpfiles[1298]: Skipping /boot
Sep 4 15:44:55.803724 systemd-tmpfiles[1298]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 15:44:55.803860 systemd-tmpfiles[1298]: Skipping /boot
Sep 4 15:44:55.831193 zram_generator::config[1328]: No configuration found.
Sep 4 15:44:55.961511 systemd[1]: Reloading finished in 177 ms.
Sep 4 15:44:55.971186 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 15:44:55.984997 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 15:44:55.992251 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 15:44:55.994413 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 15:44:56.004631 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 15:44:56.008396 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 15:44:56.010888 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 15:44:56.013695 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 15:44:56.017879 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 15:44:56.019403 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 15:44:56.022004 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 15:44:56.029538 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 15:44:56.030593 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 15:44:56.030709 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 15:44:56.031583 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 15:44:56.035242 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 15:44:56.037088 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 15:44:56.041803 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 15:44:56.044185 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 15:44:56.045520 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 15:44:56.045633 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 15:44:56.048241 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 15:44:56.050197 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 15:44:56.052819 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 15:44:56.052994 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 15:44:56.054867 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 15:44:56.055031 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 15:44:56.057520 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 15:44:56.065705 systemd[1]: Finished ensure-sysext.service.
Sep 4 15:44:56.068207 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 15:44:56.070377 systemd-udevd[1372]: Using default interface naming scheme 'v257'.
Sep 4 15:44:56.072217 augenrules[1401]: No rules
Sep 4 15:44:56.072418 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 15:44:56.073480 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 15:44:56.074603 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 15:44:56.074646 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 15:44:56.074679 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 15:44:56.074716 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 15:44:56.076303 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 4 15:44:56.077368 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 15:44:56.077716 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 15:44:56.077945 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 15:44:56.089764 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 15:44:56.089958 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 15:44:56.092296 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 15:44:56.096735 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 15:44:56.176258 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 4 15:44:56.220331 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 4 15:44:56.222562 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 15:44:56.242874 systemd-networkd[1422]: lo: Link UP
Sep 4 15:44:56.242885 systemd-networkd[1422]: lo: Gained carrier
Sep 4 15:44:56.244836 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 15:44:56.246294 systemd[1]: Reached target network.target - Network.
Sep 4 15:44:56.249252 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 4 15:44:56.250391 systemd-networkd[1422]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Sep 4 15:44:56.250402 systemd-networkd[1422]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 15:44:56.251925 systemd-networkd[1422]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Sep 4 15:44:56.251961 systemd-networkd[1422]: eth0: Link UP
Sep 4 15:44:56.252350 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 15:44:56.255703 systemd-networkd[1422]: eth0: Gained carrier
Sep 4 15:44:56.255723 systemd-networkd[1422]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Sep 4 15:44:56.269319 systemd-networkd[1422]: eth0: DHCPv4 address 10.0.0.39/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 15:44:56.270267 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Sep 4 15:44:56.754679 systemd-timesyncd[1407]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 4 15:44:56.754730 systemd-timesyncd[1407]: Initial clock synchronization to Thu 2025-09-04 15:44:56.754485 UTC.
Sep 4 15:44:56.755267 systemd-resolved[1225]: Clock change detected. Flushing caches.
Sep 4 15:44:56.761038 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 4 15:44:56.769915 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 15:44:56.772299 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 15:44:56.777961 ldconfig[1366]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 15:44:56.783773 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 15:44:56.787098 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 15:44:56.800824 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 15:44:56.807045 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 15:44:56.808161 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 15:44:56.809088 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 15:44:56.810946 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 15:44:56.812028 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 15:44:56.813248 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 15:44:56.816904 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 15:44:56.817854 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 15:44:56.817884 systemd[1]: Reached target paths.target - Path Units. Sep 4 15:44:56.818533 systemd[1]: Reached target timers.target - Timer Units. Sep 4 15:44:56.820055 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 15:44:56.822003 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Sep 4 15:44:56.825373 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 4 15:44:56.827468 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 4 15:44:56.829807 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 4 15:44:56.839365 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 15:44:56.840482 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 4 15:44:56.844185 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 15:44:56.850839 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 15:44:56.851637 systemd[1]: Reached target basic.target - Basic System. Sep 4 15:44:56.852511 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 15:44:56.852541 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 15:44:56.853437 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 15:44:56.855155 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 15:44:56.856751 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 15:44:56.861487 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 15:44:56.864038 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 15:44:56.864795 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 15:44:56.865668 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 15:44:56.867346 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Sep 4 15:44:56.868895 jq[1481]: false Sep 4 15:44:56.870700 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 15:44:56.873118 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 15:44:56.876530 extend-filesystems[1482]: Found /dev/vda6 Sep 4 15:44:56.877690 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 15:44:56.879433 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 15:44:56.879731 extend-filesystems[1482]: Found /dev/vda9 Sep 4 15:44:56.880288 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 15:44:56.880666 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 15:44:56.881206 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 15:44:56.883367 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 15:44:56.885810 extend-filesystems[1482]: Checking size of /dev/vda9 Sep 4 15:44:56.887753 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 15:44:56.889992 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 15:44:56.890153 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 15:44:56.890403 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 15:44:56.891781 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 15:44:56.895835 jq[1498]: true Sep 4 15:44:56.898697 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 15:44:56.900821 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Sep 4 15:44:56.905709 extend-filesystems[1482]: Resized partition /dev/vda9 Sep 4 15:44:56.907399 extend-filesystems[1522]: resize2fs 1.47.2 (1-Jan-2025) Sep 4 15:44:56.909714 update_engine[1496]: I20250904 15:44:56.909426 1496 main.cc:92] Flatcar Update Engine starting Sep 4 15:44:56.910028 (ntainerd)[1513]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 15:44:56.920769 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 4 15:44:56.938959 tar[1517]: linux-arm64/LICENSE Sep 4 15:44:56.939295 tar[1517]: linux-arm64/helm Sep 4 15:44:56.946433 jq[1523]: true Sep 4 15:44:56.952001 dbus-daemon[1479]: [system] SELinux support is enabled Sep 4 15:44:56.952156 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 15:44:56.957052 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 15:44:56.957085 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 15:44:56.958323 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 15:44:56.958350 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 15:44:56.960661 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 15:44:56.964567 update_engine[1496]: I20250904 15:44:56.964514 1496 update_check_scheduler.cc:74] Next update check in 11m58s Sep 4 15:44:56.966158 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 4 15:44:56.966707 systemd[1]: Started update-engine.service - Update Engine. Sep 4 15:44:56.977649 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Sep 4 15:44:56.982228 extend-filesystems[1522]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 4 15:44:56.982228 extend-filesystems[1522]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 15:44:56.982228 extend-filesystems[1522]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 4 15:44:56.988638 extend-filesystems[1482]: Resized filesystem in /dev/vda9 Sep 4 15:44:56.983034 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 15:44:56.983246 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 15:44:57.004605 bash[1558]: Updated "/home/core/.ssh/authorized_keys" Sep 4 15:44:57.007843 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 15:44:57.009233 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 4 15:44:57.015826 systemd-logind[1492]: Watching system buttons on /dev/input/event0 (Power Button) Sep 4 15:44:57.016686 systemd-logind[1492]: New seat seat0. Sep 4 15:44:57.021537 systemd[1]: Started systemd-logind.service - User Login Management. 
Sep 4 15:44:57.035209 locksmithd[1536]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 15:44:57.078422 containerd[1513]: time="2025-09-04T15:44:57Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 4 15:44:57.079001 containerd[1513]: time="2025-09-04T15:44:57.078970424Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 4 15:44:57.094148 containerd[1513]: time="2025-09-04T15:44:57.094105624Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.04µs"
Sep 4 15:44:57.094148 containerd[1513]: time="2025-09-04T15:44:57.094144704Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 4 15:44:57.094242 containerd[1513]: time="2025-09-04T15:44:57.094167744Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 4 15:44:57.094338 containerd[1513]: time="2025-09-04T15:44:57.094316824Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 4 15:44:57.094369 containerd[1513]: time="2025-09-04T15:44:57.094338864Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 4 15:44:57.094391 containerd[1513]: time="2025-09-04T15:44:57.094376864Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 4 15:44:57.094451 containerd[1513]: time="2025-09-04T15:44:57.094430304Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 4 15:44:57.094483 containerd[1513]: time="2025-09-04T15:44:57.094459744Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 4 15:44:57.094701 containerd[1513]: time="2025-09-04T15:44:57.094678024Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 4 15:44:57.094728 containerd[1513]: time="2025-09-04T15:44:57.094702024Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 4 15:44:57.094728 containerd[1513]: time="2025-09-04T15:44:57.094714904Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 4 15:44:57.094728 containerd[1513]: time="2025-09-04T15:44:57.094726184Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 4 15:44:57.094832 containerd[1513]: time="2025-09-04T15:44:57.094811864Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 4 15:44:57.096235 containerd[1513]: time="2025-09-04T15:44:57.096204264Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 4 15:44:57.096273 containerd[1513]: time="2025-09-04T15:44:57.096251584Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 4 15:44:57.096273 containerd[1513]: time="2025-09-04T15:44:57.096263744Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 4 15:44:57.096313 containerd[1513]: time="2025-09-04T15:44:57.096292904Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 4 15:44:57.096530 containerd[1513]: time="2025-09-04T15:44:57.096511904Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 4 15:44:57.096593 containerd[1513]: time="2025-09-04T15:44:57.096577784Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 15:44:57.099370 containerd[1513]: time="2025-09-04T15:44:57.099339264Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 4 15:44:57.099412 containerd[1513]: time="2025-09-04T15:44:57.099397664Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 4 15:44:57.099430 containerd[1513]: time="2025-09-04T15:44:57.099411184Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 4 15:44:57.099430 containerd[1513]: time="2025-09-04T15:44:57.099422784Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 4 15:44:57.099462 containerd[1513]: time="2025-09-04T15:44:57.099434224Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 4 15:44:57.099478 containerd[1513]: time="2025-09-04T15:44:57.099471624Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 4 15:44:57.099495 containerd[1513]: time="2025-09-04T15:44:57.099485384Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 4 15:44:57.099525 containerd[1513]: time="2025-09-04T15:44:57.099497864Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 4 15:44:57.099525 containerd[1513]: time="2025-09-04T15:44:57.099509184Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 4 15:44:57.099525 containerd[1513]: time="2025-09-04T15:44:57.099518784Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 4 15:44:57.099572 containerd[1513]: time="2025-09-04T15:44:57.099527704Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 4 15:44:57.099572 containerd[1513]: time="2025-09-04T15:44:57.099539584Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 4 15:44:57.100743 containerd[1513]: time="2025-09-04T15:44:57.099635344Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 4 15:44:57.100743 containerd[1513]: time="2025-09-04T15:44:57.099677224Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 4 15:44:57.100743 containerd[1513]: time="2025-09-04T15:44:57.099692344Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 4 15:44:57.100743 containerd[1513]: time="2025-09-04T15:44:57.099702384Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 4 15:44:57.100743 containerd[1513]: time="2025-09-04T15:44:57.099711864Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 4 15:44:57.100743 containerd[1513]: time="2025-09-04T15:44:57.099721504Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 4 15:44:57.100743 containerd[1513]: time="2025-09-04T15:44:57.099731424Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 4 15:44:57.100743 containerd[1513]: time="2025-09-04T15:44:57.099754344Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 4 15:44:57.100743 containerd[1513]: time="2025-09-04T15:44:57.099765864Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 4 15:44:57.100743 containerd[1513]: time="2025-09-04T15:44:57.099775864Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 4 15:44:57.100743 containerd[1513]: time="2025-09-04T15:44:57.099785744Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 4 15:44:57.100743 containerd[1513]: time="2025-09-04T15:44:57.099960224Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 4 15:44:57.100743 containerd[1513]: time="2025-09-04T15:44:57.099973264Z" level=info msg="Start snapshots syncer"
Sep 4 15:44:57.100743 containerd[1513]: time="2025-09-04T15:44:57.100000784Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 4 15:44:57.101010 containerd[1513]: time="2025-09-04T15:44:57.100213064Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 4 15:44:57.101010 containerd[1513]: time="2025-09-04T15:44:57.100256464Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 4 15:44:57.101099 containerd[1513]: time="2025-09-04T15:44:57.100324824Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 4 15:44:57.101099 containerd[1513]: time="2025-09-04T15:44:57.100426104Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 4 15:44:57.101099 containerd[1513]: time="2025-09-04T15:44:57.100456144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 4 15:44:57.101099 containerd[1513]: time="2025-09-04T15:44:57.100466744Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 4 15:44:57.101099 containerd[1513]: time="2025-09-04T15:44:57.100477384Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 4 15:44:57.101099 containerd[1513]: time="2025-09-04T15:44:57.100494864Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 4 15:44:57.101099 containerd[1513]: time="2025-09-04T15:44:57.100506224Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 4 15:44:57.101099 containerd[1513]: time="2025-09-04T15:44:57.100516944Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 4 15:44:57.101099 containerd[1513]: time="2025-09-04T15:44:57.100537664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 4 15:44:57.101099 containerd[1513]: time="2025-09-04T15:44:57.100548104Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 4 15:44:57.101099 containerd[1513]: time="2025-09-04T15:44:57.100557664Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 4 15:44:57.101099 containerd[1513]: time="2025-09-04T15:44:57.100621384Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 4 15:44:57.101099 containerd[1513]: time="2025-09-04T15:44:57.100635784Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 4 15:44:57.101099 containerd[1513]: time="2025-09-04T15:44:57.100646584Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 4 15:44:57.101315 containerd[1513]: time="2025-09-04T15:44:57.100668984Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 4 15:44:57.101315 containerd[1513]: time="2025-09-04T15:44:57.100676784Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 4 15:44:57.101315 containerd[1513]: time="2025-09-04T15:44:57.100688664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 4 15:44:57.101315 containerd[1513]: time="2025-09-04T15:44:57.100698424Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 4 15:44:57.101315 containerd[1513]: time="2025-09-04T15:44:57.100792344Z" level=info msg="runtime interface created"
Sep 4 15:44:57.101315 containerd[1513]: time="2025-09-04T15:44:57.100798344Z" level=info msg="created NRI interface"
Sep 4 15:44:57.101315 containerd[1513]: time="2025-09-04T15:44:57.100808504Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 4 15:44:57.101315 containerd[1513]: time="2025-09-04T15:44:57.100818864Z" level=info msg="Connect containerd service"
Sep 4 15:44:57.101315 containerd[1513]: time="2025-09-04T15:44:57.100843824Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 15:44:57.101616 containerd[1513]: time="2025-09-04T15:44:57.101587664Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 15:44:57.171856 containerd[1513]: time="2025-09-04T15:44:57.171789784Z" level=info msg="Start subscribing containerd event"
Sep 4 15:44:57.171944 containerd[1513]: time="2025-09-04T15:44:57.171871664Z" level=info msg="Start recovering state"
Sep 4 15:44:57.171989 containerd[1513]: time="2025-09-04T15:44:57.171969224Z" level=info msg="Start event monitor"
Sep 4 15:44:57.172040 containerd[1513]: time="2025-09-04T15:44:57.171992864Z" level=info msg="Start cni network conf syncer for default"
Sep 4 15:44:57.172040 containerd[1513]: time="2025-09-04T15:44:57.172003824Z" level=info msg="Start streaming server"
Sep 4 15:44:57.172040 containerd[1513]: time="2025-09-04T15:44:57.172013544Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 4 15:44:57.172040 containerd[1513]: time="2025-09-04T15:44:57.172020224Z" level=info msg="runtime interface starting up..."
Sep 4 15:44:57.172040 containerd[1513]: time="2025-09-04T15:44:57.172025704Z" level=info msg="starting plugins..."
Sep 4 15:44:57.172040 containerd[1513]: time="2025-09-04T15:44:57.172039504Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 4 15:44:57.172210 containerd[1513]: time="2025-09-04T15:44:57.172043784Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 4 15:44:57.172263 containerd[1513]: time="2025-09-04T15:44:57.172248024Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 4 15:44:57.172317 containerd[1513]: time="2025-09-04T15:44:57.172304024Z" level=info msg="containerd successfully booted in 0.094321s"
Sep 4 15:44:57.172447 systemd[1]: Started containerd.service - containerd container runtime.
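[Editor's note: the `level=error msg="failed to load cni during init"` entry above is expected on first boot; containerd's CRI plugin retries once a network config appears in /etc/cni/net.d. A minimal bridge conflist sketch that would satisfy it is shown below; the file name, network name, and subnet are illustrative assumptions, and the `bridge`, `host-local`, and `portmap` plugins must exist under /opt/cni/bin.]

```json
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.85.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

[Saved, for example, as /etc/cni/net.d/10-example.conflist; the cni conf syncer started later in this log picks it up without a containerd restart.]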
Sep 4 15:44:57.269063 tar[1517]: linux-arm64/README.md Sep 4 15:44:57.288768 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 15:44:58.531872 systemd-networkd[1422]: eth0: Gained IPv6LL Sep 4 15:44:58.537240 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 15:44:58.538956 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 15:44:58.541208 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 4 15:44:58.543624 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 15:44:58.556974 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 15:44:58.579209 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 4 15:44:58.579754 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 4 15:44:58.583803 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 15:44:58.585233 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 15:44:58.634036 sshd_keygen[1512]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 15:44:58.654805 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 15:44:58.657258 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 15:44:58.671925 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 15:44:58.672108 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 15:44:58.674362 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 15:44:58.695063 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 15:44:58.698628 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 15:44:58.701293 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 4 15:44:58.702656 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 4 15:44:59.094646 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 15:44:59.096053 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 15:44:59.097136 systemd[1]: Startup finished in 1.987s (kernel) + 6.226s (initrd) + 3.838s (userspace) = 12.053s. Sep 4 15:44:59.098120 (kubelet)[1624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 15:44:59.441057 kubelet[1624]: E0904 15:44:59.440992 1624 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 15:44:59.443904 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 15:44:59.444023 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 15:44:59.444279 systemd[1]: kubelet.service: Consumed 750ms CPU time, 259.5M memory peak. Sep 4 15:45:01.621893 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 15:45:01.622892 systemd[1]: Started sshd@0-10.0.0.39:22-10.0.0.1:51676.service - OpenSSH per-connection server daemon (10.0.0.1:51676). Sep 4 15:45:01.705488 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 51676 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 15:45:01.707028 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 15:45:01.712444 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 15:45:01.713257 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 15:45:01.718864 systemd-logind[1492]: New session 1 of user core. 
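[Editor's note: the kubelet failure logged above (`open /var/lib/kubelet/config.yaml: no such file or directory`) is normal on a node that has not yet joined a cluster; `kubeadm init` or `kubeadm join` writes that file, after which the unit's restart logic brings kubelet up. For reference only, a minimal hand-written KubeletConfiguration sketch is shown below; every field value is an illustrative assumption, not the kubeadm-generated content.]

```yaml
# /var/lib/kubelet/config.yaml (illustrative sketch, normally generated by kubeadm)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd          # matches SystemdCgroup=true in the containerd config above
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
authentication:
  anonymous:
    enabled: false
```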
Sep 4 15:45:01.733017 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 15:45:01.735239 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 15:45:01.762338 (systemd)[1643]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 15:45:01.764142 systemd-logind[1492]: New session c1 of user core. Sep 4 15:45:01.859729 systemd[1643]: Queued start job for default target default.target. Sep 4 15:45:01.879589 systemd[1643]: Created slice app.slice - User Application Slice. Sep 4 15:45:01.879617 systemd[1643]: Reached target paths.target - Paths. Sep 4 15:45:01.879651 systemd[1643]: Reached target timers.target - Timers. Sep 4 15:45:01.880774 systemd[1643]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 15:45:01.889394 systemd[1643]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 15:45:01.889449 systemd[1643]: Reached target sockets.target - Sockets. Sep 4 15:45:01.889482 systemd[1643]: Reached target basic.target - Basic System. Sep 4 15:45:01.889509 systemd[1643]: Reached target default.target - Main User Target. Sep 4 15:45:01.889530 systemd[1643]: Startup finished in 120ms. Sep 4 15:45:01.889700 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 15:45:01.890936 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 15:45:01.953566 systemd[1]: Started sshd@1-10.0.0.39:22-10.0.0.1:51678.service - OpenSSH per-connection server daemon (10.0.0.1:51678). Sep 4 15:45:01.994519 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 51678 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 15:45:01.995648 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 15:45:01.999421 systemd-logind[1492]: New session 2 of user core. Sep 4 15:45:02.015893 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 4 15:45:02.065658 sshd[1657]: Connection closed by 10.0.0.1 port 51678 Sep 4 15:45:02.066077 sshd-session[1654]: pam_unix(sshd:session): session closed for user core Sep 4 15:45:02.083358 systemd[1]: sshd@1-10.0.0.39:22-10.0.0.1:51678.service: Deactivated successfully. Sep 4 15:45:02.085856 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 15:45:02.086431 systemd-logind[1492]: Session 2 logged out. Waiting for processes to exit. Sep 4 15:45:02.088972 systemd[1]: Started sshd@2-10.0.0.39:22-10.0.0.1:51694.service - OpenSSH per-connection server daemon (10.0.0.1:51694). Sep 4 15:45:02.089914 systemd-logind[1492]: Removed session 2. Sep 4 15:45:02.140544 sshd[1663]: Accepted publickey for core from 10.0.0.1 port 51694 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 15:45:02.141520 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 15:45:02.145507 systemd-logind[1492]: New session 3 of user core. Sep 4 15:45:02.154859 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 15:45:02.201571 sshd[1666]: Connection closed by 10.0.0.1 port 51694 Sep 4 15:45:02.201932 sshd-session[1663]: pam_unix(sshd:session): session closed for user core Sep 4 15:45:02.211475 systemd[1]: sshd@2-10.0.0.39:22-10.0.0.1:51694.service: Deactivated successfully. Sep 4 15:45:02.213833 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 15:45:02.214375 systemd-logind[1492]: Session 3 logged out. Waiting for processes to exit. Sep 4 15:45:02.216285 systemd[1]: Started sshd@3-10.0.0.39:22-10.0.0.1:51710.service - OpenSSH per-connection server daemon (10.0.0.1:51710). Sep 4 15:45:02.216700 systemd-logind[1492]: Removed session 3. 
Sep 4 15:45:02.268835 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 51710 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU
Sep 4 15:45:02.269887 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 15:45:02.273201 systemd-logind[1492]: New session 4 of user core.
Sep 4 15:45:02.281868 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 15:45:02.331780 sshd[1676]: Connection closed by 10.0.0.1 port 51710
Sep 4 15:45:02.332086 sshd-session[1672]: pam_unix(sshd:session): session closed for user core
Sep 4 15:45:02.341435 systemd[1]: sshd@3-10.0.0.39:22-10.0.0.1:51710.service: Deactivated successfully.
Sep 4 15:45:02.343861 systemd[1]: session-4.scope: Deactivated successfully.
Sep 4 15:45:02.344413 systemd-logind[1492]: Session 4 logged out. Waiting for processes to exit.
Sep 4 15:45:02.346323 systemd[1]: Started sshd@4-10.0.0.39:22-10.0.0.1:51716.service - OpenSSH per-connection server daemon (10.0.0.1:51716).
Sep 4 15:45:02.346736 systemd-logind[1492]: Removed session 4.
Sep 4 15:45:02.397041 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 51716 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU
Sep 4 15:45:02.398236 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 15:45:02.401802 systemd-logind[1492]: New session 5 of user core.
Sep 4 15:45:02.411869 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 15:45:02.467151 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 15:45:02.467403 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 15:45:02.484503 sudo[1686]: pam_unix(sudo:session): session closed for user root
Sep 4 15:45:02.486001 sshd[1685]: Connection closed by 10.0.0.1 port 51716
Sep 4 15:45:02.486445 sshd-session[1682]: pam_unix(sshd:session): session closed for user core
Sep 4 15:45:02.504453 systemd[1]: sshd@4-10.0.0.39:22-10.0.0.1:51716.service: Deactivated successfully.
Sep 4 15:45:02.505707 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 15:45:02.506326 systemd-logind[1492]: Session 5 logged out. Waiting for processes to exit.
Sep 4 15:45:02.509126 systemd[1]: Started sshd@5-10.0.0.39:22-10.0.0.1:51722.service - OpenSSH per-connection server daemon (10.0.0.1:51722).
Sep 4 15:45:02.509728 systemd-logind[1492]: Removed session 5.
Sep 4 15:45:02.565571 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 51722 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU
Sep 4 15:45:02.566612 sshd-session[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 15:45:02.569819 systemd-logind[1492]: New session 6 of user core.
Sep 4 15:45:02.580887 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 15:45:02.632138 sudo[1697]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 15:45:02.632620 sudo[1697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 15:45:02.679296 sudo[1697]: pam_unix(sudo:session): session closed for user root
Sep 4 15:45:02.685505 sudo[1696]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 4 15:45:02.685778 sudo[1696]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 15:45:02.693800 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 15:45:02.731697 augenrules[1719]: No rules
Sep 4 15:45:02.732677 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 15:45:02.733820 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 15:45:02.734843 sudo[1696]: pam_unix(sudo:session): session closed for user root
Sep 4 15:45:02.736029 sshd[1695]: Connection closed by 10.0.0.1 port 51722
Sep 4 15:45:02.736369 sshd-session[1692]: pam_unix(sshd:session): session closed for user core
Sep 4 15:45:02.747412 systemd[1]: sshd@5-10.0.0.39:22-10.0.0.1:51722.service: Deactivated successfully.
Sep 4 15:45:02.749892 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 15:45:02.750474 systemd-logind[1492]: Session 6 logged out. Waiting for processes to exit.
Sep 4 15:45:02.752500 systemd[1]: Started sshd@6-10.0.0.39:22-10.0.0.1:51730.service - OpenSSH per-connection server daemon (10.0.0.1:51730).
Sep 4 15:45:02.753248 systemd-logind[1492]: Removed session 6.
Sep 4 15:45:02.804719 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 51730 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU
Sep 4 15:45:02.805708 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 15:45:02.809115 systemd-logind[1492]: New session 7 of user core.
Sep 4 15:45:02.821868 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 15:45:02.871891 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 15:45:02.872132 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 15:45:03.132955 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 15:45:03.153979 (dockerd)[1752]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 15:45:03.346849 dockerd[1752]: time="2025-09-04T15:45:03.346786424Z" level=info msg="Starting up"
Sep 4 15:45:03.347547 dockerd[1752]: time="2025-09-04T15:45:03.347527904Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 4 15:45:03.357106 dockerd[1752]: time="2025-09-04T15:45:03.357077104Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 4 15:45:03.488168 systemd[1]: var-lib-docker-metacopy\x2dcheck3190853029-merged.mount: Deactivated successfully.
Sep 4 15:45:03.496955 dockerd[1752]: time="2025-09-04T15:45:03.496910344Z" level=info msg="Loading containers: start."
Sep 4 15:45:03.506772 kernel: Initializing XFRM netlink socket
Sep 4 15:45:03.685539 systemd-networkd[1422]: docker0: Link UP
Sep 4 15:45:03.688398 dockerd[1752]: time="2025-09-04T15:45:03.688361144Z" level=info msg="Loading containers: done."
Sep 4 15:45:03.703022 dockerd[1752]: time="2025-09-04T15:45:03.702978424Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 15:45:03.703136 dockerd[1752]: time="2025-09-04T15:45:03.703051544Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 4 15:45:03.703136 dockerd[1752]: time="2025-09-04T15:45:03.703121264Z" level=info msg="Initializing buildkit"
Sep 4 15:45:03.721984 dockerd[1752]: time="2025-09-04T15:45:03.721956024Z" level=info msg="Completed buildkit initialization"
Sep 4 15:45:03.728051 dockerd[1752]: time="2025-09-04T15:45:03.728015424Z" level=info msg="Daemon has completed initialization"
Sep 4 15:45:03.728331 dockerd[1752]: time="2025-09-04T15:45:03.728098984Z" level=info msg="API listen on /run/docker.sock"
Sep 4 15:45:03.728252 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 15:45:04.367515 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4220482310-merged.mount: Deactivated successfully.
Sep 4 15:45:04.510896 containerd[1513]: time="2025-09-04T15:45:04.510857424Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\""
Sep 4 15:45:05.190063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount713830746.mount: Deactivated successfully.
Sep 4 15:45:06.457787 containerd[1513]: time="2025-09-04T15:45:06.457723024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:45:06.458693 containerd[1513]: time="2025-09-04T15:45:06.458662304Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352615"
Sep 4 15:45:06.459792 containerd[1513]: time="2025-09-04T15:45:06.459414224Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:45:06.461870 containerd[1513]: time="2025-09-04T15:45:06.461829704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:45:06.462863 containerd[1513]: time="2025-09-04T15:45:06.462821424Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 1.95192244s"
Sep 4 15:45:06.462863 containerd[1513]: time="2025-09-04T15:45:06.462857784Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\""
Sep 4 15:45:06.463999 containerd[1513]: time="2025-09-04T15:45:06.463976184Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\""
Sep 4 15:45:07.901260 containerd[1513]: time="2025-09-04T15:45:07.901200984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:45:07.901688 containerd[1513]: time="2025-09-04T15:45:07.901645264Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536979"
Sep 4 15:45:07.902454 containerd[1513]: time="2025-09-04T15:45:07.902422104Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:45:07.905055 containerd[1513]: time="2025-09-04T15:45:07.905026864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:45:07.906233 containerd[1513]: time="2025-09-04T15:45:07.906020744Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.442012s"
Sep 4 15:45:07.906233 containerd[1513]: time="2025-09-04T15:45:07.906057024Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\""
Sep 4 15:45:07.906473 containerd[1513]: time="2025-09-04T15:45:07.906448384Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\""
Sep 4 15:45:09.158793 containerd[1513]: time="2025-09-04T15:45:09.158397784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:45:09.159193 containerd[1513]: time="2025-09-04T15:45:09.159129944Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292016"
Sep 4 15:45:09.159634 containerd[1513]: time="2025-09-04T15:45:09.159586464Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:45:09.162390 containerd[1513]: time="2025-09-04T15:45:09.161908384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:45:09.163615 containerd[1513]: time="2025-09-04T15:45:09.163588464Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.2571122s"
Sep 4 15:45:09.163667 containerd[1513]: time="2025-09-04T15:45:09.163622144Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\""
Sep 4 15:45:09.164088 containerd[1513]: time="2025-09-04T15:45:09.164041024Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\""
Sep 4 15:45:09.573851 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 15:45:09.575269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 15:45:09.694274 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 15:45:09.697426 (kubelet)[2041]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 15:45:09.732025 kubelet[2041]: E0904 15:45:09.731984 2041 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 15:45:09.735179 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 15:45:09.735309 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 15:45:09.735820 systemd[1]: kubelet.service: Consumed 138ms CPU time, 107M memory peak.
Sep 4 15:45:10.300267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount250007195.mount: Deactivated successfully.
Sep 4 15:45:10.678959 containerd[1513]: time="2025-09-04T15:45:10.678917664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:45:10.679533 containerd[1513]: time="2025-09-04T15:45:10.679511544Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199961"
Sep 4 15:45:10.680198 containerd[1513]: time="2025-09-04T15:45:10.680173824Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:45:10.682222 containerd[1513]: time="2025-09-04T15:45:10.682168344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:45:10.682840 containerd[1513]: time="2025-09-04T15:45:10.682811544Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 1.51874372s"
Sep 4 15:45:10.682840 containerd[1513]: time="2025-09-04T15:45:10.682843624Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\""
Sep 4 15:45:10.683342 containerd[1513]: time="2025-09-04T15:45:10.683310904Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 4 15:45:11.193346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3008246063.mount: Deactivated successfully.
Sep 4 15:45:11.990486 containerd[1513]: time="2025-09-04T15:45:11.990417504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:45:11.990899 containerd[1513]: time="2025-09-04T15:45:11.990855584Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Sep 4 15:45:11.991575 containerd[1513]: time="2025-09-04T15:45:11.991539344Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:45:11.994219 containerd[1513]: time="2025-09-04T15:45:11.994178104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:45:11.996155 containerd[1513]: time="2025-09-04T15:45:11.996036824Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.31268668s"
Sep 4 15:45:11.996155 containerd[1513]: time="2025-09-04T15:45:11.996073344Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Sep 4 15:45:11.996590 containerd[1513]: time="2025-09-04T15:45:11.996565464Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 4 15:45:12.421384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3345956680.mount: Deactivated successfully.
Sep 4 15:45:12.426044 containerd[1513]: time="2025-09-04T15:45:12.425994864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 15:45:12.426415 containerd[1513]: time="2025-09-04T15:45:12.426385664Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 4 15:45:12.427270 containerd[1513]: time="2025-09-04T15:45:12.427230744Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 15:45:12.429164 containerd[1513]: time="2025-09-04T15:45:12.429126544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 15:45:12.429691 containerd[1513]: time="2025-09-04T15:45:12.429656904Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 433.05872ms"
Sep 4 15:45:12.429728 containerd[1513]: time="2025-09-04T15:45:12.429690064Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 4 15:45:12.430239 containerd[1513]: time="2025-09-04T15:45:12.430213624Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 4 15:45:12.875389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2604936318.mount: Deactivated successfully.
Sep 4 15:45:15.063793 containerd[1513]: time="2025-09-04T15:45:15.063054984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:45:15.064156 containerd[1513]: time="2025-09-04T15:45:15.063974104Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465297"
Sep 4 15:45:15.064580 containerd[1513]: time="2025-09-04T15:45:15.064535624Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:45:15.228361 containerd[1513]: time="2025-09-04T15:45:15.228289064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:45:15.229509 containerd[1513]: time="2025-09-04T15:45:15.229472824Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.799225s"
Sep 4 15:45:15.229509 containerd[1513]: time="2025-09-04T15:45:15.229503464Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Sep 4 15:45:19.443560 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 15:45:19.443690 systemd[1]: kubelet.service: Consumed 138ms CPU time, 107M memory peak.
Sep 4 15:45:19.445440 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 15:45:19.463872 systemd[1]: Reload requested from client PID 2203 ('systemctl') (unit session-7.scope)...
Sep 4 15:45:19.463972 systemd[1]: Reloading...
Sep 4 15:45:19.533778 zram_generator::config[2252]: No configuration found.
Sep 4 15:45:19.709378 systemd[1]: Reloading finished in 245 ms.
Sep 4 15:45:19.791267 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 4 15:45:19.791348 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 4 15:45:19.791568 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 15:45:19.791608 systemd[1]: kubelet.service: Consumed 84ms CPU time, 95.1M memory peak.
Sep 4 15:45:19.792925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 15:45:19.899477 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 15:45:19.902807 (kubelet)[2291]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 15:45:19.935252 kubelet[2291]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 15:45:19.935252 kubelet[2291]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 4 15:45:19.935252 kubelet[2291]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 15:45:19.935553 kubelet[2291]: I0904 15:45:19.935287 2291 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 15:45:20.690769 kubelet[2291]: I0904 15:45:20.689648 2291 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 4 15:45:20.690769 kubelet[2291]: I0904 15:45:20.689679 2291 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 15:45:20.690769 kubelet[2291]: I0904 15:45:20.689901 2291 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 4 15:45:20.709561 kubelet[2291]: E0904 15:45:20.709529 2291 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 4 15:45:20.712439 kubelet[2291]: I0904 15:45:20.712404 2291 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 15:45:20.720909 kubelet[2291]: I0904 15:45:20.720886 2291 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 4 15:45:20.724285 kubelet[2291]: I0904 15:45:20.723705 2291 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 15:45:20.725608 kubelet[2291]: I0904 15:45:20.725562 2291 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 15:45:20.725842 kubelet[2291]: I0904 15:45:20.725606 2291 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 4 15:45:20.725980 kubelet[2291]: I0904 15:45:20.725913 2291 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 15:45:20.725980 kubelet[2291]: I0904 15:45:20.725923 2291 container_manager_linux.go:303] "Creating device plugin manager"
Sep 4 15:45:20.726122 kubelet[2291]: I0904 15:45:20.726103 2291 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 15:45:20.729851 kubelet[2291]: I0904 15:45:20.729826 2291 kubelet.go:480] "Attempting to sync node with API server"
Sep 4 15:45:20.729938 kubelet[2291]: I0904 15:45:20.729928 2291 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 15:45:20.730026 kubelet[2291]: I0904 15:45:20.730016 2291 kubelet.go:386] "Adding apiserver pod source"
Sep 4 15:45:20.730083 kubelet[2291]: I0904 15:45:20.730074 2291 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 15:45:20.731276 kubelet[2291]: E0904 15:45:20.731244 2291 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 4 15:45:20.732107 kubelet[2291]: I0904 15:45:20.732040 2291 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 4 15:45:20.732308 kubelet[2291]: E0904 15:45:20.732281 2291 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 4 15:45:20.732788 kubelet[2291]: I0904 15:45:20.732766 2291 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 4 15:45:20.732897 kubelet[2291]: W0904 15:45:20.732885 2291 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 4 15:45:20.734982 kubelet[2291]: I0904 15:45:20.734962 2291 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 4 15:45:20.735052 kubelet[2291]: I0904 15:45:20.735011 2291 server.go:1289] "Started kubelet"
Sep 4 15:45:20.735213 kubelet[2291]: I0904 15:45:20.735103 2291 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 15:45:20.738120 kubelet[2291]: I0904 15:45:20.737471 2291 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 15:45:20.738120 kubelet[2291]: I0904 15:45:20.737662 2291 server.go:317] "Adding debug handlers to kubelet server"
Sep 4 15:45:20.738120 kubelet[2291]: I0904 15:45:20.737772 2291 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 15:45:20.739803 kubelet[2291]: I0904 15:45:20.739203 2291 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 15:45:20.739803 kubelet[2291]: I0904 15:45:20.739307 2291 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 4 15:45:20.739985 kubelet[2291]: E0904 15:45:20.738700 2291 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.39:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.39:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18621ed8b5838678 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-04 15:45:20.734979704 +0000 UTC m=+0.827722561,LastTimestamp:2025-09-04 15:45:20.734979704 +0000 UTC m=+0.827722561,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 4 15:45:20.740078 kubelet[2291]: E0904 15:45:20.740011 2291 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 15:45:20.740078 kubelet[2291]: I0904 15:45:20.740039 2291 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 4 15:45:20.740459 kubelet[2291]: I0904 15:45:20.740194 2291 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 4 15:45:20.740459 kubelet[2291]: I0904 15:45:20.740248 2291 reconciler.go:26] "Reconciler: start to sync state"
Sep 4 15:45:20.741126 kubelet[2291]: E0904 15:45:20.740710 2291 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 4 15:45:20.741126 kubelet[2291]: E0904 15:45:20.740862 2291 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="200ms"
Sep 4 15:45:20.741251 kubelet[2291]: I0904 15:45:20.741145 2291 factory.go:223] Registration of the systemd container factory successfully
Sep 4 15:45:20.741276 kubelet[2291]: I0904 15:45:20.741247 2291 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 15:45:20.742150 kubelet[2291]: E0904 15:45:20.742118 2291 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 15:45:20.742248 kubelet[2291]: I0904 15:45:20.742149 2291 factory.go:223] Registration of the containerd container factory successfully
Sep 4 15:45:20.744109 kubelet[2291]: I0904 15:45:20.744072 2291 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 4 15:45:20.752647 kubelet[2291]: I0904 15:45:20.752296 2291 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 4 15:45:20.752647 kubelet[2291]: I0904 15:45:20.752311 2291 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 4 15:45:20.752647 kubelet[2291]: I0904 15:45:20.752338 2291 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 15:45:20.797156 kubelet[2291]: I0904 15:45:20.797130 2291 policy_none.go:49] "None policy: Start"
Sep 4 15:45:20.797297 kubelet[2291]: I0904 15:45:20.797285 2291 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 4 15:45:20.797387 kubelet[2291]: I0904 15:45:20.797377 2291 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 15:45:20.800947 kubelet[2291]: I0904 15:45:20.799971 2291 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 4 15:45:20.800947 kubelet[2291]: I0904 15:45:20.800946 2291 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 4 15:45:20.801049 kubelet[2291]: I0904 15:45:20.800977 2291 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 4 15:45:20.801049 kubelet[2291]: I0904 15:45:20.800984 2291 kubelet.go:2436] "Starting kubelet main sync loop" Sep 4 15:45:20.801049 kubelet[2291]: E0904 15:45:20.801027 2291 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 15:45:20.801883 kubelet[2291]: E0904 15:45:20.801851 2291 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 4 15:45:20.804430 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 15:45:20.818290 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 15:45:20.821337 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 15:45:20.832529 kubelet[2291]: E0904 15:45:20.832492 2291 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 4 15:45:20.832705 kubelet[2291]: I0904 15:45:20.832673 2291 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 15:45:20.832738 kubelet[2291]: I0904 15:45:20.832695 2291 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 15:45:20.833147 kubelet[2291]: I0904 15:45:20.832929 2291 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 15:45:20.833604 kubelet[2291]: E0904 15:45:20.833558 2291 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 4 15:45:20.833604 kubelet[2291]: E0904 15:45:20.833603 2291 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 4 15:45:20.911082 systemd[1]: Created slice kubepods-burstable-pod3b023661b5e3678365651c1ddb249ea5.slice - libcontainer container kubepods-burstable-pod3b023661b5e3678365651c1ddb249ea5.slice. Sep 4 15:45:20.933908 kubelet[2291]: I0904 15:45:20.933884 2291 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 15:45:20.934328 kubelet[2291]: E0904 15:45:20.934285 2291 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" Sep 4 15:45:20.936244 kubelet[2291]: E0904 15:45:20.936041 2291 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 15:45:20.938511 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. 
Sep 4 15:45:20.940477 kubelet[2291]: E0904 15:45:20.940188 2291 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 15:45:20.940567 kubelet[2291]: I0904 15:45:20.940532 2291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b023661b5e3678365651c1ddb249ea5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3b023661b5e3678365651c1ddb249ea5\") " pod="kube-system/kube-apiserver-localhost" Sep 4 15:45:20.940595 kubelet[2291]: I0904 15:45:20.940574 2291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b023661b5e3678365651c1ddb249ea5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3b023661b5e3678365651c1ddb249ea5\") " pod="kube-system/kube-apiserver-localhost" Sep 4 15:45:20.940615 kubelet[2291]: I0904 15:45:20.940593 2291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b023661b5e3678365651c1ddb249ea5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3b023661b5e3678365651c1ddb249ea5\") " pod="kube-system/kube-apiserver-localhost" Sep 4 15:45:20.940615 kubelet[2291]: I0904 15:45:20.940611 2291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 15:45:20.940659 kubelet[2291]: I0904 15:45:20.940625 2291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 15:45:20.940659 kubelet[2291]: I0904 15:45:20.940638 2291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 15:45:20.942140 kubelet[2291]: E0904 15:45:20.942029 2291 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="400ms" Sep 4 15:45:20.942628 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. 
Sep 4 15:45:20.944537 kubelet[2291]: E0904 15:45:20.943920 2291 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 15:45:21.041134 kubelet[2291]: I0904 15:45:21.041082 2291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 15:45:21.041235 kubelet[2291]: I0904 15:45:21.041144 2291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 4 15:45:21.041261 kubelet[2291]: I0904 15:45:21.041229 2291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 15:45:21.136355 kubelet[2291]: I0904 15:45:21.136325 2291 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 15:45:21.136737 kubelet[2291]: E0904 15:45:21.136696 2291 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" Sep 4 15:45:21.236856 kubelet[2291]: E0904 15:45:21.236759 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:45:21.237665 containerd[1513]: time="2025-09-04T15:45:21.237384384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3b023661b5e3678365651c1ddb249ea5,Namespace:kube-system,Attempt:0,}" Sep 4 15:45:21.241415 kubelet[2291]: E0904 15:45:21.241339 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:45:21.241839 containerd[1513]: time="2025-09-04T15:45:21.241696784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 4 15:45:21.245178 kubelet[2291]: E0904 15:45:21.245126 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:45:21.245515 containerd[1513]: time="2025-09-04T15:45:21.245487024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 4 15:45:21.267276 containerd[1513]: time="2025-09-04T15:45:21.266911744Z" level=info msg="connecting to shim f971534861ef41aebd8803060814fc1fea6f7e7a132ccdab55931c7f1e50d8fa" address="unix:///run/containerd/s/33222675fc8c9b8a32b47e9029b59ef4021e6b3d59a1974c7d99dcb6910ab513" namespace=k8s.io protocol=ttrpc version=3 Sep 4 15:45:21.267988 containerd[1513]: time="2025-09-04T15:45:21.267954304Z" level=info msg="connecting to shim 4ca9ce42f9e6f4ac3935420723ebdefbd2788fa7bb26bda23666eb2313a5d668" address="unix:///run/containerd/s/fd7c31527b8d471bee8b34f570dce977b04fd3da221c1a032c027b22f5986b90" namespace=k8s.io protocol=ttrpc version=3 Sep 4 15:45:21.278479 containerd[1513]: time="2025-09-04T15:45:21.277706144Z" level=info msg="connecting to shim 
96dbfa00a14fd3740d0768054c2d7222ecd96560666c92f59840e62929793f79" address="unix:///run/containerd/s/63937e0e12b02d04ea3ba2e6cf86fd40d57262eb82e346b31b8c9bf4afaa88e3" namespace=k8s.io protocol=ttrpc version=3 Sep 4 15:45:21.300960 systemd[1]: Started cri-containerd-4ca9ce42f9e6f4ac3935420723ebdefbd2788fa7bb26bda23666eb2313a5d668.scope - libcontainer container 4ca9ce42f9e6f4ac3935420723ebdefbd2788fa7bb26bda23666eb2313a5d668. Sep 4 15:45:21.304450 systemd[1]: Started cri-containerd-96dbfa00a14fd3740d0768054c2d7222ecd96560666c92f59840e62929793f79.scope - libcontainer container 96dbfa00a14fd3740d0768054c2d7222ecd96560666c92f59840e62929793f79. Sep 4 15:45:21.305903 systemd[1]: Started cri-containerd-f971534861ef41aebd8803060814fc1fea6f7e7a132ccdab55931c7f1e50d8fa.scope - libcontainer container f971534861ef41aebd8803060814fc1fea6f7e7a132ccdab55931c7f1e50d8fa. Sep 4 15:45:21.342623 kubelet[2291]: E0904 15:45:21.342583 2291 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="800ms" Sep 4 15:45:21.343970 containerd[1513]: time="2025-09-04T15:45:21.343905704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ca9ce42f9e6f4ac3935420723ebdefbd2788fa7bb26bda23666eb2313a5d668\"" Sep 4 15:45:21.344849 containerd[1513]: time="2025-09-04T15:45:21.344812904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3b023661b5e3678365651c1ddb249ea5,Namespace:kube-system,Attempt:0,} returns sandbox id \"f971534861ef41aebd8803060814fc1fea6f7e7a132ccdab55931c7f1e50d8fa\"" Sep 4 15:45:21.345521 kubelet[2291]: E0904 15:45:21.345251 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:45:21.345521 kubelet[2291]: E0904 15:45:21.345326 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:45:21.349720 containerd[1513]: time="2025-09-04T15:45:21.349681384Z" level=info msg="CreateContainer within sandbox \"f971534861ef41aebd8803060814fc1fea6f7e7a132ccdab55931c7f1e50d8fa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 15:45:21.350812 containerd[1513]: time="2025-09-04T15:45:21.350727104Z" level=info msg="CreateContainer within sandbox \"4ca9ce42f9e6f4ac3935420723ebdefbd2788fa7bb26bda23666eb2313a5d668\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 15:45:21.351611 containerd[1513]: time="2025-09-04T15:45:21.351578784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"96dbfa00a14fd3740d0768054c2d7222ecd96560666c92f59840e62929793f79\"" Sep 4 15:45:21.352449 kubelet[2291]: E0904 15:45:21.352420 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:45:21.360402 containerd[1513]: time="2025-09-04T15:45:21.360369784Z" level=info msg="CreateContainer within sandbox \"96dbfa00a14fd3740d0768054c2d7222ecd96560666c92f59840e62929793f79\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 15:45:21.362544 containerd[1513]: time="2025-09-04T15:45:21.362506704Z" level=info msg="Container edbf22b21eeb771589836fb2a41eee3b2b69a794978621c6da2b5727ed022cd2: CDI devices from CRI Config.CDIDevices: []" Sep 4 15:45:21.367702 containerd[1513]: time="2025-09-04T15:45:21.367655384Z" level=info msg="Container 
2e5bebdd41ad50a7416757136ae69f2882ddfb121ef398be1b7d071b23b0854c: CDI devices from CRI Config.CDIDevices: []" Sep 4 15:45:21.377259 containerd[1513]: time="2025-09-04T15:45:21.377217384Z" level=info msg="Container 6bbf1eba2fcefd4998c0c100edae0e14c529fdab57271dfa54929633e209ad72: CDI devices from CRI Config.CDIDevices: []" Sep 4 15:45:21.377723 containerd[1513]: time="2025-09-04T15:45:21.377681784Z" level=info msg="CreateContainer within sandbox \"f971534861ef41aebd8803060814fc1fea6f7e7a132ccdab55931c7f1e50d8fa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"edbf22b21eeb771589836fb2a41eee3b2b69a794978621c6da2b5727ed022cd2\"" Sep 4 15:45:21.381613 containerd[1513]: time="2025-09-04T15:45:21.381579344Z" level=info msg="StartContainer for \"edbf22b21eeb771589836fb2a41eee3b2b69a794978621c6da2b5727ed022cd2\"" Sep 4 15:45:21.382918 containerd[1513]: time="2025-09-04T15:45:21.382891624Z" level=info msg="connecting to shim edbf22b21eeb771589836fb2a41eee3b2b69a794978621c6da2b5727ed022cd2" address="unix:///run/containerd/s/33222675fc8c9b8a32b47e9029b59ef4021e6b3d59a1974c7d99dcb6910ab513" protocol=ttrpc version=3 Sep 4 15:45:21.384112 containerd[1513]: time="2025-09-04T15:45:21.384077544Z" level=info msg="CreateContainer within sandbox \"4ca9ce42f9e6f4ac3935420723ebdefbd2788fa7bb26bda23666eb2313a5d668\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2e5bebdd41ad50a7416757136ae69f2882ddfb121ef398be1b7d071b23b0854c\"" Sep 4 15:45:21.384707 containerd[1513]: time="2025-09-04T15:45:21.384564864Z" level=info msg="StartContainer for \"2e5bebdd41ad50a7416757136ae69f2882ddfb121ef398be1b7d071b23b0854c\"" Sep 4 15:45:21.388041 containerd[1513]: time="2025-09-04T15:45:21.387991704Z" level=info msg="connecting to shim 2e5bebdd41ad50a7416757136ae69f2882ddfb121ef398be1b7d071b23b0854c" address="unix:///run/containerd/s/fd7c31527b8d471bee8b34f570dce977b04fd3da221c1a032c027b22f5986b90" protocol=ttrpc version=3 Sep 4 
15:45:21.389239 containerd[1513]: time="2025-09-04T15:45:21.389191424Z" level=info msg="CreateContainer within sandbox \"96dbfa00a14fd3740d0768054c2d7222ecd96560666c92f59840e62929793f79\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6bbf1eba2fcefd4998c0c100edae0e14c529fdab57271dfa54929633e209ad72\"" Sep 4 15:45:21.389970 containerd[1513]: time="2025-09-04T15:45:21.389934664Z" level=info msg="StartContainer for \"6bbf1eba2fcefd4998c0c100edae0e14c529fdab57271dfa54929633e209ad72\"" Sep 4 15:45:21.391365 containerd[1513]: time="2025-09-04T15:45:21.391334384Z" level=info msg="connecting to shim 6bbf1eba2fcefd4998c0c100edae0e14c529fdab57271dfa54929633e209ad72" address="unix:///run/containerd/s/63937e0e12b02d04ea3ba2e6cf86fd40d57262eb82e346b31b8c9bf4afaa88e3" protocol=ttrpc version=3 Sep 4 15:45:21.407980 systemd[1]: Started cri-containerd-edbf22b21eeb771589836fb2a41eee3b2b69a794978621c6da2b5727ed022cd2.scope - libcontainer container edbf22b21eeb771589836fb2a41eee3b2b69a794978621c6da2b5727ed022cd2. Sep 4 15:45:21.412190 systemd[1]: Started cri-containerd-2e5bebdd41ad50a7416757136ae69f2882ddfb121ef398be1b7d071b23b0854c.scope - libcontainer container 2e5bebdd41ad50a7416757136ae69f2882ddfb121ef398be1b7d071b23b0854c. Sep 4 15:45:21.413467 systemd[1]: Started cri-containerd-6bbf1eba2fcefd4998c0c100edae0e14c529fdab57271dfa54929633e209ad72.scope - libcontainer container 6bbf1eba2fcefd4998c0c100edae0e14c529fdab57271dfa54929633e209ad72. 
Sep 4 15:45:21.452919 containerd[1513]: time="2025-09-04T15:45:21.452882784Z" level=info msg="StartContainer for \"edbf22b21eeb771589836fb2a41eee3b2b69a794978621c6da2b5727ed022cd2\" returns successfully" Sep 4 15:45:21.453376 containerd[1513]: time="2025-09-04T15:45:21.453308944Z" level=info msg="StartContainer for \"2e5bebdd41ad50a7416757136ae69f2882ddfb121ef398be1b7d071b23b0854c\" returns successfully" Sep 4 15:45:21.472548 containerd[1513]: time="2025-09-04T15:45:21.472493984Z" level=info msg="StartContainer for \"6bbf1eba2fcefd4998c0c100edae0e14c529fdab57271dfa54929633e209ad72\" returns successfully" Sep 4 15:45:21.539714 kubelet[2291]: I0904 15:45:21.539495 2291 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 15:45:21.540722 kubelet[2291]: E0904 15:45:21.540684 2291 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" Sep 4 15:45:21.808796 kubelet[2291]: E0904 15:45:21.808561 2291 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 15:45:21.808796 kubelet[2291]: E0904 15:45:21.808687 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:45:21.809728 kubelet[2291]: E0904 15:45:21.809679 2291 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 15:45:21.810841 kubelet[2291]: E0904 15:45:21.809813 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:45:21.812361 kubelet[2291]: E0904 15:45:21.812339 2291 kubelet.go:3305] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 15:45:21.812469 kubelet[2291]: E0904 15:45:21.812441 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:45:22.342332 kubelet[2291]: I0904 15:45:22.342290 2291 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 15:45:22.816639 kubelet[2291]: E0904 15:45:22.815576 2291 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 15:45:22.816639 kubelet[2291]: E0904 15:45:22.815696 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:45:22.816639 kubelet[2291]: E0904 15:45:22.816048 2291 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 15:45:22.816639 kubelet[2291]: E0904 15:45:22.816490 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:45:23.286129 kubelet[2291]: E0904 15:45:23.286090 2291 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 15:45:23.286256 kubelet[2291]: E0904 15:45:23.286238 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:45:23.388113 kubelet[2291]: E0904 15:45:23.388076 2291 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" 
err="nodes \"localhost\" not found" node="localhost" Sep 4 15:45:23.465928 kubelet[2291]: I0904 15:45:23.465874 2291 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 4 15:45:23.465928 kubelet[2291]: E0904 15:45:23.465918 2291 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 4 15:45:23.540727 kubelet[2291]: I0904 15:45:23.540622 2291 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 15:45:23.545546 kubelet[2291]: E0904 15:45:23.545493 2291 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 4 15:45:23.545546 kubelet[2291]: I0904 15:45:23.545521 2291 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 15:45:23.547186 kubelet[2291]: E0904 15:45:23.547150 2291 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 4 15:45:23.547186 kubelet[2291]: I0904 15:45:23.547173 2291 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 4 15:45:23.549100 kubelet[2291]: E0904 15:45:23.549069 2291 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 4 15:45:23.732188 kubelet[2291]: I0904 15:45:23.732148 2291 apiserver.go:52] "Watching apiserver" Sep 4 15:45:23.740653 kubelet[2291]: I0904 15:45:23.740622 2291 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 15:45:25.350702 
systemd[1]: Reload requested from client PID 2574 ('systemctl') (unit session-7.scope)... Sep 4 15:45:25.350718 systemd[1]: Reloading... Sep 4 15:45:25.415776 zram_generator::config[2618]: No configuration found. Sep 4 15:45:25.582414 systemd[1]: Reloading finished in 231 ms. Sep 4 15:45:25.611069 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 15:45:25.628554 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 15:45:25.628791 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 15:45:25.628845 systemd[1]: kubelet.service: Consumed 1.184s CPU time, 125.7M memory peak. Sep 4 15:45:25.630432 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 15:45:25.769120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 15:45:25.775525 (kubelet)[2660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 15:45:25.817631 kubelet[2660]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 15:45:25.817631 kubelet[2660]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 15:45:25.817631 kubelet[2660]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 15:45:25.817974 kubelet[2660]: I0904 15:45:25.817663 2660 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 15:45:25.822637 kubelet[2660]: I0904 15:45:25.822591 2660 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 4 15:45:25.822637 kubelet[2660]: I0904 15:45:25.822631 2660 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 15:45:25.822859 kubelet[2660]: I0904 15:45:25.822833 2660 server.go:956] "Client rotation is on, will bootstrap in background" Sep 4 15:45:25.823968 kubelet[2660]: I0904 15:45:25.823943 2660 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 4 15:45:25.826132 kubelet[2660]: I0904 15:45:25.826109 2660 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 15:45:25.830922 kubelet[2660]: I0904 15:45:25.830902 2660 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 4 15:45:25.833348 kubelet[2660]: I0904 15:45:25.833325 2660 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 15:45:25.833534 kubelet[2660]: I0904 15:45:25.833498 2660 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 15:45:25.833663 kubelet[2660]: I0904 15:45:25.833524 2660 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 15:45:25.833734 kubelet[2660]: I0904 15:45:25.833666 2660 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 15:45:25.833734 
kubelet[2660]: I0904 15:45:25.833674 2660 container_manager_linux.go:303] "Creating device plugin manager" Sep 4 15:45:25.833734 kubelet[2660]: I0904 15:45:25.833716 2660 state_mem.go:36] "Initialized new in-memory state store" Sep 4 15:45:25.834082 kubelet[2660]: I0904 15:45:25.833863 2660 kubelet.go:480] "Attempting to sync node with API server" Sep 4 15:45:25.834082 kubelet[2660]: I0904 15:45:25.833876 2660 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 15:45:25.834082 kubelet[2660]: I0904 15:45:25.833902 2660 kubelet.go:386] "Adding apiserver pod source" Sep 4 15:45:25.834082 kubelet[2660]: I0904 15:45:25.833916 2660 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 15:45:25.835943 kubelet[2660]: I0904 15:45:25.835923 2660 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 4 15:45:25.838777 kubelet[2660]: I0904 15:45:25.837564 2660 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 4 15:45:25.842809 kubelet[2660]: I0904 15:45:25.842019 2660 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 15:45:25.842809 kubelet[2660]: I0904 15:45:25.842053 2660 server.go:1289] "Started kubelet" Sep 4 15:45:25.843579 kubelet[2660]: I0904 15:45:25.843564 2660 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 15:45:25.851878 kubelet[2660]: I0904 15:45:25.851836 2660 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 15:45:25.852777 kubelet[2660]: I0904 15:45:25.852757 2660 server.go:317] "Adding debug handlers to kubelet server" Sep 4 15:45:25.854385 kubelet[2660]: I0904 15:45:25.854366 2660 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 15:45:25.857234 kubelet[2660]: I0904 15:45:25.857218 2660 desired_state_of_world_populator.go:150] "Desired state populator starts to run" 
Sep 4 15:45:25.857349 kubelet[2660]: E0904 15:45:25.856650 2660 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 15:45:25.857416 kubelet[2660]: I0904 15:45:25.856673 2660 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 4 15:45:25.857882 kubelet[2660]: I0904 15:45:25.854358 2660 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 15:45:25.858154 kubelet[2660]: I0904 15:45:25.858134 2660 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 15:45:25.858365 kubelet[2660]: I0904 15:45:25.858349 2660 reconciler.go:26] "Reconciler: start to sync state"
Sep 4 15:45:25.859764 kubelet[2660]: I0904 15:45:25.859734 2660 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 4 15:45:25.860800 kubelet[2660]: I0904 15:45:25.860780 2660 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 4 15:45:25.860882 kubelet[2660]: I0904 15:45:25.860872 2660 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 4 15:45:25.860940 kubelet[2660]: I0904 15:45:25.860931 2660 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 4 15:45:25.861004 kubelet[2660]: I0904 15:45:25.860996 2660 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 4 15:45:25.861785 kubelet[2660]: E0904 15:45:25.861119 2660 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 15:45:25.863841 kubelet[2660]: E0904 15:45:25.862633 2660 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 15:45:25.863841 kubelet[2660]: I0904 15:45:25.863184 2660 factory.go:223] Registration of the containerd container factory successfully
Sep 4 15:45:25.863841 kubelet[2660]: I0904 15:45:25.863197 2660 factory.go:223] Registration of the systemd container factory successfully
Sep 4 15:45:25.863841 kubelet[2660]: I0904 15:45:25.863277 2660 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 15:45:25.890499 kubelet[2660]: I0904 15:45:25.890480 2660 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 4 15:45:25.890619 kubelet[2660]: I0904 15:45:25.890605 2660 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 4 15:45:25.890687 kubelet[2660]: I0904 15:45:25.890679 2660 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 15:45:25.890874 kubelet[2660]: I0904 15:45:25.890855 2660 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 4 15:45:25.890941 kubelet[2660]: I0904 15:45:25.890922 2660 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 4 15:45:25.890994 kubelet[2660]: I0904 15:45:25.890985 2660 policy_none.go:49] "None policy: Start"
Sep 4 15:45:25.891044 kubelet[2660]: I0904 15:45:25.891036 2660 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 4 15:45:25.891094 kubelet[2660]: I0904 15:45:25.891086 2660 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 15:45:25.891225 kubelet[2660]: I0904 15:45:25.891211 2660 state_mem.go:75] "Updated machine memory state"
Sep 4 15:45:25.894370 kubelet[2660]: E0904 15:45:25.894351 2660 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 4 15:45:25.894733 kubelet[2660]: I0904 15:45:25.894719 2660 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 4 15:45:25.894733 kubelet[2660]: I0904 15:45:25.894776 2660 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 4 15:45:25.895155 kubelet[2660]: I0904 15:45:25.894978 2660 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 15:45:25.895728 kubelet[2660]: E0904 15:45:25.895694 2660 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 4 15:45:25.961800 kubelet[2660]: I0904 15:45:25.961764 2660 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 4 15:45:25.962093 kubelet[2660]: I0904 15:45:25.961833 2660 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 4 15:45:25.962181 kubelet[2660]: I0904 15:45:25.961888 2660 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 4 15:45:25.999711 kubelet[2660]: I0904 15:45:25.999678 2660 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 4 15:45:26.007219 kubelet[2660]: I0904 15:45:26.007195 2660 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 4 15:45:26.007353 kubelet[2660]: I0904 15:45:26.007264 2660 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 4 15:45:26.060044 kubelet[2660]: I0904 15:45:26.059986 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 15:45:26.060044 kubelet[2660]: I0904 15:45:26.060024 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 15:45:26.060044 kubelet[2660]: I0904 15:45:26.060043 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 15:45:26.060249 kubelet[2660]: I0904 15:45:26.060063 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 15:45:26.060249 kubelet[2660]: I0904 15:45:26.060086 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost"
Sep 4 15:45:26.060249 kubelet[2660]: I0904 15:45:26.060100 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b023661b5e3678365651c1ddb249ea5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3b023661b5e3678365651c1ddb249ea5\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 15:45:26.060249 kubelet[2660]: I0904 15:45:26.060112 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b023661b5e3678365651c1ddb249ea5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3b023661b5e3678365651c1ddb249ea5\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 15:45:26.060249 kubelet[2660]: I0904 15:45:26.060127 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b023661b5e3678365651c1ddb249ea5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3b023661b5e3678365651c1ddb249ea5\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 15:45:26.060421 kubelet[2660]: I0904 15:45:26.060141 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 15:45:26.267627 kubelet[2660]: E0904 15:45:26.267509 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:26.267627 kubelet[2660]: E0904 15:45:26.267543 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:26.267776 kubelet[2660]: E0904 15:45:26.267655 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:26.350862 sudo[2703]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 4 15:45:26.351146 sudo[2703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Sep 4 15:45:26.677258 sudo[2703]: pam_unix(sudo:session): session closed for user root
Sep 4 15:45:26.834567 kubelet[2660]: I0904 15:45:26.834344 2660 apiserver.go:52] "Watching apiserver"
Sep 4 15:45:26.858411 kubelet[2660]: I0904 15:45:26.858356 2660 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 4 15:45:26.876094 kubelet[2660]: I0904 15:45:26.876062 2660 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 4 15:45:26.876170 kubelet[2660]: I0904 15:45:26.876155 2660 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 4 15:45:26.877289 kubelet[2660]: I0904 15:45:26.877270 2660 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 4 15:45:26.907795 kubelet[2660]: E0904 15:45:26.907732 2660 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 4 15:45:26.907939 kubelet[2660]: E0904 15:45:26.907921 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:26.919143 kubelet[2660]: E0904 15:45:26.919095 2660 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 4 15:45:26.919258 kubelet[2660]: E0904 15:45:26.919238 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:26.922544 kubelet[2660]: E0904 15:45:26.921931 2660 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 4 15:45:26.922544 kubelet[2660]: E0904 15:45:26.922049 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:26.936496 kubelet[2660]: I0904 15:45:26.936361 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.936345464 podStartE2EDuration="1.936345464s" podCreationTimestamp="2025-09-04 15:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 15:45:26.922207344 +0000 UTC m=+1.141009521" watchObservedRunningTime="2025-09-04 15:45:26.936345464 +0000 UTC m=+1.155147641"
Sep 4 15:45:26.936496 kubelet[2660]: I0904 15:45:26.936472 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.936466944 podStartE2EDuration="1.936466944s" podCreationTimestamp="2025-09-04 15:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 15:45:26.936059424 +0000 UTC m=+1.154861561" watchObservedRunningTime="2025-09-04 15:45:26.936466944 +0000 UTC m=+1.155269121"
Sep 4 15:45:26.961175 kubelet[2660]: I0904 15:45:26.960959 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.960945304 podStartE2EDuration="1.960945304s" podCreationTimestamp="2025-09-04 15:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 15:45:26.949679824 +0000 UTC m=+1.168482001" watchObservedRunningTime="2025-09-04 15:45:26.960945304 +0000 UTC m=+1.179747441"
Sep 4 15:45:27.879128 kubelet[2660]: E0904 15:45:27.878379 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:27.879128 kubelet[2660]: E0904 15:45:27.878432 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:27.879128 kubelet[2660]: E0904 15:45:27.878941 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:28.170153 sudo[1732]: pam_unix(sudo:session): session closed for user root
Sep 4 15:45:28.172337 sshd[1731]: Connection closed by 10.0.0.1 port 51730
Sep 4 15:45:28.172023 sshd-session[1728]: pam_unix(sshd:session): session closed for user core
Sep 4 15:45:28.176563 systemd[1]: sshd@6-10.0.0.39:22-10.0.0.1:51730.service: Deactivated successfully.
Sep 4 15:45:28.180322 systemd[1]: session-7.scope: Deactivated successfully.
Sep 4 15:45:28.180550 systemd[1]: session-7.scope: Consumed 5.994s CPU time, 262.6M memory peak.
Sep 4 15:45:28.182026 systemd-logind[1492]: Session 7 logged out. Waiting for processes to exit.
Sep 4 15:45:28.184013 systemd-logind[1492]: Removed session 7.
Sep 4 15:45:28.880820 kubelet[2660]: E0904 15:45:28.880788 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:28.882138 kubelet[2660]: E0904 15:45:28.881333 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:31.447124 kubelet[2660]: I0904 15:45:31.446980 2660 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 4 15:45:31.448113 kubelet[2660]: I0904 15:45:31.447518 2660 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 4 15:45:31.448151 containerd[1513]: time="2025-09-04T15:45:31.447338850Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 4 15:45:31.554990 kubelet[2660]: E0904 15:45:31.554964 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:31.886319 kubelet[2660]: E0904 15:45:31.885355 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:32.418801 systemd[1]: Created slice kubepods-besteffort-pod91176b7b_39d8_4ac1_be7d_d9126dc3708b.slice - libcontainer container kubepods-besteffort-pod91176b7b_39d8_4ac1_be7d_d9126dc3708b.slice.
Sep 4 15:45:32.435073 systemd[1]: Created slice kubepods-burstable-pod27651188_3b4b_4eb2_8466_ba9fb7517b90.slice - libcontainer container kubepods-burstable-pod27651188_3b4b_4eb2_8466_ba9fb7517b90.slice.
Sep 4 15:45:32.505327 systemd[1]: Created slice kubepods-besteffort-pod02017f16_df60_4b45_844e_d767fef4ff7d.slice - libcontainer container kubepods-besteffort-pod02017f16_df60_4b45_844e_d767fef4ff7d.slice.
Sep 4 15:45:32.508043 kubelet[2660]: I0904 15:45:32.507991 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91176b7b-39d8-4ac1-be7d-d9126dc3708b-xtables-lock\") pod \"kube-proxy-rwkmt\" (UID: \"91176b7b-39d8-4ac1-be7d-d9126dc3708b\") " pod="kube-system/kube-proxy-rwkmt"
Sep 4 15:45:32.508043 kubelet[2660]: I0904 15:45:32.508034 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-cilium-run\") pod \"cilium-qpndf\" (UID: \"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " pod="kube-system/cilium-qpndf"
Sep 4 15:45:32.508348 kubelet[2660]: I0904 15:45:32.508058 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/27651188-3b4b-4eb2-8466-ba9fb7517b90-clustermesh-secrets\") pod \"cilium-qpndf\" (UID: \"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " pod="kube-system/cilium-qpndf"
Sep 4 15:45:32.508348 kubelet[2660]: I0904 15:45:32.508073 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-bpf-maps\") pod \"cilium-qpndf\" (UID: \"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " pod="kube-system/cilium-qpndf"
Sep 4 15:45:32.508348 kubelet[2660]: I0904 15:45:32.508087 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-cilium-cgroup\") pod \"cilium-qpndf\" (UID: \"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " pod="kube-system/cilium-qpndf"
Sep 4 15:45:32.508348 kubelet[2660]: I0904 15:45:32.508102 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-cni-path\") pod \"cilium-qpndf\" (UID: \"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " pod="kube-system/cilium-qpndf"
Sep 4 15:45:32.508348 kubelet[2660]: I0904 15:45:32.508116 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-etc-cni-netd\") pod \"cilium-qpndf\" (UID: \"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " pod="kube-system/cilium-qpndf"
Sep 4 15:45:32.508348 kubelet[2660]: I0904 15:45:32.508130 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-lib-modules\") pod \"cilium-qpndf\" (UID: \"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " pod="kube-system/cilium-qpndf"
Sep 4 15:45:32.508478 kubelet[2660]: I0904 15:45:32.508146 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/27651188-3b4b-4eb2-8466-ba9fb7517b90-hubble-tls\") pod \"cilium-qpndf\" (UID: \"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " pod="kube-system/cilium-qpndf"
Sep 4 15:45:32.508478 kubelet[2660]: I0904 15:45:32.508160 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91176b7b-39d8-4ac1-be7d-d9126dc3708b-lib-modules\") pod \"kube-proxy-rwkmt\" (UID: \"91176b7b-39d8-4ac1-be7d-d9126dc3708b\") " pod="kube-system/kube-proxy-rwkmt"
Sep 4 15:45:32.508478 kubelet[2660]: I0904 15:45:32.508174 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/91176b7b-39d8-4ac1-be7d-d9126dc3708b-kube-proxy\") pod \"kube-proxy-rwkmt\" (UID: \"91176b7b-39d8-4ac1-be7d-d9126dc3708b\") " pod="kube-system/kube-proxy-rwkmt"
Sep 4 15:45:32.508478 kubelet[2660]: I0904 15:45:32.508187 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-hostproc\") pod \"cilium-qpndf\" (UID: \"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " pod="kube-system/cilium-qpndf"
Sep 4 15:45:32.508478 kubelet[2660]: I0904 15:45:32.508201 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-host-proc-sys-net\") pod \"cilium-qpndf\" (UID: \"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " pod="kube-system/cilium-qpndf"
Sep 4 15:45:32.508478 kubelet[2660]: I0904 15:45:32.508217 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-host-proc-sys-kernel\") pod \"cilium-qpndf\" (UID: \"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " pod="kube-system/cilium-qpndf"
Sep 4 15:45:32.508589 kubelet[2660]: I0904 15:45:32.508232 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncqtw\" (UniqueName: \"kubernetes.io/projected/27651188-3b4b-4eb2-8466-ba9fb7517b90-kube-api-access-ncqtw\") pod \"cilium-qpndf\" (UID: \"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " pod="kube-system/cilium-qpndf"
Sep 4 15:45:32.508589 kubelet[2660]: I0904 15:45:32.508247 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhfjm\" (UniqueName: \"kubernetes.io/projected/91176b7b-39d8-4ac1-be7d-d9126dc3708b-kube-api-access-mhfjm\") pod \"kube-proxy-rwkmt\" (UID: \"91176b7b-39d8-4ac1-be7d-d9126dc3708b\") " pod="kube-system/kube-proxy-rwkmt"
Sep 4 15:45:32.508589 kubelet[2660]: I0904 15:45:32.508264 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-xtables-lock\") pod \"cilium-qpndf\" (UID: \"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " pod="kube-system/cilium-qpndf"
Sep 4 15:45:32.508589 kubelet[2660]: I0904 15:45:32.508279 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27651188-3b4b-4eb2-8466-ba9fb7517b90-cilium-config-path\") pod \"cilium-qpndf\" (UID: \"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " pod="kube-system/cilium-qpndf"
Sep 4 15:45:32.609032 kubelet[2660]: I0904 15:45:32.608993 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02017f16-df60-4b45-844e-d767fef4ff7d-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-ct98n\" (UID: \"02017f16-df60-4b45-844e-d767fef4ff7d\") " pod="kube-system/cilium-operator-6c4d7847fc-ct98n"
Sep 4 15:45:32.609159 kubelet[2660]: I0904 15:45:32.609136 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9nhr\" (UniqueName: \"kubernetes.io/projected/02017f16-df60-4b45-844e-d767fef4ff7d-kube-api-access-t9nhr\") pod \"cilium-operator-6c4d7847fc-ct98n\" (UID: \"02017f16-df60-4b45-844e-d767fef4ff7d\") " pod="kube-system/cilium-operator-6c4d7847fc-ct98n"
Sep 4 15:45:32.734859 kubelet[2660]: E0904 15:45:32.734448 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:32.735065 containerd[1513]: time="2025-09-04T15:45:32.735030453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rwkmt,Uid:91176b7b-39d8-4ac1-be7d-d9126dc3708b,Namespace:kube-system,Attempt:0,}"
Sep 4 15:45:32.739098 kubelet[2660]: E0904 15:45:32.739070 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:32.739916 containerd[1513]: time="2025-09-04T15:45:32.739847739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qpndf,Uid:27651188-3b4b-4eb2-8466-ba9fb7517b90,Namespace:kube-system,Attempt:0,}"
Sep 4 15:45:32.754350 containerd[1513]: time="2025-09-04T15:45:32.754307517Z" level=info msg="connecting to shim 6b5282e5db89666611679e6cb6875cad13e1874af1393ff70f4114119d26b57f" address="unix:///run/containerd/s/7bc5e26fc0ca21fc285df7328bacbc17c5c9cf1c43426a01bb79ec34413a3375" namespace=k8s.io protocol=ttrpc version=3
Sep 4 15:45:32.762752 containerd[1513]: time="2025-09-04T15:45:32.762693647Z" level=info msg="connecting to shim 06afc40f343141bf9c43ea456bfaf0a775507f804f43e6915e8b5239560c6fd8" address="unix:///run/containerd/s/2f0c65192d8faa6affd398ee4050a29da56eb973a93ff90f23728efc891dfd57" namespace=k8s.io protocol=ttrpc version=3
Sep 4 15:45:32.780919 systemd[1]: Started cri-containerd-6b5282e5db89666611679e6cb6875cad13e1874af1393ff70f4114119d26b57f.scope - libcontainer container 6b5282e5db89666611679e6cb6875cad13e1874af1393ff70f4114119d26b57f.
Sep 4 15:45:32.783405 systemd[1]: Started cri-containerd-06afc40f343141bf9c43ea456bfaf0a775507f804f43e6915e8b5239560c6fd8.scope - libcontainer container 06afc40f343141bf9c43ea456bfaf0a775507f804f43e6915e8b5239560c6fd8.
Sep 4 15:45:32.808953 kubelet[2660]: E0904 15:45:32.808926 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:32.809468 containerd[1513]: time="2025-09-04T15:45:32.809436105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-ct98n,Uid:02017f16-df60-4b45-844e-d767fef4ff7d,Namespace:kube-system,Attempt:0,}"
Sep 4 15:45:32.810154 containerd[1513]: time="2025-09-04T15:45:32.810074746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rwkmt,Uid:91176b7b-39d8-4ac1-be7d-d9126dc3708b,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b5282e5db89666611679e6cb6875cad13e1874af1393ff70f4114119d26b57f\""
Sep 4 15:45:32.810966 containerd[1513]: time="2025-09-04T15:45:32.810932307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qpndf,Uid:27651188-3b4b-4eb2-8466-ba9fb7517b90,Namespace:kube-system,Attempt:0,} returns sandbox id \"06afc40f343141bf9c43ea456bfaf0a775507f804f43e6915e8b5239560c6fd8\""
Sep 4 15:45:32.812187 kubelet[2660]: E0904 15:45:32.812164 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:32.812404 kubelet[2660]: E0904 15:45:32.812386 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:32.813987 containerd[1513]: time="2025-09-04T15:45:32.813954271Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 4 15:45:32.817570 containerd[1513]: time="2025-09-04T15:45:32.817544435Z" level=info msg="CreateContainer within sandbox \"6b5282e5db89666611679e6cb6875cad13e1874af1393ff70f4114119d26b57f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 4 15:45:32.829780 containerd[1513]: time="2025-09-04T15:45:32.829520530Z" level=info msg="Container 5362719c4054ba20c7eb67feaa64c7f089554a78051d03973da71268750472a6: CDI devices from CRI Config.CDIDevices: []"
Sep 4 15:45:32.832676 containerd[1513]: time="2025-09-04T15:45:32.832637014Z" level=info msg="connecting to shim acd14ed2c605d77ed7369fb6a1bc5faed5c9a087436c7bd33eaaa63840c2af11" address="unix:///run/containerd/s/18b3a3e0b61da2c240c1907861bb10d9beb28f8376094bf40bc4c1e40835f964" namespace=k8s.io protocol=ttrpc version=3
Sep 4 15:45:32.836552 containerd[1513]: time="2025-09-04T15:45:32.836518419Z" level=info msg="CreateContainer within sandbox \"6b5282e5db89666611679e6cb6875cad13e1874af1393ff70f4114119d26b57f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5362719c4054ba20c7eb67feaa64c7f089554a78051d03973da71268750472a6\""
Sep 4 15:45:32.837928 containerd[1513]: time="2025-09-04T15:45:32.837525540Z" level=info msg="StartContainer for \"5362719c4054ba20c7eb67feaa64c7f089554a78051d03973da71268750472a6\""
Sep 4 15:45:32.840638 containerd[1513]: time="2025-09-04T15:45:32.840606904Z" level=info msg="connecting to shim 5362719c4054ba20c7eb67feaa64c7f089554a78051d03973da71268750472a6" address="unix:///run/containerd/s/7bc5e26fc0ca21fc285df7328bacbc17c5c9cf1c43426a01bb79ec34413a3375" protocol=ttrpc version=3
Sep 4 15:45:32.855900 systemd[1]: Started cri-containerd-acd14ed2c605d77ed7369fb6a1bc5faed5c9a087436c7bd33eaaa63840c2af11.scope - libcontainer container acd14ed2c605d77ed7369fb6a1bc5faed5c9a087436c7bd33eaaa63840c2af11.
Sep 4 15:45:32.858978 systemd[1]: Started cri-containerd-5362719c4054ba20c7eb67feaa64c7f089554a78051d03973da71268750472a6.scope - libcontainer container 5362719c4054ba20c7eb67feaa64c7f089554a78051d03973da71268750472a6.
Sep 4 15:45:32.899319 kubelet[2660]: E0904 15:45:32.898105 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:32.905031 containerd[1513]: time="2025-09-04T15:45:32.904985544Z" level=info msg="StartContainer for \"5362719c4054ba20c7eb67feaa64c7f089554a78051d03973da71268750472a6\" returns successfully"
Sep 4 15:45:32.905912 containerd[1513]: time="2025-09-04T15:45:32.905875625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-ct98n,Uid:02017f16-df60-4b45-844e-d767fef4ff7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"acd14ed2c605d77ed7369fb6a1bc5faed5c9a087436c7bd33eaaa63840c2af11\""
Sep 4 15:45:32.907485 kubelet[2660]: E0904 15:45:32.907460 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:33.320348 kubelet[2660]: E0904 15:45:33.320309 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:33.900205 kubelet[2660]: E0904 15:45:33.900089 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:33.900805 kubelet[2660]: E0904 15:45:33.900649 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:33.909840 kubelet[2660]: I0904 15:45:33.909796 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rwkmt" podStartSLOduration=1.909782639 podStartE2EDuration="1.909782639s" podCreationTimestamp="2025-09-04 15:45:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 15:45:33.909736479 +0000 UTC m=+8.128538656" watchObservedRunningTime="2025-09-04 15:45:33.909782639 +0000 UTC m=+8.128584816"
Sep 4 15:45:34.901995 kubelet[2660]: E0904 15:45:34.901899 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:38.601336 kubelet[2660]: E0904 15:45:38.601302 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:38.908656 kubelet[2660]: E0904 15:45:38.908553 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:41.263690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2714576712.mount: Deactivated successfully.
Sep 4 15:45:42.052843 update_engine[1496]: I20250904 15:45:42.052778 1496 update_attempter.cc:509] Updating boot flags...
Sep 4 15:45:42.786774 containerd[1513]: time="2025-09-04T15:45:42.786717489Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:45:42.787201 containerd[1513]: time="2025-09-04T15:45:42.787178889Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Sep 4 15:45:42.788094 containerd[1513]: time="2025-09-04T15:45:42.788067690Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:45:42.789648 containerd[1513]: time="2025-09-04T15:45:42.789613891Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.97562354s"
Sep 4 15:45:42.789687 containerd[1513]: time="2025-09-04T15:45:42.789649731Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 4 15:45:42.795050 containerd[1513]: time="2025-09-04T15:45:42.795006774Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 4 15:45:42.805446 containerd[1513]: time="2025-09-04T15:45:42.805402541Z" level=info msg="CreateContainer within sandbox \"06afc40f343141bf9c43ea456bfaf0a775507f804f43e6915e8b5239560c6fd8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 4 15:45:42.811418 containerd[1513]: time="2025-09-04T15:45:42.811369025Z" level=info msg="Container 00f5ff113e28d6b7b0b548103a060d467759ce1f1d251ebe60eadf1e279dba58: CDI devices from CRI Config.CDIDevices: []"
Sep 4 15:45:42.814662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount467346190.mount: Deactivated successfully.
Sep 4 15:45:42.825185 containerd[1513]: time="2025-09-04T15:45:42.825136114Z" level=info msg="CreateContainer within sandbox \"06afc40f343141bf9c43ea456bfaf0a775507f804f43e6915e8b5239560c6fd8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"00f5ff113e28d6b7b0b548103a060d467759ce1f1d251ebe60eadf1e279dba58\""
Sep 4 15:45:42.825766 containerd[1513]: time="2025-09-04T15:45:42.825702914Z" level=info msg="StartContainer for \"00f5ff113e28d6b7b0b548103a060d467759ce1f1d251ebe60eadf1e279dba58\""
Sep 4 15:45:42.827100 containerd[1513]: time="2025-09-04T15:45:42.827048075Z" level=info msg="connecting to shim 00f5ff113e28d6b7b0b548103a060d467759ce1f1d251ebe60eadf1e279dba58" address="unix:///run/containerd/s/2f0c65192d8faa6affd398ee4050a29da56eb973a93ff90f23728efc891dfd57" protocol=ttrpc version=3
Sep 4 15:45:42.869942 systemd[1]: Started cri-containerd-00f5ff113e28d6b7b0b548103a060d467759ce1f1d251ebe60eadf1e279dba58.scope - libcontainer container 00f5ff113e28d6b7b0b548103a060d467759ce1f1d251ebe60eadf1e279dba58.
Sep 4 15:45:42.924993 containerd[1513]: time="2025-09-04T15:45:42.924957619Z" level=info msg="StartContainer for \"00f5ff113e28d6b7b0b548103a060d467759ce1f1d251ebe60eadf1e279dba58\" returns successfully"
Sep 4 15:45:42.938895 systemd[1]: cri-containerd-00f5ff113e28d6b7b0b548103a060d467759ce1f1d251ebe60eadf1e279dba58.scope: Deactivated successfully.
Sep 4 15:45:42.964692 containerd[1513]: time="2025-09-04T15:45:42.964639404Z" level=info msg="received exit event container_id:\"00f5ff113e28d6b7b0b548103a060d467759ce1f1d251ebe60eadf1e279dba58\" id:\"00f5ff113e28d6b7b0b548103a060d467759ce1f1d251ebe60eadf1e279dba58\" pid:3103 exited_at:{seconds:1757000742 nanos:960246722}"
Sep 4 15:45:42.964919 containerd[1513]: time="2025-09-04T15:45:42.964747405Z" level=info msg="TaskExit event in podsandbox handler container_id:\"00f5ff113e28d6b7b0b548103a060d467759ce1f1d251ebe60eadf1e279dba58\" id:\"00f5ff113e28d6b7b0b548103a060d467759ce1f1d251ebe60eadf1e279dba58\" pid:3103 exited_at:{seconds:1757000742 nanos:960246722}"
Sep 4 15:45:42.997699 kubelet[2660]: E0904 15:45:42.996816 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:43.004944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00f5ff113e28d6b7b0b548103a060d467759ce1f1d251ebe60eadf1e279dba58-rootfs.mount: Deactivated successfully.
Sep 4 15:45:44.001428 kubelet[2660]: E0904 15:45:44.001304 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:44.008774 containerd[1513]: time="2025-09-04T15:45:44.008700322Z" level=info msg="CreateContainer within sandbox \"06afc40f343141bf9c43ea456bfaf0a775507f804f43e6915e8b5239560c6fd8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 15:45:44.020352 containerd[1513]: time="2025-09-04T15:45:44.019759609Z" level=info msg="Container e3499443dd351b556d697cb22f796ed90a2ac6868d5f973e26992a679a701a08: CDI devices from CRI Config.CDIDevices: []"
Sep 4 15:45:44.023706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount423982865.mount: Deactivated successfully.
Sep 4 15:45:44.027495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1207273962.mount: Deactivated successfully.
Sep 4 15:45:44.029272 containerd[1513]: time="2025-09-04T15:45:44.029236934Z" level=info msg="CreateContainer within sandbox \"06afc40f343141bf9c43ea456bfaf0a775507f804f43e6915e8b5239560c6fd8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e3499443dd351b556d697cb22f796ed90a2ac6868d5f973e26992a679a701a08\""
Sep 4 15:45:44.029927 containerd[1513]: time="2025-09-04T15:45:44.029896894Z" level=info msg="StartContainer for \"e3499443dd351b556d697cb22f796ed90a2ac6868d5f973e26992a679a701a08\""
Sep 4 15:45:44.031860 containerd[1513]: time="2025-09-04T15:45:44.031819575Z" level=info msg="connecting to shim e3499443dd351b556d697cb22f796ed90a2ac6868d5f973e26992a679a701a08" address="unix:///run/containerd/s/2f0c65192d8faa6affd398ee4050a29da56eb973a93ff90f23728efc891dfd57" protocol=ttrpc version=3
Sep 4 15:45:44.056915 systemd[1]: Started cri-containerd-e3499443dd351b556d697cb22f796ed90a2ac6868d5f973e26992a679a701a08.scope - libcontainer container e3499443dd351b556d697cb22f796ed90a2ac6868d5f973e26992a679a701a08.
Sep 4 15:45:44.084070 containerd[1513]: time="2025-09-04T15:45:44.084009685Z" level=info msg="StartContainer for \"e3499443dd351b556d697cb22f796ed90a2ac6868d5f973e26992a679a701a08\" returns successfully"
Sep 4 15:45:44.097581 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 15:45:44.098104 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 15:45:44.098306 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 4 15:45:44.100173 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 15:45:44.101763 systemd[1]: cri-containerd-e3499443dd351b556d697cb22f796ed90a2ac6868d5f973e26992a679a701a08.scope: Deactivated successfully.
Sep 4 15:45:44.108366 containerd[1513]: time="2025-09-04T15:45:44.108328939Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e3499443dd351b556d697cb22f796ed90a2ac6868d5f973e26992a679a701a08\" id:\"e3499443dd351b556d697cb22f796ed90a2ac6868d5f973e26992a679a701a08\" pid:3157 exited_at:{seconds:1757000744 nanos:107979059}"
Sep 4 15:45:44.108680 containerd[1513]: time="2025-09-04T15:45:44.108657499Z" level=info msg="received exit event container_id:\"e3499443dd351b556d697cb22f796ed90a2ac6868d5f973e26992a679a701a08\" id:\"e3499443dd351b556d697cb22f796ed90a2ac6868d5f973e26992a679a701a08\" pid:3157 exited_at:{seconds:1757000744 nanos:107979059}"
Sep 4 15:45:44.125268 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 15:45:44.840840 containerd[1513]: time="2025-09-04T15:45:44.840783438Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:45:44.854436 containerd[1513]: time="2025-09-04T15:45:44.854372926Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Sep 4 15:45:44.868334 containerd[1513]: time="2025-09-04T15:45:44.868229814Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:45:44.869759 containerd[1513]: time="2025-09-04T15:45:44.869626254Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.07459132s"
Sep 4 15:45:44.869759 containerd[1513]: time="2025-09-04T15:45:44.869661134Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 4 15:45:44.881171 containerd[1513]: time="2025-09-04T15:45:44.881072541Z" level=info msg="CreateContainer within sandbox \"acd14ed2c605d77ed7369fb6a1bc5faed5c9a087436c7bd33eaaa63840c2af11\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 4 15:45:44.914554 containerd[1513]: time="2025-09-04T15:45:44.913964400Z" level=info msg="Container eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70: CDI devices from CRI Config.CDIDevices: []"
Sep 4 15:45:44.919066 containerd[1513]: time="2025-09-04T15:45:44.919027643Z" level=info msg="CreateContainer within sandbox \"acd14ed2c605d77ed7369fb6a1bc5faed5c9a087436c7bd33eaaa63840c2af11\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70\""
Sep 4 15:45:44.919857 containerd[1513]: time="2025-09-04T15:45:44.919831003Z" level=info msg="StartContainer for \"eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70\""
Sep 4 15:45:44.920950 containerd[1513]: time="2025-09-04T15:45:44.920925764Z" level=info msg="connecting to shim eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70" address="unix:///run/containerd/s/18b3a3e0b61da2c240c1907861bb10d9beb28f8376094bf40bc4c1e40835f964" protocol=ttrpc version=3
Sep 4 15:45:44.942965 systemd[1]: Started cri-containerd-eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70.scope - libcontainer container eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70.
Sep 4 15:45:44.966089 containerd[1513]: time="2025-09-04T15:45:44.966053189Z" level=info msg="StartContainer for \"eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70\" returns successfully"
Sep 4 15:45:45.005712 kubelet[2660]: E0904 15:45:45.005678 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:45.007008 kubelet[2660]: E0904 15:45:45.006937 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:45.013719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3499443dd351b556d697cb22f796ed90a2ac6868d5f973e26992a679a701a08-rootfs.mount: Deactivated successfully.
Sep 4 15:45:45.037109 containerd[1513]: time="2025-09-04T15:45:45.037068069Z" level=info msg="CreateContainer within sandbox \"06afc40f343141bf9c43ea456bfaf0a775507f804f43e6915e8b5239560c6fd8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 15:45:45.043776 kubelet[2660]: I0904 15:45:45.043336 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-ct98n" podStartSLOduration=1.081960926 podStartE2EDuration="13.043320552s" podCreationTimestamp="2025-09-04 15:45:32 +0000 UTC" firstStartedPulling="2025-09-04 15:45:32.909014149 +0000 UTC m=+7.127816326" lastFinishedPulling="2025-09-04 15:45:44.870373775 +0000 UTC m=+19.089175952" observedRunningTime="2025-09-04 15:45:45.042966872 +0000 UTC m=+19.261769089" watchObservedRunningTime="2025-09-04 15:45:45.043320552 +0000 UTC m=+19.262122729"
Sep 4 15:45:45.055813 containerd[1513]: time="2025-09-04T15:45:45.053230717Z" level=info msg="Container 6c758fef4519c84887a1ef8a0a8c40d074fc928707782f6774fc1b38f24f1360: CDI devices from CRI Config.CDIDevices: []"
Sep 4 15:45:45.070953 containerd[1513]: time="2025-09-04T15:45:45.070899167Z" level=info msg="CreateContainer within sandbox \"06afc40f343141bf9c43ea456bfaf0a775507f804f43e6915e8b5239560c6fd8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6c758fef4519c84887a1ef8a0a8c40d074fc928707782f6774fc1b38f24f1360\""
Sep 4 15:45:45.071832 containerd[1513]: time="2025-09-04T15:45:45.071797887Z" level=info msg="StartContainer for \"6c758fef4519c84887a1ef8a0a8c40d074fc928707782f6774fc1b38f24f1360\""
Sep 4 15:45:45.077521 containerd[1513]: time="2025-09-04T15:45:45.077482250Z" level=info msg="connecting to shim 6c758fef4519c84887a1ef8a0a8c40d074fc928707782f6774fc1b38f24f1360" address="unix:///run/containerd/s/2f0c65192d8faa6affd398ee4050a29da56eb973a93ff90f23728efc891dfd57" protocol=ttrpc version=3
Sep 4 15:45:45.109927 systemd[1]: Started cri-containerd-6c758fef4519c84887a1ef8a0a8c40d074fc928707782f6774fc1b38f24f1360.scope - libcontainer container 6c758fef4519c84887a1ef8a0a8c40d074fc928707782f6774fc1b38f24f1360.
Sep 4 15:45:45.209898 systemd[1]: cri-containerd-6c758fef4519c84887a1ef8a0a8c40d074fc928707782f6774fc1b38f24f1360.scope: Deactivated successfully.
Sep 4 15:45:45.210792 systemd[1]: cri-containerd-6c758fef4519c84887a1ef8a0a8c40d074fc928707782f6774fc1b38f24f1360.scope: Consumed 35ms CPU time, 4.4M memory peak, 2.3M read from disk.
Sep 4 15:45:45.211598 containerd[1513]: time="2025-09-04T15:45:45.211558082Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c758fef4519c84887a1ef8a0a8c40d074fc928707782f6774fc1b38f24f1360\" id:\"6c758fef4519c84887a1ef8a0a8c40d074fc928707782f6774fc1b38f24f1360\" pid:3247 exited_at:{seconds:1757000745 nanos:211054042}"
Sep 4 15:45:45.234038 containerd[1513]: time="2025-09-04T15:45:45.233898974Z" level=info msg="received exit event container_id:\"6c758fef4519c84887a1ef8a0a8c40d074fc928707782f6774fc1b38f24f1360\" id:\"6c758fef4519c84887a1ef8a0a8c40d074fc928707782f6774fc1b38f24f1360\" pid:3247 exited_at:{seconds:1757000745 nanos:211054042}"
Sep 4 15:45:45.241148 containerd[1513]: time="2025-09-04T15:45:45.241092098Z" level=info msg="StartContainer for \"6c758fef4519c84887a1ef8a0a8c40d074fc928707782f6774fc1b38f24f1360\" returns successfully"
Sep 4 15:45:45.255147 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c758fef4519c84887a1ef8a0a8c40d074fc928707782f6774fc1b38f24f1360-rootfs.mount: Deactivated successfully.
Sep 4 15:45:46.011448 kubelet[2660]: E0904 15:45:46.011415 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:46.012727 kubelet[2660]: E0904 15:45:46.011580 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:46.017312 containerd[1513]: time="2025-09-04T15:45:46.016927073Z" level=info msg="CreateContainer within sandbox \"06afc40f343141bf9c43ea456bfaf0a775507f804f43e6915e8b5239560c6fd8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 15:45:46.034132 containerd[1513]: time="2025-09-04T15:45:46.034066002Z" level=info msg="Container 8cf69d5ae1f7663a91a10076688beafd53e568ffb89acbf7a33c95d51a88c7a9: CDI devices from CRI Config.CDIDevices: []"
Sep 4 15:45:46.035659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3681955262.mount: Deactivated successfully.
Sep 4 15:45:46.053013 containerd[1513]: time="2025-09-04T15:45:46.052971851Z" level=info msg="CreateContainer within sandbox \"06afc40f343141bf9c43ea456bfaf0a775507f804f43e6915e8b5239560c6fd8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8cf69d5ae1f7663a91a10076688beafd53e568ffb89acbf7a33c95d51a88c7a9\""
Sep 4 15:45:46.053517 containerd[1513]: time="2025-09-04T15:45:46.053479492Z" level=info msg="StartContainer for \"8cf69d5ae1f7663a91a10076688beafd53e568ffb89acbf7a33c95d51a88c7a9\""
Sep 4 15:45:46.054488 containerd[1513]: time="2025-09-04T15:45:46.054409372Z" level=info msg="connecting to shim 8cf69d5ae1f7663a91a10076688beafd53e568ffb89acbf7a33c95d51a88c7a9" address="unix:///run/containerd/s/2f0c65192d8faa6affd398ee4050a29da56eb973a93ff90f23728efc891dfd57" protocol=ttrpc version=3
Sep 4 15:45:46.075896 systemd[1]: Started cri-containerd-8cf69d5ae1f7663a91a10076688beafd53e568ffb89acbf7a33c95d51a88c7a9.scope - libcontainer container 8cf69d5ae1f7663a91a10076688beafd53e568ffb89acbf7a33c95d51a88c7a9.
Sep 4 15:45:46.096666 systemd[1]: cri-containerd-8cf69d5ae1f7663a91a10076688beafd53e568ffb89acbf7a33c95d51a88c7a9.scope: Deactivated successfully.
Sep 4 15:45:46.099958 containerd[1513]: time="2025-09-04T15:45:46.099924155Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8cf69d5ae1f7663a91a10076688beafd53e568ffb89acbf7a33c95d51a88c7a9\" id:\"8cf69d5ae1f7663a91a10076688beafd53e568ffb89acbf7a33c95d51a88c7a9\" pid:3287 exited_at:{seconds:1757000746 nanos:99111195}"
Sep 4 15:45:46.100043 containerd[1513]: time="2025-09-04T15:45:46.099985115Z" level=info msg="received exit event container_id:\"8cf69d5ae1f7663a91a10076688beafd53e568ffb89acbf7a33c95d51a88c7a9\" id:\"8cf69d5ae1f7663a91a10076688beafd53e568ffb89acbf7a33c95d51a88c7a9\" pid:3287 exited_at:{seconds:1757000746 nanos:99111195}"
Sep 4 15:45:46.101059 containerd[1513]: time="2025-09-04T15:45:46.100850915Z" level=info msg="StartContainer for \"8cf69d5ae1f7663a91a10076688beafd53e568ffb89acbf7a33c95d51a88c7a9\" returns successfully"
Sep 4 15:45:46.119754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8cf69d5ae1f7663a91a10076688beafd53e568ffb89acbf7a33c95d51a88c7a9-rootfs.mount: Deactivated successfully.
Sep 4 15:45:47.016962 kubelet[2660]: E0904 15:45:47.016915 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:47.023142 containerd[1513]: time="2025-09-04T15:45:47.023093298Z" level=info msg="CreateContainer within sandbox \"06afc40f343141bf9c43ea456bfaf0a775507f804f43e6915e8b5239560c6fd8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 15:45:47.033459 containerd[1513]: time="2025-09-04T15:45:47.031270302Z" level=info msg="Container bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3: CDI devices from CRI Config.CDIDevices: []"
Sep 4 15:45:47.041121 containerd[1513]: time="2025-09-04T15:45:47.041080107Z" level=info msg="CreateContainer within sandbox \"06afc40f343141bf9c43ea456bfaf0a775507f804f43e6915e8b5239560c6fd8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3\""
Sep 4 15:45:47.042897 containerd[1513]: time="2025-09-04T15:45:47.042840627Z" level=info msg="StartContainer for \"bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3\""
Sep 4 15:45:47.043912 containerd[1513]: time="2025-09-04T15:45:47.043875468Z" level=info msg="connecting to shim bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3" address="unix:///run/containerd/s/2f0c65192d8faa6affd398ee4050a29da56eb973a93ff90f23728efc891dfd57" protocol=ttrpc version=3
Sep 4 15:45:47.065892 systemd[1]: Started cri-containerd-bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3.scope - libcontainer container bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3.
Sep 4 15:45:47.097484 containerd[1513]: time="2025-09-04T15:45:47.097448693Z" level=info msg="StartContainer for \"bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3\" returns successfully"
Sep 4 15:45:47.192183 containerd[1513]: time="2025-09-04T15:45:47.192145178Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3\" id:\"f712b90f802d99ffb4762af1b0ce80a492b05fe275b2edaf18017746ded4b7da\" pid:3355 exited_at:{seconds:1757000747 nanos:190399017}"
Sep 4 15:45:47.202383 kubelet[2660]: I0904 15:45:47.202323 2660 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 4 15:45:47.251123 systemd[1]: Created slice kubepods-burstable-pod3105eacd_2f17_4f7d_b367_fdd55e21cd9e.slice - libcontainer container kubepods-burstable-pod3105eacd_2f17_4f7d_b367_fdd55e21cd9e.slice.
Sep 4 15:45:47.258080 systemd[1]: Created slice kubepods-burstable-pod6871766d_488d_4871_88d0_f13b6746923b.slice - libcontainer container kubepods-burstable-pod6871766d_488d_4871_88d0_f13b6746923b.slice.
Sep 4 15:45:47.429272 kubelet[2660]: I0904 15:45:47.429168 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6871766d-488d-4871-88d0-f13b6746923b-config-volume\") pod \"coredns-674b8bbfcf-xtfcs\" (UID: \"6871766d-488d-4871-88d0-f13b6746923b\") " pod="kube-system/coredns-674b8bbfcf-xtfcs"
Sep 4 15:45:47.429272 kubelet[2660]: I0904 15:45:47.429224 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76xv6\" (UniqueName: \"kubernetes.io/projected/3105eacd-2f17-4f7d-b367-fdd55e21cd9e-kube-api-access-76xv6\") pod \"coredns-674b8bbfcf-jj8lw\" (UID: \"3105eacd-2f17-4f7d-b367-fdd55e21cd9e\") " pod="kube-system/coredns-674b8bbfcf-jj8lw"
Sep 4 15:45:47.429272 kubelet[2660]: I0904 15:45:47.429252 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3105eacd-2f17-4f7d-b367-fdd55e21cd9e-config-volume\") pod \"coredns-674b8bbfcf-jj8lw\" (UID: \"3105eacd-2f17-4f7d-b367-fdd55e21cd9e\") " pod="kube-system/coredns-674b8bbfcf-jj8lw"
Sep 4 15:45:47.429441 kubelet[2660]: I0904 15:45:47.429271 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tx98\" (UniqueName: \"kubernetes.io/projected/6871766d-488d-4871-88d0-f13b6746923b-kube-api-access-6tx98\") pod \"coredns-674b8bbfcf-xtfcs\" (UID: \"6871766d-488d-4871-88d0-f13b6746923b\") " pod="kube-system/coredns-674b8bbfcf-xtfcs"
Sep 4 15:45:47.555059 kubelet[2660]: E0904 15:45:47.554949 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:47.556005 containerd[1513]: time="2025-09-04T15:45:47.555958549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jj8lw,Uid:3105eacd-2f17-4f7d-b367-fdd55e21cd9e,Namespace:kube-system,Attempt:0,}"
Sep 4 15:45:47.563311 kubelet[2660]: E0904 15:45:47.562824 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:47.564561 containerd[1513]: time="2025-09-04T15:45:47.564224513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xtfcs,Uid:6871766d-488d-4871-88d0-f13b6746923b,Namespace:kube-system,Attempt:0,}"
Sep 4 15:45:48.023977 kubelet[2660]: E0904 15:45:48.023945 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:48.040199 kubelet[2660]: I0904 15:45:48.040146 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qpndf" podStartSLOduration=6.058939912 podStartE2EDuration="16.040133016s" podCreationTimestamp="2025-09-04 15:45:32 +0000 UTC" firstStartedPulling="2025-09-04 15:45:32.81362167 +0000 UTC m=+7.032423807" lastFinishedPulling="2025-09-04 15:45:42.794814734 +0000 UTC m=+17.013616911" observedRunningTime="2025-09-04 15:45:48.039774296 +0000 UTC m=+22.258576473" watchObservedRunningTime="2025-09-04 15:45:48.040133016 +0000 UTC m=+22.258935193"
Sep 4 15:45:49.025948 kubelet[2660]: E0904 15:45:49.025907 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:49.118014 systemd-networkd[1422]: cilium_host: Link UP
Sep 4 15:45:49.118419 systemd-networkd[1422]: cilium_net: Link UP
Sep 4 15:45:49.118759 systemd-networkd[1422]: cilium_net: Gained carrier
Sep 4 15:45:49.118962 systemd-networkd[1422]: cilium_host: Gained carrier
Sep 4 15:45:49.193191 systemd-networkd[1422]: cilium_vxlan: Link UP
Sep 4 15:45:49.193198 systemd-networkd[1422]: cilium_vxlan: Gained carrier
Sep 4 15:45:49.448802 kernel: NET: Registered PF_ALG protocol family
Sep 4 15:45:49.924945 systemd-networkd[1422]: cilium_net: Gained IPv6LL
Sep 4 15:45:49.998698 systemd-networkd[1422]: lxc_health: Link UP
Sep 4 15:45:49.999346 systemd-networkd[1422]: lxc_health: Gained carrier
Sep 4 15:45:50.030638 kubelet[2660]: E0904 15:45:50.030595 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:50.115908 systemd-networkd[1422]: cilium_host: Gained IPv6LL
Sep 4 15:45:50.244284 systemd-networkd[1422]: cilium_vxlan: Gained IPv6LL
Sep 4 15:45:50.596420 systemd-networkd[1422]: lxcc13b298ece54: Link UP
Sep 4 15:45:50.596843 kernel: eth0: renamed from tmpe0c27
Sep 4 15:45:50.597481 systemd-networkd[1422]: lxcc13b298ece54: Gained carrier
Sep 4 15:45:50.612214 systemd-networkd[1422]: lxcf778b0e1d9d8: Link UP
Sep 4 15:45:50.618860 kernel: eth0: renamed from tmpd9f0c
Sep 4 15:45:50.620457 systemd-networkd[1422]: lxcf778b0e1d9d8: Gained carrier
Sep 4 15:45:51.029836 kubelet[2660]: E0904 15:45:51.029656 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:51.587914 systemd-networkd[1422]: lxc_health: Gained IPv6LL
Sep 4 15:45:52.031737 kubelet[2660]: E0904 15:45:52.031626 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:52.228876 systemd-networkd[1422]: lxcf778b0e1d9d8: Gained IPv6LL
Sep 4 15:45:52.420037 systemd-networkd[1422]: lxcc13b298ece54: Gained IPv6LL
Sep 4 15:45:53.033789 kubelet[2660]: E0904 15:45:53.033245 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:54.112087 containerd[1513]: time="2025-09-04T15:45:54.112035380Z" level=info msg="connecting to shim e0c273d28757f64704da45036b0d3beca3cf0502c67ff57965260e5e078643e6" address="unix:///run/containerd/s/35106272b3deb4594866bdacbee9388289ac85c509a941a385aa19eb5cdfeb4f" namespace=k8s.io protocol=ttrpc version=3
Sep 4 15:45:54.117162 containerd[1513]: time="2025-09-04T15:45:54.117127102Z" level=info msg="connecting to shim d9f0c48a6cb13ee985acce78629134e5ddc37efacec112197ab7c28958a9786f" address="unix:///run/containerd/s/d82d806dc9e35b2d01211ee3e0acfc1d9ab50fe157ce178853b9c4673dc214e8" namespace=k8s.io protocol=ttrpc version=3
Sep 4 15:45:54.155917 systemd[1]: Started cri-containerd-d9f0c48a6cb13ee985acce78629134e5ddc37efacec112197ab7c28958a9786f.scope - libcontainer container d9f0c48a6cb13ee985acce78629134e5ddc37efacec112197ab7c28958a9786f.
Sep 4 15:45:54.159189 systemd[1]: Started cri-containerd-e0c273d28757f64704da45036b0d3beca3cf0502c67ff57965260e5e078643e6.scope - libcontainer container e0c273d28757f64704da45036b0d3beca3cf0502c67ff57965260e5e078643e6.
Sep 4 15:45:54.166871 systemd-resolved[1225]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 4 15:45:54.175034 systemd-resolved[1225]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 4 15:45:54.189267 containerd[1513]: time="2025-09-04T15:45:54.189218083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xtfcs,Uid:6871766d-488d-4871-88d0-f13b6746923b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9f0c48a6cb13ee985acce78629134e5ddc37efacec112197ab7c28958a9786f\""
Sep 4 15:45:54.190864 kubelet[2660]: E0904 15:45:54.190343 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:54.196669 containerd[1513]: time="2025-09-04T15:45:54.196635126Z" level=info msg="CreateContainer within sandbox \"d9f0c48a6cb13ee985acce78629134e5ddc37efacec112197ab7c28958a9786f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 15:45:54.204972 containerd[1513]: time="2025-09-04T15:45:54.204936648Z" level=info msg="Container c6de9a97ddfcce89e666828adbd375522ebe5c7d3965796de3b92202ca8697c5: CDI devices from CRI Config.CDIDevices: []"
Sep 4 15:45:54.205554 containerd[1513]: time="2025-09-04T15:45:54.205517408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jj8lw,Uid:3105eacd-2f17-4f7d-b367-fdd55e21cd9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0c273d28757f64704da45036b0d3beca3cf0502c67ff57965260e5e078643e6\""
Sep 4 15:45:54.206268 kubelet[2660]: E0904 15:45:54.206245 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:45:54.211525 containerd[1513]: time="2025-09-04T15:45:54.211484370Z" level=info msg="CreateContainer within sandbox \"d9f0c48a6cb13ee985acce78629134e5ddc37efacec112197ab7c28958a9786f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c6de9a97ddfcce89e666828adbd375522ebe5c7d3965796de3b92202ca8697c5\""
Sep 4 15:45:54.211909 containerd[1513]: time="2025-09-04T15:45:54.211880010Z" level=info msg="StartContainer for \"c6de9a97ddfcce89e666828adbd375522ebe5c7d3965796de3b92202ca8697c5\""
Sep 4 15:45:54.212513 containerd[1513]: time="2025-09-04T15:45:54.212481810Z" level=info msg="CreateContainer within sandbox \"e0c273d28757f64704da45036b0d3beca3cf0502c67ff57965260e5e078643e6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 15:45:54.213635 containerd[1513]: time="2025-09-04T15:45:54.213035891Z" level=info msg="connecting to shim c6de9a97ddfcce89e666828adbd375522ebe5c7d3965796de3b92202ca8697c5" address="unix:///run/containerd/s/d82d806dc9e35b2d01211ee3e0acfc1d9ab50fe157ce178853b9c4673dc214e8" protocol=ttrpc version=3
Sep 4 15:45:54.220733 containerd[1513]: time="2025-09-04T15:45:54.220702853Z" level=info msg="Container e4539dc9950c5880507474a18f6ac3cb202b7fb20d83a33fc433a17a82568eaf: CDI devices from CRI Config.CDIDevices: []"
Sep 4 15:45:54.225882 containerd[1513]: time="2025-09-04T15:45:54.225849934Z" level=info msg="CreateContainer within sandbox \"e0c273d28757f64704da45036b0d3beca3cf0502c67ff57965260e5e078643e6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e4539dc9950c5880507474a18f6ac3cb202b7fb20d83a33fc433a17a82568eaf\""
Sep 4 15:45:54.228395 containerd[1513]: time="2025-09-04T15:45:54.226820015Z" level=info msg="StartContainer for \"e4539dc9950c5880507474a18f6ac3cb202b7fb20d83a33fc433a17a82568eaf\""
Sep 4 15:45:54.228395 containerd[1513]: time="2025-09-04T15:45:54.227574455Z" level=info msg="connecting to shim e4539dc9950c5880507474a18f6ac3cb202b7fb20d83a33fc433a17a82568eaf" address="unix:///run/containerd/s/35106272b3deb4594866bdacbee9388289ac85c509a941a385aa19eb5cdfeb4f" protocol=ttrpc version=3
Sep 4 15:45:54.235382 systemd[1]: Started cri-containerd-c6de9a97ddfcce89e666828adbd375522ebe5c7d3965796de3b92202ca8697c5.scope - libcontainer container c6de9a97ddfcce89e666828adbd375522ebe5c7d3965796de3b92202ca8697c5.
Sep 4 15:45:54.262897 systemd[1]: Started cri-containerd-e4539dc9950c5880507474a18f6ac3cb202b7fb20d83a33fc433a17a82568eaf.scope - libcontainer container e4539dc9950c5880507474a18f6ac3cb202b7fb20d83a33fc433a17a82568eaf.
Sep 4 15:45:54.296264 containerd[1513]: time="2025-09-04T15:45:54.296194555Z" level=info msg="StartContainer for \"c6de9a97ddfcce89e666828adbd375522ebe5c7d3965796de3b92202ca8697c5\" returns successfully"
Sep 4 15:45:54.305108 containerd[1513]: time="2025-09-04T15:45:54.305020638Z" level=info msg="StartContainer for \"e4539dc9950c5880507474a18f6ac3cb202b7fb20d83a33fc433a17a82568eaf\" returns successfully"
Sep 4 15:45:54.374999 systemd[1]: Started sshd@7-10.0.0.39:22-10.0.0.1:36266.service - OpenSSH per-connection server daemon (10.0.0.1:36266).
Sep 4 15:45:54.439650 sshd[4009]: Accepted publickey for core from 10.0.0.1 port 36266 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU
Sep 4 15:45:54.440876 sshd-session[4009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 15:45:54.445563 systemd-logind[1492]: New session 8 of user core.
Sep 4 15:45:54.463972 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 4 15:45:54.583227 sshd[4013]: Connection closed by 10.0.0.1 port 36266
Sep 4 15:45:54.583534 sshd-session[4009]: pam_unix(sshd:session): session closed for user core
Sep 4 15:45:54.587804 systemd-logind[1492]: Session 8 logged out. Waiting for processes to exit.
Sep 4 15:45:54.588004 systemd[1]: sshd@7-10.0.0.39:22-10.0.0.1:36266.service: Deactivated successfully.
Sep 4 15:45:54.589615 systemd[1]: session-8.scope: Deactivated successfully.
Sep 4 15:45:54.591265 systemd-logind[1492]: Removed session 8.
Sep 4 15:45:55.052901 kubelet[2660]: E0904 15:45:55.052874 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:45:55.053087 kubelet[2660]: E0904 15:45:55.052932 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:45:55.074979 kubelet[2660]: I0904 15:45:55.074919 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xtfcs" podStartSLOduration=23.074899468 podStartE2EDuration="23.074899468s" podCreationTimestamp="2025-09-04 15:45:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 15:45:55.074550227 +0000 UTC m=+29.293352444" watchObservedRunningTime="2025-09-04 15:45:55.074899468 +0000 UTC m=+29.293701645" Sep 4 15:45:55.087574 kubelet[2660]: I0904 15:45:55.086034 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jj8lw" podStartSLOduration=23.086018991 podStartE2EDuration="23.086018991s" podCreationTimestamp="2025-09-04 15:45:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 15:45:55.08535167 +0000 UTC m=+29.304153847" watchObservedRunningTime="2025-09-04 15:45:55.086018991 +0000 UTC m=+29.304821168" Sep 4 15:45:59.599620 systemd[1]: Started sshd@8-10.0.0.39:22-10.0.0.1:36270.service - OpenSSH per-connection server daemon (10.0.0.1:36270). 
Sep 4 15:45:59.654319 sshd[4034]: Accepted publickey for core from 10.0.0.1 port 36270 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 15:45:59.655625 sshd-session[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 15:45:59.660281 systemd-logind[1492]: New session 9 of user core. Sep 4 15:45:59.667909 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 15:45:59.795780 sshd[4037]: Connection closed by 10.0.0.1 port 36270 Sep 4 15:45:59.795722 sshd-session[4034]: pam_unix(sshd:session): session closed for user core Sep 4 15:45:59.800606 systemd[1]: sshd@8-10.0.0.39:22-10.0.0.1:36270.service: Deactivated successfully. Sep 4 15:45:59.803907 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 15:45:59.806516 systemd-logind[1492]: Session 9 logged out. Waiting for processes to exit. Sep 4 15:45:59.808250 systemd-logind[1492]: Removed session 9. Sep 4 15:46:04.814932 systemd[1]: Started sshd@9-10.0.0.39:22-10.0.0.1:45500.service - OpenSSH per-connection server daemon (10.0.0.1:45500). Sep 4 15:46:04.875093 sshd[4054]: Accepted publickey for core from 10.0.0.1 port 45500 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 15:46:04.876511 sshd-session[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 15:46:04.880505 systemd-logind[1492]: New session 10 of user core. Sep 4 15:46:04.889921 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 15:46:05.001796 sshd[4057]: Connection closed by 10.0.0.1 port 45500 Sep 4 15:46:05.002293 sshd-session[4054]: pam_unix(sshd:session): session closed for user core Sep 4 15:46:05.006157 systemd-logind[1492]: Session 10 logged out. Waiting for processes to exit. Sep 4 15:46:05.006410 systemd[1]: sshd@9-10.0.0.39:22-10.0.0.1:45500.service: Deactivated successfully. Sep 4 15:46:05.008019 systemd[1]: session-10.scope: Deactivated successfully. 
Sep 4 15:46:05.009634 systemd-logind[1492]: Removed session 10. Sep 4 15:46:05.051727 kubelet[2660]: E0904 15:46:05.051127 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:46:05.052407 kubelet[2660]: E0904 15:46:05.052165 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:46:05.066983 kubelet[2660]: E0904 15:46:05.066795 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:46:05.066983 kubelet[2660]: E0904 15:46:05.066934 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:46:10.013994 systemd[1]: Started sshd@10-10.0.0.39:22-10.0.0.1:48224.service - OpenSSH per-connection server daemon (10.0.0.1:48224). Sep 4 15:46:10.072505 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 48224 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 15:46:10.073568 sshd-session[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 15:46:10.077676 systemd-logind[1492]: New session 11 of user core. Sep 4 15:46:10.081866 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 15:46:10.191207 sshd[4082]: Connection closed by 10.0.0.1 port 48224 Sep 4 15:46:10.191930 sshd-session[4079]: pam_unix(sshd:session): session closed for user core Sep 4 15:46:10.202813 systemd[1]: sshd@10-10.0.0.39:22-10.0.0.1:48224.service: Deactivated successfully. Sep 4 15:46:10.204303 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 15:46:10.205027 systemd-logind[1492]: Session 11 logged out. 
Waiting for processes to exit. Sep 4 15:46:10.207515 systemd[1]: Started sshd@11-10.0.0.39:22-10.0.0.1:48228.service - OpenSSH per-connection server daemon (10.0.0.1:48228). Sep 4 15:46:10.208138 systemd-logind[1492]: Removed session 11. Sep 4 15:46:10.265190 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 48228 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 15:46:10.266408 sshd-session[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 15:46:10.270164 systemd-logind[1492]: New session 12 of user core. Sep 4 15:46:10.279908 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 15:46:10.424131 sshd[4100]: Connection closed by 10.0.0.1 port 48228 Sep 4 15:46:10.424573 sshd-session[4097]: pam_unix(sshd:session): session closed for user core Sep 4 15:46:10.439096 systemd[1]: sshd@11-10.0.0.39:22-10.0.0.1:48228.service: Deactivated successfully. Sep 4 15:46:10.441457 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 15:46:10.442808 systemd-logind[1492]: Session 12 logged out. Waiting for processes to exit. Sep 4 15:46:10.447991 systemd[1]: Started sshd@12-10.0.0.39:22-10.0.0.1:48240.service - OpenSSH per-connection server daemon (10.0.0.1:48240). Sep 4 15:46:10.449870 systemd-logind[1492]: Removed session 12. Sep 4 15:46:10.517315 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 48240 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 15:46:10.518834 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 15:46:10.523494 systemd-logind[1492]: New session 13 of user core. Sep 4 15:46:10.532890 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 4 15:46:10.645806 sshd[4114]: Connection closed by 10.0.0.1 port 48240 Sep 4 15:46:10.645856 sshd-session[4111]: pam_unix(sshd:session): session closed for user core Sep 4 15:46:10.649596 systemd[1]: sshd@12-10.0.0.39:22-10.0.0.1:48240.service: Deactivated successfully. Sep 4 15:46:10.652315 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 15:46:10.653073 systemd-logind[1492]: Session 13 logged out. Waiting for processes to exit. Sep 4 15:46:10.653970 systemd-logind[1492]: Removed session 13. Sep 4 15:46:15.660786 systemd[1]: Started sshd@13-10.0.0.39:22-10.0.0.1:48250.service - OpenSSH per-connection server daemon (10.0.0.1:48250). Sep 4 15:46:15.722243 sshd[4127]: Accepted publickey for core from 10.0.0.1 port 48250 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 15:46:15.723277 sshd-session[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 15:46:15.726833 systemd-logind[1492]: New session 14 of user core. Sep 4 15:46:15.740890 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 15:46:15.852340 sshd[4130]: Connection closed by 10.0.0.1 port 48250 Sep 4 15:46:15.852864 sshd-session[4127]: pam_unix(sshd:session): session closed for user core Sep 4 15:46:15.856839 systemd[1]: sshd@13-10.0.0.39:22-10.0.0.1:48250.service: Deactivated successfully. Sep 4 15:46:15.860177 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 15:46:15.860900 systemd-logind[1492]: Session 14 logged out. Waiting for processes to exit. Sep 4 15:46:15.863017 systemd-logind[1492]: Removed session 14. Sep 4 15:46:20.875083 systemd[1]: Started sshd@14-10.0.0.39:22-10.0.0.1:40552.service - OpenSSH per-connection server daemon (10.0.0.1:40552). 
Sep 4 15:46:20.940295 sshd[4143]: Accepted publickey for core from 10.0.0.1 port 40552 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 15:46:20.941615 sshd-session[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 15:46:20.945800 systemd-logind[1492]: New session 15 of user core. Sep 4 15:46:20.959915 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 15:46:21.071933 sshd[4146]: Connection closed by 10.0.0.1 port 40552 Sep 4 15:46:21.072266 sshd-session[4143]: pam_unix(sshd:session): session closed for user core Sep 4 15:46:21.076417 systemd[1]: sshd@14-10.0.0.39:22-10.0.0.1:40552.service: Deactivated successfully. Sep 4 15:46:21.079154 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 15:46:21.079747 systemd-logind[1492]: Session 15 logged out. Waiting for processes to exit. Sep 4 15:46:21.080615 systemd-logind[1492]: Removed session 15. Sep 4 15:46:26.090209 systemd[1]: Started sshd@15-10.0.0.39:22-10.0.0.1:40560.service - OpenSSH per-connection server daemon (10.0.0.1:40560). Sep 4 15:46:26.153815 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 40560 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 15:46:26.155023 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 15:46:26.158703 systemd-logind[1492]: New session 16 of user core. Sep 4 15:46:26.167890 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 15:46:26.276469 sshd[4166]: Connection closed by 10.0.0.1 port 40560 Sep 4 15:46:26.276315 sshd-session[4163]: pam_unix(sshd:session): session closed for user core Sep 4 15:46:26.287817 systemd[1]: sshd@15-10.0.0.39:22-10.0.0.1:40560.service: Deactivated successfully. Sep 4 15:46:26.289286 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 15:46:26.291228 systemd-logind[1492]: Session 16 logged out. Waiting for processes to exit. 
Sep 4 15:46:26.293433 systemd[1]: Started sshd@16-10.0.0.39:22-10.0.0.1:40564.service - OpenSSH per-connection server daemon (10.0.0.1:40564). Sep 4 15:46:26.294374 systemd-logind[1492]: Removed session 16. Sep 4 15:46:26.352803 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 40564 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 15:46:26.353957 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 15:46:26.357973 systemd-logind[1492]: New session 17 of user core. Sep 4 15:46:26.368899 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 15:46:26.545521 sshd[4182]: Connection closed by 10.0.0.1 port 40564 Sep 4 15:46:26.547060 sshd-session[4179]: pam_unix(sshd:session): session closed for user core Sep 4 15:46:26.555907 systemd[1]: sshd@16-10.0.0.39:22-10.0.0.1:40564.service: Deactivated successfully. Sep 4 15:46:26.559014 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 15:46:26.559972 systemd-logind[1492]: Session 17 logged out. Waiting for processes to exit. Sep 4 15:46:26.562789 systemd[1]: Started sshd@17-10.0.0.39:22-10.0.0.1:40574.service - OpenSSH per-connection server daemon (10.0.0.1:40574). Sep 4 15:46:26.563546 systemd-logind[1492]: Removed session 17. Sep 4 15:46:26.620062 sshd[4194]: Accepted publickey for core from 10.0.0.1 port 40574 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 15:46:26.621326 sshd-session[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 15:46:26.626015 systemd-logind[1492]: New session 18 of user core. Sep 4 15:46:26.636912 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 15:46:27.226543 sshd[4197]: Connection closed by 10.0.0.1 port 40574 Sep 4 15:46:27.226959 sshd-session[4194]: pam_unix(sshd:session): session closed for user core Sep 4 15:46:27.239695 systemd[1]: sshd@17-10.0.0.39:22-10.0.0.1:40574.service: Deactivated successfully. 
Sep 4 15:46:27.245536 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 15:46:27.249706 systemd-logind[1492]: Session 18 logged out. Waiting for processes to exit. Sep 4 15:46:27.255100 systemd[1]: Started sshd@18-10.0.0.39:22-10.0.0.1:40588.service - OpenSSH per-connection server daemon (10.0.0.1:40588). Sep 4 15:46:27.255825 systemd-logind[1492]: Removed session 18. Sep 4 15:46:27.313508 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 40588 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 15:46:27.315224 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 15:46:27.319524 systemd-logind[1492]: New session 19 of user core. Sep 4 15:46:27.330912 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 15:46:27.617621 sshd[4220]: Connection closed by 10.0.0.1 port 40588 Sep 4 15:46:27.618121 sshd-session[4217]: pam_unix(sshd:session): session closed for user core Sep 4 15:46:27.628929 systemd[1]: sshd@18-10.0.0.39:22-10.0.0.1:40588.service: Deactivated successfully. Sep 4 15:46:27.631786 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 15:46:27.633844 systemd-logind[1492]: Session 19 logged out. Waiting for processes to exit. Sep 4 15:46:27.638165 systemd[1]: Started sshd@19-10.0.0.39:22-10.0.0.1:40590.service - OpenSSH per-connection server daemon (10.0.0.1:40590). Sep 4 15:46:27.640616 systemd-logind[1492]: Removed session 19. Sep 4 15:46:27.694117 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 40590 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 15:46:27.696030 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 15:46:27.700086 systemd-logind[1492]: New session 20 of user core. Sep 4 15:46:27.708896 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 4 15:46:27.820801 sshd[4235]: Connection closed by 10.0.0.1 port 40590 Sep 4 15:46:27.820843 sshd-session[4232]: pam_unix(sshd:session): session closed for user core Sep 4 15:46:27.825095 systemd[1]: sshd@19-10.0.0.39:22-10.0.0.1:40590.service: Deactivated successfully. Sep 4 15:46:27.828276 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 15:46:27.829124 systemd-logind[1492]: Session 20 logged out. Waiting for processes to exit. Sep 4 15:46:27.830306 systemd-logind[1492]: Removed session 20. Sep 4 15:46:32.831929 systemd[1]: Started sshd@20-10.0.0.39:22-10.0.0.1:35114.service - OpenSSH per-connection server daemon (10.0.0.1:35114). Sep 4 15:46:32.893759 sshd[4250]: Accepted publickey for core from 10.0.0.1 port 35114 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 15:46:32.895103 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 15:46:32.898669 systemd-logind[1492]: New session 21 of user core. Sep 4 15:46:32.904901 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 15:46:33.014630 sshd[4254]: Connection closed by 10.0.0.1 port 35114 Sep 4 15:46:33.015930 sshd-session[4250]: pam_unix(sshd:session): session closed for user core Sep 4 15:46:33.019658 systemd[1]: sshd@20-10.0.0.39:22-10.0.0.1:35114.service: Deactivated successfully. Sep 4 15:46:33.023139 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 15:46:33.024545 systemd-logind[1492]: Session 21 logged out. Waiting for processes to exit. Sep 4 15:46:33.026440 systemd-logind[1492]: Removed session 21. Sep 4 15:46:38.033616 systemd[1]: Started sshd@21-10.0.0.39:22-10.0.0.1:35120.service - OpenSSH per-connection server daemon (10.0.0.1:35120). 
Sep 4 15:46:38.104722 sshd[4269]: Accepted publickey for core from 10.0.0.1 port 35120 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 15:46:38.106021 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 15:46:38.110656 systemd-logind[1492]: New session 22 of user core. Sep 4 15:46:38.121922 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 15:46:38.242373 sshd[4272]: Connection closed by 10.0.0.1 port 35120 Sep 4 15:46:38.242881 sshd-session[4269]: pam_unix(sshd:session): session closed for user core Sep 4 15:46:38.252954 systemd[1]: sshd@21-10.0.0.39:22-10.0.0.1:35120.service: Deactivated successfully. Sep 4 15:46:38.254619 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 15:46:38.255326 systemd-logind[1492]: Session 22 logged out. Waiting for processes to exit. Sep 4 15:46:38.257687 systemd[1]: Started sshd@22-10.0.0.39:22-10.0.0.1:35126.service - OpenSSH per-connection server daemon (10.0.0.1:35126). Sep 4 15:46:38.258191 systemd-logind[1492]: Removed session 22. Sep 4 15:46:38.333911 sshd[4286]: Accepted publickey for core from 10.0.0.1 port 35126 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 15:46:38.335375 sshd-session[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 15:46:38.340574 systemd-logind[1492]: New session 23 of user core. Sep 4 15:46:38.352974 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 4 15:46:40.436374 containerd[1513]: time="2025-09-04T15:46:40.435067078Z" level=info msg="StopContainer for \"eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70\" with timeout 30 (s)" Sep 4 15:46:40.439771 containerd[1513]: time="2025-09-04T15:46:40.439554897Z" level=info msg="Stop container \"eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70\" with signal terminated" Sep 4 15:46:40.453207 systemd[1]: cri-containerd-eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70.scope: Deactivated successfully. Sep 4 15:46:40.457719 containerd[1513]: time="2025-09-04T15:46:40.457669492Z" level=info msg="received exit event container_id:\"eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70\" id:\"eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70\" pid:3213 exited_at:{seconds:1757000800 nanos:457407891}" Sep 4 15:46:40.458939 containerd[1513]: time="2025-09-04T15:46:40.458907297Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70\" id:\"eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70\" pid:3213 exited_at:{seconds:1757000800 nanos:457407891}" Sep 4 15:46:40.482342 containerd[1513]: time="2025-09-04T15:46:40.482278954Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 15:46:40.489080 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70-rootfs.mount: Deactivated successfully. 
Sep 4 15:46:40.490041 containerd[1513]: time="2025-09-04T15:46:40.489817706Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3\" id:\"b0f57d7c4a4f57f4e731ef7ddf6bb59174653d06a31bfb2d9147f03d5938cc3a\" pid:4317 exited_at:{seconds:1757000800 nanos:488528340}" Sep 4 15:46:40.495044 containerd[1513]: time="2025-09-04T15:46:40.494912887Z" level=info msg="StopContainer for \"bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3\" with timeout 2 (s)" Sep 4 15:46:40.495493 containerd[1513]: time="2025-09-04T15:46:40.495458929Z" level=info msg="Stop container \"bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3\" with signal terminated" Sep 4 15:46:40.510064 systemd-networkd[1422]: lxc_health: Link DOWN Sep 4 15:46:40.510071 systemd-networkd[1422]: lxc_health: Lost carrier Sep 4 15:46:40.512787 containerd[1513]: time="2025-09-04T15:46:40.512675761Z" level=info msg="StopContainer for \"eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70\" returns successfully" Sep 4 15:46:40.516467 containerd[1513]: time="2025-09-04T15:46:40.516324616Z" level=info msg="StopPodSandbox for \"acd14ed2c605d77ed7369fb6a1bc5faed5c9a087436c7bd33eaaa63840c2af11\"" Sep 4 15:46:40.526999 containerd[1513]: time="2025-09-04T15:46:40.526939620Z" level=info msg="Container to stop \"eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 15:46:40.528186 systemd[1]: cri-containerd-bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3.scope: Deactivated successfully. Sep 4 15:46:40.528878 systemd[1]: cri-containerd-bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3.scope: Consumed 6.078s CPU time, 121.6M memory peak, 136K read from disk, 12.9M written to disk. 
Sep 4 15:46:40.529524 containerd[1513]: time="2025-09-04T15:46:40.529414870Z" level=info msg="received exit event container_id:\"bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3\" id:\"bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3\" pid:3324 exited_at:{seconds:1757000800 nanos:529143069}" Sep 4 15:46:40.529698 containerd[1513]: time="2025-09-04T15:46:40.529497871Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3\" id:\"bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3\" pid:3324 exited_at:{seconds:1757000800 nanos:529143069}" Sep 4 15:46:40.535313 systemd[1]: cri-containerd-acd14ed2c605d77ed7369fb6a1bc5faed5c9a087436c7bd33eaaa63840c2af11.scope: Deactivated successfully. Sep 4 15:46:40.538066 containerd[1513]: time="2025-09-04T15:46:40.538034746Z" level=info msg="TaskExit event in podsandbox handler container_id:\"acd14ed2c605d77ed7369fb6a1bc5faed5c9a087436c7bd33eaaa63840c2af11\" id:\"acd14ed2c605d77ed7369fb6a1bc5faed5c9a087436c7bd33eaaa63840c2af11\" pid:2882 exit_status:137 exited_at:{seconds:1757000800 nanos:537677825}" Sep 4 15:46:40.551424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3-rootfs.mount: Deactivated successfully. Sep 4 15:46:40.566555 containerd[1513]: time="2025-09-04T15:46:40.566364304Z" level=info msg="StopContainer for \"bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3\" returns successfully" Sep 4 15:46:40.567550 containerd[1513]: time="2025-09-04T15:46:40.567515949Z" level=info msg="StopPodSandbox for \"06afc40f343141bf9c43ea456bfaf0a775507f804f43e6915e8b5239560c6fd8\"" Sep 4 15:46:40.572115 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-acd14ed2c605d77ed7369fb6a1bc5faed5c9a087436c7bd33eaaa63840c2af11-rootfs.mount: Deactivated successfully. 
Sep 4 15:46:40.579204 containerd[1513]: time="2025-09-04T15:46:40.567817670Z" level=info msg="Container to stop \"e3499443dd351b556d697cb22f796ed90a2ac6868d5f973e26992a679a701a08\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 15:46:40.579204 containerd[1513]: time="2025-09-04T15:46:40.579038156Z" level=info msg="Container to stop \"6c758fef4519c84887a1ef8a0a8c40d074fc928707782f6774fc1b38f24f1360\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 15:46:40.579204 containerd[1513]: time="2025-09-04T15:46:40.579053677Z" level=info msg="Container to stop \"8cf69d5ae1f7663a91a10076688beafd53e568ffb89acbf7a33c95d51a88c7a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 15:46:40.579204 containerd[1513]: time="2025-09-04T15:46:40.579064597Z" level=info msg="Container to stop \"00f5ff113e28d6b7b0b548103a060d467759ce1f1d251ebe60eadf1e279dba58\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 15:46:40.579204 containerd[1513]: time="2025-09-04T15:46:40.579080517Z" level=info msg="Container to stop \"bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 15:46:40.579485 containerd[1513]: time="2025-09-04T15:46:40.576694547Z" level=info msg="shim disconnected" id=acd14ed2c605d77ed7369fb6a1bc5faed5c9a087436c7bd33eaaa63840c2af11 namespace=k8s.io Sep 4 15:46:40.579485 containerd[1513]: time="2025-09-04T15:46:40.579433598Z" level=warning msg="cleaning up after shim disconnected" id=acd14ed2c605d77ed7369fb6a1bc5faed5c9a087436c7bd33eaaa63840c2af11 namespace=k8s.io Sep 4 15:46:40.579485 containerd[1513]: time="2025-09-04T15:46:40.579460198Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 15:46:40.587383 systemd[1]: cri-containerd-06afc40f343141bf9c43ea456bfaf0a775507f804f43e6915e8b5239560c6fd8.scope: Deactivated successfully. 
Sep 4 15:46:40.604115 containerd[1513]: time="2025-09-04T15:46:40.603985140Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06afc40f343141bf9c43ea456bfaf0a775507f804f43e6915e8b5239560c6fd8\" id:\"06afc40f343141bf9c43ea456bfaf0a775507f804f43e6915e8b5239560c6fd8\" pid:2818 exit_status:137 exited_at:{seconds:1757000800 nanos:592949534}" Sep 4 15:46:40.605562 containerd[1513]: time="2025-09-04T15:46:40.605264305Z" level=info msg="TearDown network for sandbox \"acd14ed2c605d77ed7369fb6a1bc5faed5c9a087436c7bd33eaaa63840c2af11\" successfully" Sep 4 15:46:40.605562 containerd[1513]: time="2025-09-04T15:46:40.605554467Z" level=info msg="StopPodSandbox for \"acd14ed2c605d77ed7369fb6a1bc5faed5c9a087436c7bd33eaaa63840c2af11\" returns successfully" Sep 4 15:46:40.606579 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-acd14ed2c605d77ed7369fb6a1bc5faed5c9a087436c7bd33eaaa63840c2af11-shm.mount: Deactivated successfully. Sep 4 15:46:40.615520 containerd[1513]: time="2025-09-04T15:46:40.615210907Z" level=info msg="received exit event sandbox_id:\"acd14ed2c605d77ed7369fb6a1bc5faed5c9a087436c7bd33eaaa63840c2af11\" exit_status:137 exited_at:{seconds:1757000800 nanos:537677825}" Sep 4 15:46:40.619039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06afc40f343141bf9c43ea456bfaf0a775507f804f43e6915e8b5239560c6fd8-rootfs.mount: Deactivated successfully. 
Sep 4 15:46:40.653391 containerd[1513]: time="2025-09-04T15:46:40.653349385Z" level=info msg="shim disconnected" id=06afc40f343141bf9c43ea456bfaf0a775507f804f43e6915e8b5239560c6fd8 namespace=k8s.io Sep 4 15:46:40.653575 containerd[1513]: time="2025-09-04T15:46:40.653381985Z" level=warning msg="cleaning up after shim disconnected" id=06afc40f343141bf9c43ea456bfaf0a775507f804f43e6915e8b5239560c6fd8 namespace=k8s.io Sep 4 15:46:40.653575 containerd[1513]: time="2025-09-04T15:46:40.653411345Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 15:46:40.653668 containerd[1513]: time="2025-09-04T15:46:40.653579506Z" level=info msg="received exit event sandbox_id:\"06afc40f343141bf9c43ea456bfaf0a775507f804f43e6915e8b5239560c6fd8\" exit_status:137 exited_at:{seconds:1757000800 nanos:592949534}" Sep 4 15:46:40.653788 containerd[1513]: time="2025-09-04T15:46:40.653763347Z" level=info msg="TearDown network for sandbox \"06afc40f343141bf9c43ea456bfaf0a775507f804f43e6915e8b5239560c6fd8\" successfully" Sep 4 15:46:40.653840 containerd[1513]: time="2025-09-04T15:46:40.653787747Z" level=info msg="StopPodSandbox for \"06afc40f343141bf9c43ea456bfaf0a775507f804f43e6915e8b5239560c6fd8\" returns successfully" Sep 4 15:46:40.744639 kubelet[2660]: I0904 15:46:40.744041 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-cilium-run\") pod \"27651188-3b4b-4eb2-8466-ba9fb7517b90\" (UID: \"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " Sep 4 15:46:40.744639 kubelet[2660]: I0904 15:46:40.744090 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-etc-cni-netd\") pod \"27651188-3b4b-4eb2-8466-ba9fb7517b90\" (UID: \"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " Sep 4 15:46:40.744639 kubelet[2660]: I0904 15:46:40.744132 2660 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-xtables-lock\") pod \"27651188-3b4b-4eb2-8466-ba9fb7517b90\" (UID: \"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " Sep 4 15:46:40.745771 kubelet[2660]: I0904 15:46:40.745054 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02017f16-df60-4b45-844e-d767fef4ff7d-cilium-config-path\") pod \"02017f16-df60-4b45-844e-d767fef4ff7d\" (UID: \"02017f16-df60-4b45-844e-d767fef4ff7d\") " Sep 4 15:46:40.745771 kubelet[2660]: I0904 15:46:40.745086 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-host-proc-sys-net\") pod \"27651188-3b4b-4eb2-8466-ba9fb7517b90\" (UID: \"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " Sep 4 15:46:40.745771 kubelet[2660]: I0904 15:46:40.745115 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9nhr\" (UniqueName: \"kubernetes.io/projected/02017f16-df60-4b45-844e-d767fef4ff7d-kube-api-access-t9nhr\") pod \"02017f16-df60-4b45-844e-d767fef4ff7d\" (UID: \"02017f16-df60-4b45-844e-d767fef4ff7d\") " Sep 4 15:46:40.745771 kubelet[2660]: I0904 15:46:40.745298 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/27651188-3b4b-4eb2-8466-ba9fb7517b90-clustermesh-secrets\") pod \"27651188-3b4b-4eb2-8466-ba9fb7517b90\" (UID: \"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " Sep 4 15:46:40.745771 kubelet[2660]: I0904 15:46:40.745319 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-lib-modules\") pod \"27651188-3b4b-4eb2-8466-ba9fb7517b90\" (UID: 
\"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " Sep 4 15:46:40.745771 kubelet[2660]: I0904 15:46:40.745448 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/27651188-3b4b-4eb2-8466-ba9fb7517b90-hubble-tls\") pod \"27651188-3b4b-4eb2-8466-ba9fb7517b90\" (UID: \"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " Sep 4 15:46:40.745920 kubelet[2660]: I0904 15:46:40.745530 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncqtw\" (UniqueName: \"kubernetes.io/projected/27651188-3b4b-4eb2-8466-ba9fb7517b90-kube-api-access-ncqtw\") pod \"27651188-3b4b-4eb2-8466-ba9fb7517b90\" (UID: \"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " Sep 4 15:46:40.745920 kubelet[2660]: I0904 15:46:40.745548 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-hostproc\") pod \"27651188-3b4b-4eb2-8466-ba9fb7517b90\" (UID: \"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " Sep 4 15:46:40.745920 kubelet[2660]: I0904 15:46:40.745565 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-bpf-maps\") pod \"27651188-3b4b-4eb2-8466-ba9fb7517b90\" (UID: \"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " Sep 4 15:46:40.745920 kubelet[2660]: I0904 15:46:40.745581 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-host-proc-sys-kernel\") pod \"27651188-3b4b-4eb2-8466-ba9fb7517b90\" (UID: \"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " Sep 4 15:46:40.745920 kubelet[2660]: I0904 15:46:40.745602 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/27651188-3b4b-4eb2-8466-ba9fb7517b90-cilium-config-path\") pod \"27651188-3b4b-4eb2-8466-ba9fb7517b90\" (UID: \"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " Sep 4 15:46:40.745920 kubelet[2660]: I0904 15:46:40.745617 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-cilium-cgroup\") pod \"27651188-3b4b-4eb2-8466-ba9fb7517b90\" (UID: \"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " Sep 4 15:46:40.746037 kubelet[2660]: I0904 15:46:40.745633 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-cni-path\") pod \"27651188-3b4b-4eb2-8466-ba9fb7517b90\" (UID: \"27651188-3b4b-4eb2-8466-ba9fb7517b90\") " Sep 4 15:46:40.746037 kubelet[2660]: I0904 15:46:40.745878 2660 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "27651188-3b4b-4eb2-8466-ba9fb7517b90" (UID: "27651188-3b4b-4eb2-8466-ba9fb7517b90"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 15:46:40.746386 kubelet[2660]: I0904 15:46:40.746113 2660 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "27651188-3b4b-4eb2-8466-ba9fb7517b90" (UID: "27651188-3b4b-4eb2-8466-ba9fb7517b90"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 15:46:40.747472 kubelet[2660]: I0904 15:46:40.746795 2660 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-cni-path" (OuterVolumeSpecName: "cni-path") pod "27651188-3b4b-4eb2-8466-ba9fb7517b90" (UID: "27651188-3b4b-4eb2-8466-ba9fb7517b90"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 15:46:40.749162 kubelet[2660]: I0904 15:46:40.747002 2660 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "27651188-3b4b-4eb2-8466-ba9fb7517b90" (UID: "27651188-3b4b-4eb2-8466-ba9fb7517b90"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 15:46:40.749162 kubelet[2660]: I0904 15:46:40.747223 2660 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-hostproc" (OuterVolumeSpecName: "hostproc") pod "27651188-3b4b-4eb2-8466-ba9fb7517b90" (UID: "27651188-3b4b-4eb2-8466-ba9fb7517b90"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 15:46:40.749162 kubelet[2660]: I0904 15:46:40.747285 2660 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "27651188-3b4b-4eb2-8466-ba9fb7517b90" (UID: "27651188-3b4b-4eb2-8466-ba9fb7517b90"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 15:46:40.749162 kubelet[2660]: I0904 15:46:40.747300 2660 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "27651188-3b4b-4eb2-8466-ba9fb7517b90" (UID: "27651188-3b4b-4eb2-8466-ba9fb7517b90"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 15:46:40.749162 kubelet[2660]: I0904 15:46:40.749004 2660 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "27651188-3b4b-4eb2-8466-ba9fb7517b90" (UID: "27651188-3b4b-4eb2-8466-ba9fb7517b90"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 15:46:40.749340 kubelet[2660]: I0904 15:46:40.749096 2660 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27651188-3b4b-4eb2-8466-ba9fb7517b90-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "27651188-3b4b-4eb2-8466-ba9fb7517b90" (UID: "27651188-3b4b-4eb2-8466-ba9fb7517b90"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 15:46:40.749340 kubelet[2660]: I0904 15:46:40.749128 2660 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "27651188-3b4b-4eb2-8466-ba9fb7517b90" (UID: "27651188-3b4b-4eb2-8466-ba9fb7517b90"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 15:46:40.749340 kubelet[2660]: I0904 15:46:40.749143 2660 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "27651188-3b4b-4eb2-8466-ba9fb7517b90" (UID: "27651188-3b4b-4eb2-8466-ba9fb7517b90"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 15:46:40.750461 kubelet[2660]: I0904 15:46:40.750423 2660 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27651188-3b4b-4eb2-8466-ba9fb7517b90-kube-api-access-ncqtw" (OuterVolumeSpecName: "kube-api-access-ncqtw") pod "27651188-3b4b-4eb2-8466-ba9fb7517b90" (UID: "27651188-3b4b-4eb2-8466-ba9fb7517b90"). InnerVolumeSpecName "kube-api-access-ncqtw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 15:46:40.751793 kubelet[2660]: I0904 15:46:40.751433 2660 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27651188-3b4b-4eb2-8466-ba9fb7517b90-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "27651188-3b4b-4eb2-8466-ba9fb7517b90" (UID: "27651188-3b4b-4eb2-8466-ba9fb7517b90"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 4 15:46:40.751793 kubelet[2660]: I0904 15:46:40.751493 2660 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02017f16-df60-4b45-844e-d767fef4ff7d-kube-api-access-t9nhr" (OuterVolumeSpecName: "kube-api-access-t9nhr") pod "02017f16-df60-4b45-844e-d767fef4ff7d" (UID: "02017f16-df60-4b45-844e-d767fef4ff7d"). InnerVolumeSpecName "kube-api-access-t9nhr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 15:46:40.755898 kubelet[2660]: I0904 15:46:40.755872 2660 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02017f16-df60-4b45-844e-d767fef4ff7d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "02017f16-df60-4b45-844e-d767fef4ff7d" (UID: "02017f16-df60-4b45-844e-d767fef4ff7d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 15:46:40.758241 kubelet[2660]: I0904 15:46:40.758198 2660 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27651188-3b4b-4eb2-8466-ba9fb7517b90-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "27651188-3b4b-4eb2-8466-ba9fb7517b90" (UID: "27651188-3b4b-4eb2-8466-ba9fb7517b90"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 15:46:40.846673 kubelet[2660]: I0904 15:46:40.846621 2660 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 4 15:46:40.846673 kubelet[2660]: I0904 15:46:40.846666 2660 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27651188-3b4b-4eb2-8466-ba9fb7517b90-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 4 15:46:40.846673 kubelet[2660]: I0904 15:46:40.846683 2660 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 4 15:46:40.846872 kubelet[2660]: I0904 15:46:40.846697 2660 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 4 15:46:40.846872 kubelet[2660]: I0904 15:46:40.846712 2660 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 4 15:46:40.846872 kubelet[2660]: I0904 15:46:40.846727 2660 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 4 15:46:40.846872 kubelet[2660]: I0904 15:46:40.846775 2660 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 4 15:46:40.846872 kubelet[2660]: I0904 15:46:40.846792 2660 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02017f16-df60-4b45-844e-d767fef4ff7d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 4 15:46:40.846872 kubelet[2660]: I0904 15:46:40.846806 2660 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 4 15:46:40.846872 kubelet[2660]: I0904 15:46:40.846821 2660 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t9nhr\" (UniqueName: \"kubernetes.io/projected/02017f16-df60-4b45-844e-d767fef4ff7d-kube-api-access-t9nhr\") on node \"localhost\" DevicePath \"\"" Sep 4 15:46:40.846872 kubelet[2660]: I0904 15:46:40.846836 2660 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/27651188-3b4b-4eb2-8466-ba9fb7517b90-clustermesh-secrets\") on node 
\"localhost\" DevicePath \"\"" Sep 4 15:46:40.847030 kubelet[2660]: I0904 15:46:40.846843 2660 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 4 15:46:40.847030 kubelet[2660]: I0904 15:46:40.846850 2660 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/27651188-3b4b-4eb2-8466-ba9fb7517b90-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 4 15:46:40.847030 kubelet[2660]: I0904 15:46:40.846859 2660 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ncqtw\" (UniqueName: \"kubernetes.io/projected/27651188-3b4b-4eb2-8466-ba9fb7517b90-kube-api-access-ncqtw\") on node \"localhost\" DevicePath \"\"" Sep 4 15:46:40.847030 kubelet[2660]: I0904 15:46:40.846866 2660 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 4 15:46:40.847030 kubelet[2660]: I0904 15:46:40.846873 2660 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/27651188-3b4b-4eb2-8466-ba9fb7517b90-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 4 15:46:40.862178 kubelet[2660]: E0904 15:46:40.862148 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:46:40.914786 kubelet[2660]: E0904 15:46:40.914734 2660 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 15:46:41.137478 systemd[1]: Removed slice kubepods-besteffort-pod02017f16_df60_4b45_844e_d767fef4ff7d.slice - libcontainer container 
kubepods-besteffort-pod02017f16_df60_4b45_844e_d767fef4ff7d.slice. Sep 4 15:46:41.142584 kubelet[2660]: I0904 15:46:41.142047 2660 scope.go:117] "RemoveContainer" containerID="eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70" Sep 4 15:46:41.145318 systemd[1]: Removed slice kubepods-burstable-pod27651188_3b4b_4eb2_8466_ba9fb7517b90.slice - libcontainer container kubepods-burstable-pod27651188_3b4b_4eb2_8466_ba9fb7517b90.slice. Sep 4 15:46:41.145409 systemd[1]: kubepods-burstable-pod27651188_3b4b_4eb2_8466_ba9fb7517b90.slice: Consumed 6.174s CPU time, 121.9M memory peak, 2.5M read from disk, 12.9M written to disk. Sep 4 15:46:41.146244 containerd[1513]: time="2025-09-04T15:46:41.146200216Z" level=info msg="RemoveContainer for \"eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70\"" Sep 4 15:46:41.152840 containerd[1513]: time="2025-09-04T15:46:41.152797243Z" level=info msg="RemoveContainer for \"eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70\" returns successfully" Sep 4 15:46:41.153782 kubelet[2660]: I0904 15:46:41.153737 2660 scope.go:117] "RemoveContainer" containerID="eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70" Sep 4 15:46:41.154910 containerd[1513]: time="2025-09-04T15:46:41.154865571Z" level=error msg="ContainerStatus for \"eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70\": not found" Sep 4 15:46:41.158556 kubelet[2660]: E0904 15:46:41.158495 2660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70\": not found" containerID="eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70" Sep 4 15:46:41.158635 kubelet[2660]: I0904 15:46:41.158566 2660 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70"} err="failed to get container status \"eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70\": rpc error: code = NotFound desc = an error occurred when try to find container \"eb0917b9e837ec0671634bea14309c8ca16ef55fe0329f20026a6fe98f842f70\": not found" Sep 4 15:46:41.158635 kubelet[2660]: I0904 15:46:41.158604 2660 scope.go:117] "RemoveContainer" containerID="bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3" Sep 4 15:46:41.160865 containerd[1513]: time="2025-09-04T15:46:41.160834075Z" level=info msg="RemoveContainer for \"bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3\"" Sep 4 15:46:41.168085 containerd[1513]: time="2025-09-04T15:46:41.168016785Z" level=info msg="RemoveContainer for \"bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3\" returns successfully" Sep 4 15:46:41.168415 kubelet[2660]: I0904 15:46:41.168398 2660 scope.go:117] "RemoveContainer" containerID="8cf69d5ae1f7663a91a10076688beafd53e568ffb89acbf7a33c95d51a88c7a9" Sep 4 15:46:41.170006 containerd[1513]: time="2025-09-04T15:46:41.169958832Z" level=info msg="RemoveContainer for \"8cf69d5ae1f7663a91a10076688beafd53e568ffb89acbf7a33c95d51a88c7a9\"" Sep 4 15:46:41.173584 containerd[1513]: time="2025-09-04T15:46:41.173545287Z" level=info msg="RemoveContainer for \"8cf69d5ae1f7663a91a10076688beafd53e568ffb89acbf7a33c95d51a88c7a9\" returns successfully" Sep 4 15:46:41.173910 kubelet[2660]: I0904 15:46:41.173768 2660 scope.go:117] "RemoveContainer" containerID="6c758fef4519c84887a1ef8a0a8c40d074fc928707782f6774fc1b38f24f1360" Sep 4 15:46:41.176102 containerd[1513]: time="2025-09-04T15:46:41.176074257Z" level=info msg="RemoveContainer for \"6c758fef4519c84887a1ef8a0a8c40d074fc928707782f6774fc1b38f24f1360\"" Sep 4 15:46:41.179624 containerd[1513]: time="2025-09-04T15:46:41.179585191Z" 
level=info msg="RemoveContainer for \"6c758fef4519c84887a1ef8a0a8c40d074fc928707782f6774fc1b38f24f1360\" returns successfully" Sep 4 15:46:41.179782 kubelet[2660]: I0904 15:46:41.179762 2660 scope.go:117] "RemoveContainer" containerID="e3499443dd351b556d697cb22f796ed90a2ac6868d5f973e26992a679a701a08" Sep 4 15:46:41.181139 containerd[1513]: time="2025-09-04T15:46:41.181111237Z" level=info msg="RemoveContainer for \"e3499443dd351b556d697cb22f796ed90a2ac6868d5f973e26992a679a701a08\"" Sep 4 15:46:41.183718 containerd[1513]: time="2025-09-04T15:46:41.183692008Z" level=info msg="RemoveContainer for \"e3499443dd351b556d697cb22f796ed90a2ac6868d5f973e26992a679a701a08\" returns successfully" Sep 4 15:46:41.183993 kubelet[2660]: I0904 15:46:41.183945 2660 scope.go:117] "RemoveContainer" containerID="00f5ff113e28d6b7b0b548103a060d467759ce1f1d251ebe60eadf1e279dba58" Sep 4 15:46:41.186290 containerd[1513]: time="2025-09-04T15:46:41.186253938Z" level=info msg="RemoveContainer for \"00f5ff113e28d6b7b0b548103a060d467759ce1f1d251ebe60eadf1e279dba58\"" Sep 4 15:46:41.188861 containerd[1513]: time="2025-09-04T15:46:41.188834349Z" level=info msg="RemoveContainer for \"00f5ff113e28d6b7b0b548103a060d467759ce1f1d251ebe60eadf1e279dba58\" returns successfully" Sep 4 15:46:41.189139 kubelet[2660]: I0904 15:46:41.189019 2660 scope.go:117] "RemoveContainer" containerID="bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3" Sep 4 15:46:41.189383 containerd[1513]: time="2025-09-04T15:46:41.189349471Z" level=error msg="ContainerStatus for \"bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3\": not found" Sep 4 15:46:41.189593 kubelet[2660]: E0904 15:46:41.189545 2660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3\": not found" containerID="bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3" Sep 4 15:46:41.189593 kubelet[2660]: I0904 15:46:41.189586 2660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3"} err="failed to get container status \"bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"bf136d949e449b00738bbe2dccd5d4ba22f46136487bc0c564a11d96906da4f3\": not found" Sep 4 15:46:41.189666 kubelet[2660]: I0904 15:46:41.189606 2660 scope.go:117] "RemoveContainer" containerID="8cf69d5ae1f7663a91a10076688beafd53e568ffb89acbf7a33c95d51a88c7a9" Sep 4 15:46:41.189791 containerd[1513]: time="2025-09-04T15:46:41.189762792Z" level=error msg="ContainerStatus for \"8cf69d5ae1f7663a91a10076688beafd53e568ffb89acbf7a33c95d51a88c7a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8cf69d5ae1f7663a91a10076688beafd53e568ffb89acbf7a33c95d51a88c7a9\": not found" Sep 4 15:46:41.189921 kubelet[2660]: E0904 15:46:41.189879 2660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8cf69d5ae1f7663a91a10076688beafd53e568ffb89acbf7a33c95d51a88c7a9\": not found" containerID="8cf69d5ae1f7663a91a10076688beafd53e568ffb89acbf7a33c95d51a88c7a9" Sep 4 15:46:41.189921 kubelet[2660]: I0904 15:46:41.189893 2660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8cf69d5ae1f7663a91a10076688beafd53e568ffb89acbf7a33c95d51a88c7a9"} err="failed to get container status \"8cf69d5ae1f7663a91a10076688beafd53e568ffb89acbf7a33c95d51a88c7a9\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"8cf69d5ae1f7663a91a10076688beafd53e568ffb89acbf7a33c95d51a88c7a9\": not found" Sep 4 15:46:41.189921 kubelet[2660]: I0904 15:46:41.189903 2660 scope.go:117] "RemoveContainer" containerID="6c758fef4519c84887a1ef8a0a8c40d074fc928707782f6774fc1b38f24f1360" Sep 4 15:46:41.190099 containerd[1513]: time="2025-09-04T15:46:41.190044514Z" level=error msg="ContainerStatus for \"6c758fef4519c84887a1ef8a0a8c40d074fc928707782f6774fc1b38f24f1360\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6c758fef4519c84887a1ef8a0a8c40d074fc928707782f6774fc1b38f24f1360\": not found" Sep 4 15:46:41.190173 kubelet[2660]: E0904 15:46:41.190146 2660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6c758fef4519c84887a1ef8a0a8c40d074fc928707782f6774fc1b38f24f1360\": not found" containerID="6c758fef4519c84887a1ef8a0a8c40d074fc928707782f6774fc1b38f24f1360" Sep 4 15:46:41.190212 kubelet[2660]: I0904 15:46:41.190173 2660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6c758fef4519c84887a1ef8a0a8c40d074fc928707782f6774fc1b38f24f1360"} err="failed to get container status \"6c758fef4519c84887a1ef8a0a8c40d074fc928707782f6774fc1b38f24f1360\": rpc error: code = NotFound desc = an error occurred when try to find container \"6c758fef4519c84887a1ef8a0a8c40d074fc928707782f6774fc1b38f24f1360\": not found" Sep 4 15:46:41.190212 kubelet[2660]: I0904 15:46:41.190189 2660 scope.go:117] "RemoveContainer" containerID="e3499443dd351b556d697cb22f796ed90a2ac6868d5f973e26992a679a701a08" Sep 4 15:46:41.190350 containerd[1513]: time="2025-09-04T15:46:41.190322235Z" level=error msg="ContainerStatus for \"e3499443dd351b556d697cb22f796ed90a2ac6868d5f973e26992a679a701a08\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"e3499443dd351b556d697cb22f796ed90a2ac6868d5f973e26992a679a701a08\": not found" Sep 4 15:46:41.190434 kubelet[2660]: E0904 15:46:41.190418 2660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e3499443dd351b556d697cb22f796ed90a2ac6868d5f973e26992a679a701a08\": not found" containerID="e3499443dd351b556d697cb22f796ed90a2ac6868d5f973e26992a679a701a08" Sep 4 15:46:41.190459 kubelet[2660]: I0904 15:46:41.190433 2660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e3499443dd351b556d697cb22f796ed90a2ac6868d5f973e26992a679a701a08"} err="failed to get container status \"e3499443dd351b556d697cb22f796ed90a2ac6868d5f973e26992a679a701a08\": rpc error: code = NotFound desc = an error occurred when try to find container \"e3499443dd351b556d697cb22f796ed90a2ac6868d5f973e26992a679a701a08\": not found" Sep 4 15:46:41.190459 kubelet[2660]: I0904 15:46:41.190443 2660 scope.go:117] "RemoveContainer" containerID="00f5ff113e28d6b7b0b548103a060d467759ce1f1d251ebe60eadf1e279dba58" Sep 4 15:46:41.190598 containerd[1513]: time="2025-09-04T15:46:41.190546516Z" level=error msg="ContainerStatus for \"00f5ff113e28d6b7b0b548103a060d467759ce1f1d251ebe60eadf1e279dba58\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"00f5ff113e28d6b7b0b548103a060d467759ce1f1d251ebe60eadf1e279dba58\": not found" Sep 4 15:46:41.190639 kubelet[2660]: E0904 15:46:41.190614 2660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"00f5ff113e28d6b7b0b548103a060d467759ce1f1d251ebe60eadf1e279dba58\": not found" containerID="00f5ff113e28d6b7b0b548103a060d467759ce1f1d251ebe60eadf1e279dba58" Sep 4 15:46:41.190639 kubelet[2660]: I0904 15:46:41.190628 2660 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"00f5ff113e28d6b7b0b548103a060d467759ce1f1d251ebe60eadf1e279dba58"} err="failed to get container status \"00f5ff113e28d6b7b0b548103a060d467759ce1f1d251ebe60eadf1e279dba58\": rpc error: code = NotFound desc = an error occurred when try to find container \"00f5ff113e28d6b7b0b548103a060d467759ce1f1d251ebe60eadf1e279dba58\": not found" Sep 4 15:46:41.487874 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06afc40f343141bf9c43ea456bfaf0a775507f804f43e6915e8b5239560c6fd8-shm.mount: Deactivated successfully. Sep 4 15:46:41.487977 systemd[1]: var-lib-kubelet-pods-02017f16\x2ddf60\x2d4b45\x2d844e\x2dd767fef4ff7d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt9nhr.mount: Deactivated successfully. Sep 4 15:46:41.488026 systemd[1]: var-lib-kubelet-pods-27651188\x2d3b4b\x2d4eb2\x2d8466\x2dba9fb7517b90-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dncqtw.mount: Deactivated successfully. Sep 4 15:46:41.488077 systemd[1]: var-lib-kubelet-pods-27651188\x2d3b4b\x2d4eb2\x2d8466\x2dba9fb7517b90-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 4 15:46:41.488126 systemd[1]: var-lib-kubelet-pods-27651188\x2d3b4b\x2d4eb2\x2d8466\x2dba9fb7517b90-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 4 15:46:41.863734 kubelet[2660]: I0904 15:46:41.863691 2660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02017f16-df60-4b45-844e-d767fef4ff7d" path="/var/lib/kubelet/pods/02017f16-df60-4b45-844e-d767fef4ff7d/volumes" Sep 4 15:46:41.864102 kubelet[2660]: I0904 15:46:41.864079 2660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27651188-3b4b-4eb2-8466-ba9fb7517b90" path="/var/lib/kubelet/pods/27651188-3b4b-4eb2-8466-ba9fb7517b90/volumes" Sep 4 15:46:42.390306 sshd[4289]: Connection closed by 10.0.0.1 port 35126 Sep 4 15:46:42.391007 sshd-session[4286]: pam_unix(sshd:session): session closed for user core Sep 4 15:46:42.406989 systemd[1]: sshd@22-10.0.0.39:22-10.0.0.1:35126.service: Deactivated successfully. Sep 4 15:46:42.408433 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 15:46:42.408619 systemd[1]: session-23.scope: Consumed 1.369s CPU time, 24.3M memory peak. Sep 4 15:46:42.409104 systemd-logind[1492]: Session 23 logged out. Waiting for processes to exit. Sep 4 15:46:42.411053 systemd[1]: Started sshd@23-10.0.0.39:22-10.0.0.1:45044.service - OpenSSH per-connection server daemon (10.0.0.1:45044). Sep 4 15:46:42.412076 systemd-logind[1492]: Removed session 23. Sep 4 15:46:42.463883 sshd[4441]: Accepted publickey for core from 10.0.0.1 port 45044 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 15:46:42.465048 sshd-session[4441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 15:46:42.468899 systemd-logind[1492]: New session 24 of user core. Sep 4 15:46:42.474874 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 15:46:43.829066 sshd[4444]: Connection closed by 10.0.0.1 port 45044 Sep 4 15:46:43.830001 sshd-session[4441]: pam_unix(sshd:session): session closed for user core Sep 4 15:46:43.839585 systemd[1]: sshd@23-10.0.0.39:22-10.0.0.1:45044.service: Deactivated successfully. 
Sep 4 15:46:43.842180 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 15:46:43.842395 systemd[1]: session-24.scope: Consumed 1.274s CPU time, 26.4M memory peak. Sep 4 15:46:43.845841 systemd-logind[1492]: Session 24 logged out. Waiting for processes to exit. Sep 4 15:46:43.848029 systemd[1]: Started sshd@24-10.0.0.39:22-10.0.0.1:45052.service - OpenSSH per-connection server daemon (10.0.0.1:45052). Sep 4 15:46:43.851054 systemd-logind[1492]: Removed session 24. Sep 4 15:46:43.862762 kubelet[2660]: E0904 15:46:43.862708 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:46:43.868939 systemd[1]: Created slice kubepods-burstable-podaef866ea_98bc_498d_8a6d_d31117e99b63.slice - libcontainer container kubepods-burstable-podaef866ea_98bc_498d_8a6d_d31117e99b63.slice. Sep 4 15:46:43.918644 sshd[4456]: Accepted publickey for core from 10.0.0.1 port 45052 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 15:46:43.919794 sshd-session[4456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 15:46:43.923822 systemd-logind[1492]: New session 25 of user core. Sep 4 15:46:43.931900 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 4 15:46:43.965951 kubelet[2660]: I0904 15:46:43.965902 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6g5r\" (UniqueName: \"kubernetes.io/projected/aef866ea-98bc-498d-8a6d-d31117e99b63-kube-api-access-x6g5r\") pod \"cilium-n88c4\" (UID: \"aef866ea-98bc-498d-8a6d-d31117e99b63\") " pod="kube-system/cilium-n88c4" Sep 4 15:46:43.965951 kubelet[2660]: I0904 15:46:43.965948 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aef866ea-98bc-498d-8a6d-d31117e99b63-cilium-run\") pod \"cilium-n88c4\" (UID: \"aef866ea-98bc-498d-8a6d-d31117e99b63\") " pod="kube-system/cilium-n88c4" Sep 4 15:46:43.966076 kubelet[2660]: I0904 15:46:43.965970 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aef866ea-98bc-498d-8a6d-d31117e99b63-hostproc\") pod \"cilium-n88c4\" (UID: \"aef866ea-98bc-498d-8a6d-d31117e99b63\") " pod="kube-system/cilium-n88c4" Sep 4 15:46:43.966076 kubelet[2660]: I0904 15:46:43.965987 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aef866ea-98bc-498d-8a6d-d31117e99b63-xtables-lock\") pod \"cilium-n88c4\" (UID: \"aef866ea-98bc-498d-8a6d-d31117e99b63\") " pod="kube-system/cilium-n88c4" Sep 4 15:46:43.966076 kubelet[2660]: I0904 15:46:43.966001 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aef866ea-98bc-498d-8a6d-d31117e99b63-bpf-maps\") pod \"cilium-n88c4\" (UID: \"aef866ea-98bc-498d-8a6d-d31117e99b63\") " pod="kube-system/cilium-n88c4" Sep 4 15:46:43.966076 kubelet[2660]: I0904 15:46:43.966020 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/aef866ea-98bc-498d-8a6d-d31117e99b63-cilium-ipsec-secrets\") pod \"cilium-n88c4\" (UID: \"aef866ea-98bc-498d-8a6d-d31117e99b63\") " pod="kube-system/cilium-n88c4" Sep 4 15:46:43.966076 kubelet[2660]: I0904 15:46:43.966034 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aef866ea-98bc-498d-8a6d-d31117e99b63-host-proc-sys-kernel\") pod \"cilium-n88c4\" (UID: \"aef866ea-98bc-498d-8a6d-d31117e99b63\") " pod="kube-system/cilium-n88c4" Sep 4 15:46:43.966076 kubelet[2660]: I0904 15:46:43.966048 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aef866ea-98bc-498d-8a6d-d31117e99b63-hubble-tls\") pod \"cilium-n88c4\" (UID: \"aef866ea-98bc-498d-8a6d-d31117e99b63\") " pod="kube-system/cilium-n88c4" Sep 4 15:46:43.966199 kubelet[2660]: I0904 15:46:43.966062 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aef866ea-98bc-498d-8a6d-d31117e99b63-cilium-cgroup\") pod \"cilium-n88c4\" (UID: \"aef866ea-98bc-498d-8a6d-d31117e99b63\") " pod="kube-system/cilium-n88c4" Sep 4 15:46:43.966199 kubelet[2660]: I0904 15:46:43.966076 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aef866ea-98bc-498d-8a6d-d31117e99b63-cni-path\") pod \"cilium-n88c4\" (UID: \"aef866ea-98bc-498d-8a6d-d31117e99b63\") " pod="kube-system/cilium-n88c4" Sep 4 15:46:43.966199 kubelet[2660]: I0904 15:46:43.966093 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aef866ea-98bc-498d-8a6d-d31117e99b63-cilium-config-path\") 
pod \"cilium-n88c4\" (UID: \"aef866ea-98bc-498d-8a6d-d31117e99b63\") " pod="kube-system/cilium-n88c4" Sep 4 15:46:43.966199 kubelet[2660]: I0904 15:46:43.966109 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aef866ea-98bc-498d-8a6d-d31117e99b63-etc-cni-netd\") pod \"cilium-n88c4\" (UID: \"aef866ea-98bc-498d-8a6d-d31117e99b63\") " pod="kube-system/cilium-n88c4" Sep 4 15:46:43.966199 kubelet[2660]: I0904 15:46:43.966124 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aef866ea-98bc-498d-8a6d-d31117e99b63-lib-modules\") pod \"cilium-n88c4\" (UID: \"aef866ea-98bc-498d-8a6d-d31117e99b63\") " pod="kube-system/cilium-n88c4" Sep 4 15:46:43.966199 kubelet[2660]: I0904 15:46:43.966140 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aef866ea-98bc-498d-8a6d-d31117e99b63-host-proc-sys-net\") pod \"cilium-n88c4\" (UID: \"aef866ea-98bc-498d-8a6d-d31117e99b63\") " pod="kube-system/cilium-n88c4" Sep 4 15:46:43.966324 kubelet[2660]: I0904 15:46:43.966157 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aef866ea-98bc-498d-8a6d-d31117e99b63-clustermesh-secrets\") pod \"cilium-n88c4\" (UID: \"aef866ea-98bc-498d-8a6d-d31117e99b63\") " pod="kube-system/cilium-n88c4" Sep 4 15:46:43.980515 sshd[4460]: Connection closed by 10.0.0.1 port 45052 Sep 4 15:46:43.980957 sshd-session[4456]: pam_unix(sshd:session): session closed for user core Sep 4 15:46:43.994832 systemd[1]: sshd@24-10.0.0.39:22-10.0.0.1:45052.service: Deactivated successfully. Sep 4 15:46:43.996531 systemd[1]: session-25.scope: Deactivated successfully. 
Sep 4 15:46:43.998286 systemd-logind[1492]: Session 25 logged out. Waiting for processes to exit. Sep 4 15:46:44.000442 systemd[1]: Started sshd@25-10.0.0.39:22-10.0.0.1:45066.service - OpenSSH per-connection server daemon (10.0.0.1:45066). Sep 4 15:46:44.001435 systemd-logind[1492]: Removed session 25. Sep 4 15:46:44.057585 sshd[4467]: Accepted publickey for core from 10.0.0.1 port 45066 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU Sep 4 15:46:44.058905 sshd-session[4467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 15:46:44.062797 systemd-logind[1492]: New session 26 of user core. Sep 4 15:46:44.069250 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 4 15:46:44.191316 kubelet[2660]: E0904 15:46:44.191173 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:46:44.191919 containerd[1513]: time="2025-09-04T15:46:44.191848541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n88c4,Uid:aef866ea-98bc-498d-8a6d-d31117e99b63,Namespace:kube-system,Attempt:0,}" Sep 4 15:46:44.207979 containerd[1513]: time="2025-09-04T15:46:44.207649240Z" level=info msg="connecting to shim f40aa7d42ec86e6f11cf3675f0f2510debe07b58e733301794bb9df498128d4e" address="unix:///run/containerd/s/39afb9c182298b6a26224106da27919775ae333ba66a1e76476656083ef4c47a" namespace=k8s.io protocol=ttrpc version=3 Sep 4 15:46:44.232949 systemd[1]: Started cri-containerd-f40aa7d42ec86e6f11cf3675f0f2510debe07b58e733301794bb9df498128d4e.scope - libcontainer container f40aa7d42ec86e6f11cf3675f0f2510debe07b58e733301794bb9df498128d4e. 
Sep 4 15:46:44.256252 containerd[1513]: time="2025-09-04T15:46:44.256202301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n88c4,Uid:aef866ea-98bc-498d-8a6d-d31117e99b63,Namespace:kube-system,Attempt:0,} returns sandbox id \"f40aa7d42ec86e6f11cf3675f0f2510debe07b58e733301794bb9df498128d4e\"" Sep 4 15:46:44.257997 kubelet[2660]: E0904 15:46:44.257952 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:46:44.264778 containerd[1513]: time="2025-09-04T15:46:44.264129450Z" level=info msg="CreateContainer within sandbox \"f40aa7d42ec86e6f11cf3675f0f2510debe07b58e733301794bb9df498128d4e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 15:46:44.269999 containerd[1513]: time="2025-09-04T15:46:44.269964072Z" level=info msg="Container d686f91fccc30955d83082dcf39789dc661af3b32c679778921947c63dfa89a3: CDI devices from CRI Config.CDIDevices: []" Sep 4 15:46:44.274997 containerd[1513]: time="2025-09-04T15:46:44.274961450Z" level=info msg="CreateContainer within sandbox \"f40aa7d42ec86e6f11cf3675f0f2510debe07b58e733301794bb9df498128d4e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d686f91fccc30955d83082dcf39789dc661af3b32c679778921947c63dfa89a3\"" Sep 4 15:46:44.275848 containerd[1513]: time="2025-09-04T15:46:44.275816854Z" level=info msg="StartContainer for \"d686f91fccc30955d83082dcf39789dc661af3b32c679778921947c63dfa89a3\"" Sep 4 15:46:44.278812 containerd[1513]: time="2025-09-04T15:46:44.278102862Z" level=info msg="connecting to shim d686f91fccc30955d83082dcf39789dc661af3b32c679778921947c63dfa89a3" address="unix:///run/containerd/s/39afb9c182298b6a26224106da27919775ae333ba66a1e76476656083ef4c47a" protocol=ttrpc version=3 Sep 4 15:46:44.302938 systemd[1]: Started cri-containerd-d686f91fccc30955d83082dcf39789dc661af3b32c679778921947c63dfa89a3.scope - libcontainer container 
d686f91fccc30955d83082dcf39789dc661af3b32c679778921947c63dfa89a3. Sep 4 15:46:44.338251 systemd[1]: cri-containerd-d686f91fccc30955d83082dcf39789dc661af3b32c679778921947c63dfa89a3.scope: Deactivated successfully. Sep 4 15:46:44.339597 containerd[1513]: time="2025-09-04T15:46:44.339550971Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d686f91fccc30955d83082dcf39789dc661af3b32c679778921947c63dfa89a3\" id:\"d686f91fccc30955d83082dcf39789dc661af3b32c679778921947c63dfa89a3\" pid:4542 exited_at:{seconds:1757000804 nanos:339152330}" Sep 4 15:46:44.376868 containerd[1513]: time="2025-09-04T15:46:44.376802950Z" level=info msg="received exit event container_id:\"d686f91fccc30955d83082dcf39789dc661af3b32c679778921947c63dfa89a3\" id:\"d686f91fccc30955d83082dcf39789dc661af3b32c679778921947c63dfa89a3\" pid:4542 exited_at:{seconds:1757000804 nanos:339152330}" Sep 4 15:46:44.377941 containerd[1513]: time="2025-09-04T15:46:44.377849354Z" level=info msg="StartContainer for \"d686f91fccc30955d83082dcf39789dc661af3b32c679778921947c63dfa89a3\" returns successfully" Sep 4 15:46:45.149491 kubelet[2660]: E0904 15:46:45.149443 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:46:45.153713 containerd[1513]: time="2025-09-04T15:46:45.153659388Z" level=info msg="CreateContainer within sandbox \"f40aa7d42ec86e6f11cf3675f0f2510debe07b58e733301794bb9df498128d4e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 15:46:45.162844 containerd[1513]: time="2025-09-04T15:46:45.162805501Z" level=info msg="Container 132ee63f823b1169d336220df7da7196b53d5f1423b7b37759279afa083671b2: CDI devices from CRI Config.CDIDevices: []" Sep 4 15:46:45.163189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount103751364.mount: Deactivated successfully. 
Sep 4 15:46:45.171347 containerd[1513]: time="2025-09-04T15:46:45.171304732Z" level=info msg="CreateContainer within sandbox \"f40aa7d42ec86e6f11cf3675f0f2510debe07b58e733301794bb9df498128d4e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"132ee63f823b1169d336220df7da7196b53d5f1423b7b37759279afa083671b2\"" Sep 4 15:46:45.172061 containerd[1513]: time="2025-09-04T15:46:45.172019574Z" level=info msg="StartContainer for \"132ee63f823b1169d336220df7da7196b53d5f1423b7b37759279afa083671b2\"" Sep 4 15:46:45.173317 containerd[1513]: time="2025-09-04T15:46:45.173217539Z" level=info msg="connecting to shim 132ee63f823b1169d336220df7da7196b53d5f1423b7b37759279afa083671b2" address="unix:///run/containerd/s/39afb9c182298b6a26224106da27919775ae333ba66a1e76476656083ef4c47a" protocol=ttrpc version=3 Sep 4 15:46:45.193942 systemd[1]: Started cri-containerd-132ee63f823b1169d336220df7da7196b53d5f1423b7b37759279afa083671b2.scope - libcontainer container 132ee63f823b1169d336220df7da7196b53d5f1423b7b37759279afa083671b2. Sep 4 15:46:45.221444 containerd[1513]: time="2025-09-04T15:46:45.221332593Z" level=info msg="StartContainer for \"132ee63f823b1169d336220df7da7196b53d5f1423b7b37759279afa083671b2\" returns successfully" Sep 4 15:46:45.225883 systemd[1]: cri-containerd-132ee63f823b1169d336220df7da7196b53d5f1423b7b37759279afa083671b2.scope: Deactivated successfully. 
Sep 4 15:46:45.227955 containerd[1513]: time="2025-09-04T15:46:45.227915457Z" level=info msg="received exit event container_id:\"132ee63f823b1169d336220df7da7196b53d5f1423b7b37759279afa083671b2\" id:\"132ee63f823b1169d336220df7da7196b53d5f1423b7b37759279afa083671b2\" pid:4587 exited_at:{seconds:1757000805 nanos:226073970}" Sep 4 15:46:45.228158 containerd[1513]: time="2025-09-04T15:46:45.228135778Z" level=info msg="TaskExit event in podsandbox handler container_id:\"132ee63f823b1169d336220df7da7196b53d5f1423b7b37759279afa083671b2\" id:\"132ee63f823b1169d336220df7da7196b53d5f1423b7b37759279afa083671b2\" pid:4587 exited_at:{seconds:1757000805 nanos:226073970}" Sep 4 15:46:45.245470 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-132ee63f823b1169d336220df7da7196b53d5f1423b7b37759279afa083671b2-rootfs.mount: Deactivated successfully. Sep 4 15:46:45.916468 kubelet[2660]: E0904 15:46:45.916428 2660 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 15:46:46.152979 kubelet[2660]: E0904 15:46:46.152933 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:46:46.157152 containerd[1513]: time="2025-09-04T15:46:46.157096410Z" level=info msg="CreateContainer within sandbox \"f40aa7d42ec86e6f11cf3675f0f2510debe07b58e733301794bb9df498128d4e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 15:46:46.171811 containerd[1513]: time="2025-09-04T15:46:46.170863378Z" level=info msg="Container fe899aee250e368477d9a2e1f8ab3abf2d18cc3e35097738f98decf11964af26: CDI devices from CRI Config.CDIDevices: []" Sep 4 15:46:46.179886 containerd[1513]: time="2025-09-04T15:46:46.179814010Z" level=info msg="CreateContainer within sandbox 
\"f40aa7d42ec86e6f11cf3675f0f2510debe07b58e733301794bb9df498128d4e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fe899aee250e368477d9a2e1f8ab3abf2d18cc3e35097738f98decf11964af26\"" Sep 4 15:46:46.180767 containerd[1513]: time="2025-09-04T15:46:46.180683373Z" level=info msg="StartContainer for \"fe899aee250e368477d9a2e1f8ab3abf2d18cc3e35097738f98decf11964af26\"" Sep 4 15:46:46.184642 containerd[1513]: time="2025-09-04T15:46:46.184607867Z" level=info msg="connecting to shim fe899aee250e368477d9a2e1f8ab3abf2d18cc3e35097738f98decf11964af26" address="unix:///run/containerd/s/39afb9c182298b6a26224106da27919775ae333ba66a1e76476656083ef4c47a" protocol=ttrpc version=3 Sep 4 15:46:46.215005 systemd[1]: Started cri-containerd-fe899aee250e368477d9a2e1f8ab3abf2d18cc3e35097738f98decf11964af26.scope - libcontainer container fe899aee250e368477d9a2e1f8ab3abf2d18cc3e35097738f98decf11964af26. Sep 4 15:46:46.252456 containerd[1513]: time="2025-09-04T15:46:46.252418226Z" level=info msg="StartContainer for \"fe899aee250e368477d9a2e1f8ab3abf2d18cc3e35097738f98decf11964af26\" returns successfully" Sep 4 15:46:46.254215 systemd[1]: cri-containerd-fe899aee250e368477d9a2e1f8ab3abf2d18cc3e35097738f98decf11964af26.scope: Deactivated successfully. 
Sep 4 15:46:46.258470 containerd[1513]: time="2025-09-04T15:46:46.258429967Z" level=info msg="received exit event container_id:\"fe899aee250e368477d9a2e1f8ab3abf2d18cc3e35097738f98decf11964af26\" id:\"fe899aee250e368477d9a2e1f8ab3abf2d18cc3e35097738f98decf11964af26\" pid:4632 exited_at:{seconds:1757000806 nanos:258178167}" Sep 4 15:46:46.258729 containerd[1513]: time="2025-09-04T15:46:46.258594688Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe899aee250e368477d9a2e1f8ab3abf2d18cc3e35097738f98decf11964af26\" id:\"fe899aee250e368477d9a2e1f8ab3abf2d18cc3e35097738f98decf11964af26\" pid:4632 exited_at:{seconds:1757000806 nanos:258178167}" Sep 4 15:46:46.277314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe899aee250e368477d9a2e1f8ab3abf2d18cc3e35097738f98decf11964af26-rootfs.mount: Deactivated successfully. Sep 4 15:46:47.158843 kubelet[2660]: E0904 15:46:47.158811 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:46:47.216185 containerd[1513]: time="2025-09-04T15:46:47.216139847Z" level=info msg="CreateContainer within sandbox \"f40aa7d42ec86e6f11cf3675f0f2510debe07b58e733301794bb9df498128d4e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 15:46:47.290711 containerd[1513]: time="2025-09-04T15:46:47.289245818Z" level=info msg="Container 2676047eaa4c1a90278f686afec2ce9b1bc0ffe9302fd88d6285e342e2a37532: CDI devices from CRI Config.CDIDevices: []" Sep 4 15:46:47.291970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4119117366.mount: Deactivated successfully. 
Sep 4 15:46:47.300014 containerd[1513]: time="2025-09-04T15:46:47.299966415Z" level=info msg="CreateContainer within sandbox \"f40aa7d42ec86e6f11cf3675f0f2510debe07b58e733301794bb9df498128d4e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2676047eaa4c1a90278f686afec2ce9b1bc0ffe9302fd88d6285e342e2a37532\"" Sep 4 15:46:47.300826 containerd[1513]: time="2025-09-04T15:46:47.300799378Z" level=info msg="StartContainer for \"2676047eaa4c1a90278f686afec2ce9b1bc0ffe9302fd88d6285e342e2a37532\"" Sep 4 15:46:47.302550 containerd[1513]: time="2025-09-04T15:46:47.302452783Z" level=info msg="connecting to shim 2676047eaa4c1a90278f686afec2ce9b1bc0ffe9302fd88d6285e342e2a37532" address="unix:///run/containerd/s/39afb9c182298b6a26224106da27919775ae333ba66a1e76476656083ef4c47a" protocol=ttrpc version=3 Sep 4 15:46:47.326967 systemd[1]: Started cri-containerd-2676047eaa4c1a90278f686afec2ce9b1bc0ffe9302fd88d6285e342e2a37532.scope - libcontainer container 2676047eaa4c1a90278f686afec2ce9b1bc0ffe9302fd88d6285e342e2a37532. Sep 4 15:46:47.354724 systemd[1]: cri-containerd-2676047eaa4c1a90278f686afec2ce9b1bc0ffe9302fd88d6285e342e2a37532.scope: Deactivated successfully. 
Sep 4 15:46:47.355561 containerd[1513]: time="2025-09-04T15:46:47.355530486Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2676047eaa4c1a90278f686afec2ce9b1bc0ffe9302fd88d6285e342e2a37532\" id:\"2676047eaa4c1a90278f686afec2ce9b1bc0ffe9302fd88d6285e342e2a37532\" pid:4671 exited_at:{seconds:1757000807 nanos:355218205}" Sep 4 15:46:47.358542 containerd[1513]: time="2025-09-04T15:46:47.358432336Z" level=info msg="received exit event container_id:\"2676047eaa4c1a90278f686afec2ce9b1bc0ffe9302fd88d6285e342e2a37532\" id:\"2676047eaa4c1a90278f686afec2ce9b1bc0ffe9302fd88d6285e342e2a37532\" pid:4671 exited_at:{seconds:1757000807 nanos:355218205}" Sep 4 15:46:47.365045 containerd[1513]: time="2025-09-04T15:46:47.364925398Z" level=info msg="StartContainer for \"2676047eaa4c1a90278f686afec2ce9b1bc0ffe9302fd88d6285e342e2a37532\" returns successfully" Sep 4 15:46:47.377078 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2676047eaa4c1a90278f686afec2ce9b1bc0ffe9302fd88d6285e342e2a37532-rootfs.mount: Deactivated successfully. 
Sep 4 15:46:47.499178 kubelet[2660]: I0904 15:46:47.499045 2660 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-04T15:46:47Z","lastTransitionTime":"2025-09-04T15:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 4 15:46:48.166065 kubelet[2660]: E0904 15:46:48.166029 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:46:48.174076 containerd[1513]: time="2025-09-04T15:46:48.174026962Z" level=info msg="CreateContainer within sandbox \"f40aa7d42ec86e6f11cf3675f0f2510debe07b58e733301794bb9df498128d4e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 15:46:48.187365 containerd[1513]: time="2025-09-04T15:46:48.187326006Z" level=info msg="Container b41a406821fa9d6e598cae046d4e8e58c2394bb786d3472b04230ca2623f50ad: CDI devices from CRI Config.CDIDevices: []" Sep 4 15:46:48.199391 containerd[1513]: time="2025-09-04T15:46:48.199327966Z" level=info msg="CreateContainer within sandbox \"f40aa7d42ec86e6f11cf3675f0f2510debe07b58e733301794bb9df498128d4e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b41a406821fa9d6e598cae046d4e8e58c2394bb786d3472b04230ca2623f50ad\"" Sep 4 15:46:48.200708 containerd[1513]: time="2025-09-04T15:46:48.200620891Z" level=info msg="StartContainer for \"b41a406821fa9d6e598cae046d4e8e58c2394bb786d3472b04230ca2623f50ad\"" Sep 4 15:46:48.201537 containerd[1513]: time="2025-09-04T15:46:48.201494254Z" level=info msg="connecting to shim b41a406821fa9d6e598cae046d4e8e58c2394bb786d3472b04230ca2623f50ad" address="unix:///run/containerd/s/39afb9c182298b6a26224106da27919775ae333ba66a1e76476656083ef4c47a" protocol=ttrpc version=3 Sep 4 15:46:48.236992 
systemd[1]: Started cri-containerd-b41a406821fa9d6e598cae046d4e8e58c2394bb786d3472b04230ca2623f50ad.scope - libcontainer container b41a406821fa9d6e598cae046d4e8e58c2394bb786d3472b04230ca2623f50ad. Sep 4 15:46:48.266886 containerd[1513]: time="2025-09-04T15:46:48.266841592Z" level=info msg="StartContainer for \"b41a406821fa9d6e598cae046d4e8e58c2394bb786d3472b04230ca2623f50ad\" returns successfully" Sep 4 15:46:48.322412 containerd[1513]: time="2025-09-04T15:46:48.322361218Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b41a406821fa9d6e598cae046d4e8e58c2394bb786d3472b04230ca2623f50ad\" id:\"fbeaafa77d55a545a602e35f3d5646746e686a687eba7380c62b29847e5318fd\" pid:4739 exited_at:{seconds:1757000808 nanos:320632212}" Sep 4 15:46:48.524851 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 4 15:46:49.170991 kubelet[2660]: E0904 15:46:49.170880 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:46:49.187685 kubelet[2660]: I0904 15:46:49.187606 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-n88c4" podStartSLOduration=6.187587336 podStartE2EDuration="6.187587336s" podCreationTimestamp="2025-09-04 15:46:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 15:46:49.186406852 +0000 UTC m=+83.405209029" watchObservedRunningTime="2025-09-04 15:46:49.187587336 +0000 UTC m=+83.406389513" Sep 4 15:46:50.191994 kubelet[2660]: E0904 15:46:50.191952 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:46:50.412681 containerd[1513]: time="2025-09-04T15:46:50.412623812Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"b41a406821fa9d6e598cae046d4e8e58c2394bb786d3472b04230ca2623f50ad\" id:\"8e7e40876dc8260efb079fa7f2e91bfa026374390f0a1fa6cef5db5cd09ea26f\" pid:4906 exit_status:1 exited_at:{seconds:1757000810 nanos:411056447}" Sep 4 15:46:51.382989 systemd-networkd[1422]: lxc_health: Link UP Sep 4 15:46:51.384016 systemd-networkd[1422]: lxc_health: Gained carrier Sep 4 15:46:52.195596 kubelet[2660]: E0904 15:46:52.195532 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:46:52.591531 containerd[1513]: time="2025-09-04T15:46:52.591486427Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b41a406821fa9d6e598cae046d4e8e58c2394bb786d3472b04230ca2623f50ad\" id:\"6cc6fc28ca1f3025649cd1c6de48d8ba77376367caf47851bd24a7a96cbb869e\" pid:5276 exited_at:{seconds:1757000812 nanos:591065106}" Sep 4 15:46:53.182321 kubelet[2660]: E0904 15:46:53.181399 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:46:53.411954 systemd-networkd[1422]: lxc_health: Gained IPv6LL Sep 4 15:46:54.183101 kubelet[2660]: E0904 15:46:54.183068 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:46:54.704088 containerd[1513]: time="2025-09-04T15:46:54.704048805Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b41a406821fa9d6e598cae046d4e8e58c2394bb786d3472b04230ca2623f50ad\" id:\"aa12a1bbb19d562298e4c9bd81d5a4e218889635be0bce29b54da72f062fc773\" pid:5306 exited_at:{seconds:1757000814 nanos:703554843}" Sep 4 15:46:55.862427 kubelet[2660]: E0904 15:46:55.861943 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:46:56.805323 containerd[1513]: time="2025-09-04T15:46:56.805278427Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b41a406821fa9d6e598cae046d4e8e58c2394bb786d3472b04230ca2623f50ad\" id:\"58fbe713267475623b48c678e8df617e068de5920927b93f9fa66513c9029d21\" pid:5336 exited_at:{seconds:1757000816 nanos:804665225}" Sep 4 15:46:56.812587 sshd[4475]: Connection closed by 10.0.0.1 port 45066 Sep 4 15:46:56.813055 sshd-session[4467]: pam_unix(sshd:session): session closed for user core Sep 4 15:46:56.816702 systemd[1]: sshd@25-10.0.0.39:22-10.0.0.1:45066.service: Deactivated successfully. Sep 4 15:46:56.818313 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 15:46:56.818949 systemd-logind[1492]: Session 26 logged out. Waiting for processes to exit. Sep 4 15:46:56.819806 systemd-logind[1492]: Removed session 26. Sep 4 15:46:57.862087 kubelet[2660]: E0904 15:46:57.862045 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"